
Plugged in, yet disconnected: Analyzing the future of Internet use and the digital divide

By Brook R. Corwin Oct. 28, 2009


Introduction: the Internet's rapid rise

Following its inception in the late 1960s, the Internet served an extremely limited group of users. Decades passed before that group expanded beyond military officials and scientific researchers. Even as it became accessible to a mass audience in the late 1980s and early 1990s, the Internet and the personal computer reached only a small subset of the population, one that was highly educated, mostly male and privileged enough to be among the select few with access to online networks.

Momentum eventually turned to the masses, however, at least within developed nations. Graphical user interfaces and more advanced operating systems made it possible to use computers with little or no programming experience. Barriers to Internet access began to fall while the learning curve for using the technology dropped precipitously. Since then, Internet use has risen at an incredibly fast pace. In developed nations, the Internet is now ubiquitous, and it is relied upon for essential everyday tasks such as conducting business transactions, mapping directions, researching information, making reservations, sending correspondence and even finding health advice. Developing nations are also seeing rapid increases in Internet use, particularly in cities, while select rural areas have been targeted by government or non-profit entities with a mission of connecting those regions to a high-speed network. There are still billions without Internet access, but far more users are being added than lost. At this rate, it will soon be impossible to compete as a business, or even as an individual employee, without a high-speed Internet connection.

Given this decades-long trend of expanding Internet access and advancing computing technology, some have postulated that in the near future Internet connectivity will be universal, a right enjoyed by all regardless of location or education level. For many this would represent the end of the digital divide, a long-discussed issue referring to the gap between those connected to technology and those who lack those resources. Given the tremendous energy expended by some to expand high-speed broadband networks, and the absolute necessity of such infrastructure in order to compete economically, it is feasible to conclude that the age of divided access will soon be a problem of the past.

But will universal Internet access truly equate to universal benefits? Studies on Internet use reveal a wide range of ways that people utilize the technology, and not all have a positive impact on their livelihood. Once the divide in Internet access is bridged, there are no guarantees that all of society will be able to afford the tools necessary to maximize the Internet's potential, or that they will possess the education needed to make sense of how the online world can enhance the physical world. Will population segments log onto the Internet for the first time only to waste their hours playing games or absorbing inaccurate information? How many will actually incorporate web tools into their education, employment and personal tasks? Will navigating the Internet in the future require not just high-speed access but also expensive hardware and software only a select few can afford?



This paper will briefly survey the remarkable pace of rising Internet connectivity, while also examining the barriers that remain and the efforts underway to ensure universal access. It will then look at the more intangible factors of the digital divide, those not easily measured by usage rates or network speed, to see how issues of education and affordability might still create a sizable gap between the technology haves and have-nots even in a world where everyone is equal in terms of connectivity.

A steady stream of new users

After relatively slow growth during its first few decades of existence, the Internet's reach has expanded at an incredible pace across the globe. It continues to pick up millions of new users each week, with the most rapid growth in regions of the world where Internet penetration is lowest. The most recent world Internet usage statistics, tabulated this year by Miniwatts Marketing Group, show 362 percent growth in the number of Internet users worldwide from 2000 to 2009 (Internet World Stats). Africa and the Middle East had the fastest growth during the period, each expanding their tally of Internet users by more than 1,300 percent.

For the entire world, Internet penetration remains a little less than 25 percent, a low figure given the technology's importance in developed regions. But the growth rates show that this figure is likely to rise and keep rising in the near future, particularly as Asia becomes more developed and connects its sizable population. Asia has the most Internet users with 704 million, yet that figure represents less than 19 percent of its population. The continent's most populous nations, China and India, are rapidly becoming more developed, which statistics show is a strong predictor of future Internet penetration. Even in North America, the most developed and connected continent in the world, 26 percent of residents still don't use the Internet, indicating plenty of room for growth. On a global level, it is undeniable that the trend is toward more access for more people. The graphics on the next two pages illustrate Internet use by continent and demonstrate how the fastest growth is occurring in areas with the smallest current penetration, an indication of more growth to come.



A more detailed analysis of Internet access shows that the rate of growth isn't universal among all demographic groups. While some groups have quickly achieved access, others have lagged behind in their adoption rates. A large body of research points to a nation's industrialization and median income as strong indicators of its IT penetration. A 2005 study of Internet access in 40 countries over a 16-year span (Across the Digital Divide) shows that income closely correlated with IT penetration, yet the study also documented evidence that the divide is narrowing in terms of PCs, mainframes and the Internet. Much of this is due to advancements in technology that enable broader levels of access. A similar study done a year earlier (The Determinants of the Global Digital Divide) looked at a number of demographic and socioeconomic factors in Internet and computer use in 161 countries. It found that age, education level and income were statistically significant variables correlating with Internet use.

Gaps in Internet use that fall along racial and ethnic lines are also documented. A recent survey conducted by the Pew Internet & American Life Project homed in on the age gap (Broadband Now). Its findings demonstrated a considerable generational divide, with just 30 percent of Americans 65 and older using broadband Internet, compared with 77 percent of 18-29 year olds. Other reports focus on race and ethnicity in documenting divisions in use. A 2009 study of students at 40 U.S. institutions of higher education (U.S. College Students' Internet Use) found that minorities were significantly less likely to use the Internet for their studies. In 2007, the Pew Internet & American Life Project looked at Internet use among Hispanics and blacks within the general U.S. population (America's Digital Divide Narrows). Both groups lagged behind whites by at least 10 percentage points, but the gap was narrowing. A 2008 study comparing the Internet technology use of immigrants and natives in the U.S. also presents evidence that minorities lag behind their Caucasian counterparts when it comes to this technology (Immigrants, English Ability and the Digital Divide). The study showed that English ability, and whether the language was spoken at home, was a key factor in Internet use.

Such research indicates that if universal or near-universal Internet access is ever achieved, all groups will not reach it simultaneously. Statistics back up this theory, particularly with regard to income. The results of a 2007 study by the University of Washington show that only 37 percent of students whose families make less than $20,000 had Internet access at home, compared to 88 percent of students from families making more than $75,000 (A New Ruler for the Digital Divide). The 2007 Pew study shows that 90 percent of college-educated adults regularly use the Internet, regardless of ethnicity and race (America's Digital Divide Narrows). Based upon these findings, it's likely that certain demographics will be the first to achieve universal Internet access, while minorities, immigrants and those with less income or education will take considerably longer to reach this status. The generational gap may take longer to bridge, as current senior citizens may never adopt the Internet in large numbers. As this generation dies out, however, it will be replaced by a new generation that is already largely familiar with the Internet and its many uses.



Disparities in the quality of access

Measuring access isn't as simple as answering a yes-or-no question. The difference between a slow dial-up connection and a high-speed broadband connection is incredibly influential in how, and for what purposes, the Internet can be used. Even if an entire nation has Internet access, differing connection speeds segregate groups by the tasks they can perform online. The University of Washington study pointed out the flaws of measuring Internet access through an all-or-nothing model, arguing that a more sophisticated approach is needed that takes download speeds into account (A New Ruler for the Digital Divide). Simple emails may work over slow dial-up connections, for instance, but those connections effectively render online video, audio, animation and high-definition graphics off-limits. With most websites now including these components, anyone without the bandwidth to access them is effectively divided from the general public.

Recent research makes it clear that there is a wide disparity in connection speeds even within developed nations such as the U.S. Population density is the biggest factor in this disparity, as rural areas offer little incentive for private Internet service providers to build the necessary infrastructure. The profit margins for building high-speed networks are small or non-existent if only a few thousand people will utilize them. A recent report by the Communications Workers of America measures this disparity. It found that the U.S. as a whole lagged well behind many other developed nations with an average download speed of 5.1 megabits per second (Internet Speeds Vary Across the USA, Leaving a Digital Divide). States in the Northeast or mid-Atlantic typically had speeds faster than this average, led by Delaware's 9.9 megabits per second. But in southern states such as Arkansas, Mississippi and South Carolina the average speed was below 4 megabits per second. Western states also fared poorly, with Alaska and Idaho the two slowest states at download speeds of 2.3 and 2.6 megabits per second, respectively. Across the country, only 38 percent of rural households have broadband access, compared with 57 percent of urban households and 60 percent of suburban households. Only 20 percent of households had speeds in the range of the top three countries for broadband connectivity: South Korea, Japan and Sweden. The graphic on the next page illustrates the considerable differences in download speeds by state.



While some may debate whether universal access is an inevitable destination or a daunting challenge to undertake, few would dispute that Internet connectivity is absolutely essential in order to compete on equal footing in the workplace. Many also emphasize the wealth of non-professional tasks and functions migrating online, making it essential for even a stay-at-home parent to utilize the Internet in order to live a productive and fruitful life. Without the Internet, an individual or organization is at a severe disadvantage on several levels, a gap that will only grow in magnitude as access becomes more common. These were among the conclusions of a 2005 report from the office of the deputy prime minister of the United Kingdom (Inclusion Through Innovation: Tackling Social Exclusion Through New Technologies). Rural school systems also suffer, unable to teach basic computer skills because some of their schools lack broadband connections. A 2008 study looking at race and gender differences in Internet use as a predictor of academic success found that those who actively utilize the technology were significantly more likely to do well academically, indicating the importance of the digital divide as an educational issue (Race, Gender and Information Technology Use: The New Digital Divide).

The digital divide also makes an economic impact. The Communications Workers of America report estimates that for every $5 billion invested in expanding broadband infrastructure, more than 97,000 new jobs are created in the telecommunications, IT and computer sectors (Internet Speeds Vary Across the USA, Leaving a Digital Divide).



Rural communities struggling to recruit new businesses and industries can't lure new companies if there's no established broadband network in place for them to utilize, since that level of connectivity is essential for businesses to compete in a national and international marketplace. It's no coincidence that the demographic groups identified by Pew as most lacking in Internet access (older, less educated, of minority status) closely correlate with the demographic groups that have the highest unemployment rates.

When Internet access is improved, on the other hand, new choices and opportunities emerge for the most distressed communities and individuals. Washington State University recently implemented a digital inclusion project that used grants to establish technology centers in some of the state's most rural areas (Division of Governmental Studies and Services at WSU: Digital Inclusion). Its research tracking how the centers were utilized showed that a majority of users leveraged the technology for educational purposes, even more so than for social purposes. A significant portion also used their access to search for employment, fill out an online application or learn new software skills vital to the workplace. Many took online classes, workshops or tutorials. Some even completed entire diploma, certificate or degree programs. These tasks, all absolutely essential to remaining economically competitive, would have remained off-limits without the public resources to bring Internet connectivity to places where it didn't exist.

It's this sentiment that fuels passionate and highly organized initiatives to bring broadband access to the places where it's needed the most. In developed nations such as the United States and those in Western Europe, that means reaching rural areas currently lacking access. According to Connected Nation, a Washington, D.C.-based non-profit organization, a 7 percent increase in broadband penetration could stimulate the economy by more than $134 billion (Rural Americans Long to be Linked). Benefits would accrue from the creation of new jobs, more efficient e-commerce transactions and reduced travel expenses. "As a country, we're basically punishing people for living where they want to live," Vince Jordan, CEO of Colorado-based Internet carrier Ridgeview Telephone, said in an interview for the USA Today article. Jordan estimates it's not uncommon for service providers to charge several hundred dollars for broadband installation, pricing out many rural residents, compared to free installation in most urban markets.

With the private sector uninterested in tackling the issue of rural broadband on its own, government entities and non-profit organizations are stepping up to meet this critical need. The Federal Communications Commission is providing an overall framework, and much of the monetary resources, for this effort. Earlier this year the FCC announced that $7.2 billion of the American Recovery and Reinvestment Act is set aside specifically for projects that expand broadband access. The FCC is also working on a report to Congress, to be delivered early next year, on how to reach the ambitious goal of 100 percent broadband connectivity in the United States. Many of the funds are targeted for community-based Internet providers knowledgeable about the needs of their particular markets.
The executive director of one such provider, Wally Bowen of Mountain Area Information Network in Asheville, NC, called this approach “historic” in a recent interview with “Democracy Now” (Bridging the Rural Digital Divide: FCC Starts Work on National Broadband Strategy).



Bowen says non-profit organizations are uniquely equipped to maximize the reach of broadband access since they are motivated primarily by the overall health of their communities. “The $7.2 billion … is just an historic opportunity for the creation of local networks,” Bowen said. “It’s moving away from the kind of absentee-owned networks that we’ve suffered under all these decades that are beholden to Wall Street and not beholden to our communities.” Bowen said advances in technology are making it much easier for community-based telecommunications companies to deliver high-speed Internet. The financial incentives offered through this part of the federal stimulus package could encourage more to begin offering this service, thereby improving the odds that even the most rural areas will have a company committed to giving them broadband access. “It’s just critical that we find alternative revenue streams, and here comes along the stimulus package, which is presenting that opportunity,” Bowen said. “One thing I’m concerned about is that—a lot of our colleagues in the media reform movement think that becoming an ISP is like rocket science. It’s not.”

The European Union has embarked on a similar initiative this year, outlining regulations that would provide public aid for IT infrastructure projects in areas where broadband access does not exist (EU Seeks to Close Digital Divide with Broadband Aid). Urban areas with multiple Internet service providers would have a much more difficult time receiving this aid. Recipients of the aid could not favor any particular technology and would have to allow competitors access to their networks. There has been much debate over whether excessive regulation will stimulate or choke private investment on the continent. In a news analysis article published earlier this year, telecommunications industry consultant Keith Mallinson argues against excessive licensing fees for broadband expansion projects (Digital Dividend Bounty Can Close Digital Divide). Mallinson said the continent has plenty of competing service providers that can drive down costs and rapidly expand networks, with or without incentives, provided the regulations imposed by the EU don't make the effort cost-prohibitive. The 2005 study of Internet use in 40 different nations (Across the Digital Divide) recommends that policymakers give free rein to IT innovators, including reducing fees and tariffs and deregulating the sector.

Like the FCC, the EU has stated a goal of universal broadband access within the next few years. The initiative could prove a daunting task: a commission charged with studying the issue estimates that it will cost between $295 billion and $443 billion to build all the necessary networks. Yet it's a target that many European leaders seem committed to reaching. In Finland, for example, universal broadband access will be implemented by a law that goes into effect next year, making the nation the first to recognize Internet connectivity as a right for all citizens (Broadband Now).

New technologies boosting connections

Thankfully, advances in telecommunications technology are making it easier to reach rural areas without installing massive amounts of fiber-optic cable. Wireless networks can be delivered through a variety of channels that are cheaper and easier to deploy than cables (Rural Ops Bridge the Digital Divide). Mallinson calls the mobile networking industry a rare “bright spot in the current economic gloom.”



Among the most promising of these channels is the use of “white space,” the empty fragments of broadcast television spectrum scattered between frequencies. More of this space has recently become vacant with the switch from analog to digital television, opening a new door for delivering broadband access (First “White Space” Network Launched). These frequencies are often capable of traveling much greater distances than those used for traditional wireless Internet. They are still regulated by the FCC, however, since improper use can interfere with existing broadcasts. Until last year the FCC prohibited use of these frequencies outright, and it is only now opening them up on a case-by-case basis. A concerted, cooperative effort linking state and federal governments is therefore necessary to put white space to use as a provider of broadband connections.

Such an effort took place this year in Patrick County, located in a rural section of southwest Virginia. A special task force that had unsuccessfully worked for years to convince private Internet service providers to come to the county was able to secure an experimental license from the FCC for “white space” frequencies, with a big assist from local Congressman Rick Boucher. Grants are paying for a service provider, Spectrum Bridge, to utilize the network to deliver high-speed Wi-Fi at no cost to residents. Heading up the initiative at the local level is Roger Hayden, chairman of the Patrick County Broadband Task Force. In a recent interview, Hayden said the effort is one of absolute economic necessity for his community. “High Speed Internet connectivity gives us the tool. Without it we will be left behind in jobs, education and quality of life,” Hayden said. “If we do nothing but wait for something to fall into our laps, then we will fail. Living standards have suffered due to our failure of providing high-speed Internet to all our citizens.” If the network continues to operate without glitches, it could serve as a model for other rural communities to implement this much-discussed and now demonstrated option for wider high-speed access. “The white space wireless I believe is and will be an excellent way to provide connectivity,” Hayden said. “In the near future if the FCC goes along with the white space technology spectrum, then we should see a revolutionary change in speed and connectivity.”

The advances in telecommunications technology aren't limited to infrastructure and connectivity. The hardware needed to access the Internet is becoming faster, cheaper and more mobile each year. The blending of these two paths of technological progress has resulted in explosive growth in data networks accessed by smart phones and other small, mobile devices. These networks, and the equipment that taps into their power, can be ideal for rural areas where installing physical infrastructure is cost-prohibitive. A 2003 report issued by the United Nations, written for the World Summit on the Information Society, repeatedly emphasizes the potential of wireless Internet to reach the most rural areas (Closing the Digital Divide: What the United Nations Can Do). The report cites a pilot program in the medieval town of Zamora, Spain, which connected 68,000 users at half the cost of dial-up access and many times its speed. In the years since the report, wireless technology has opened up many new options for Internet access. While smart phones able to access the Internet are still used by a minority of the U.S. population, they are becoming increasingly common and more affordable.
A 2009 study by the Pew Internet & American Life Project shows that one-third of Americans have a mobile phone they use to access the Internet for information and email. In just 16 months, daily use of these devices has grown by 73 percent (Wireless Internet Use).



This growth has pushed network carriers to expand the reach of their services so customers can remain connected as they travel. It has also led to significant growth in the sale of netbooks, small and easily portable laptops capable of accessing the Internet. While these devices lack the computing power to run complex software, they do open up new opportunities for Internet connectivity regardless of location, all at an affordable price. When Susannah Fox, Pew's associate director for digital strategy, testified in Washington at a forum held as part of this year's One Web Day events, she stressed the considerable potential for mobile devices to be the future of Internet usage (A Public Interest Internet Agenda). Fox noted that 56 percent of Americans already utilize wireless Internet on some device, be it a laptop, mp3 player, game console or cell phone. “Keep your eye on mobile adoption since ‘always connected’ citizens are likely to be at the forefront, navigating the new health care delivery system and taking advantage of opportunities for political participation,” Fox said. “Pew Internet research shows that mobile could be a game-changer, but only for those who get in the game.” The graphic below illustrates some of Pew's findings with regard to wireless Internet, showing significant growth in the use of mobile devices.



Years before the netbook was marketed commercially to the general public, the concept was developed to connect children in some of the world's most remote areas to the Internet. The non-profit organization One Laptop per Child has placed more than 1 million mobile computers in the hands of children in 31 countries, and much of its success stems from its pioneering design of a low-cost, low-powered machine ideal for mass distribution to impoverished areas. In a recent profile of Mary Lou Jepsen, the organization's original chief technology officer (The Pioneering Designer of the First Cheap Laptop for the Developing World), this design is credited with starting the current netbook craze. Jepsen is quoted in the article touting the computers as versatile portals to global knowledge. “The world’s information is digital,” Jepsen said. “The web, the news, all of that is digital. And now . . . we have ten million books scanned. That was the last bastion of what was offline; it’s now online and accessible.” The price of these computers is already below $200 each and poised to go much lower, putting the technology in place not only to give the entire world broadband access, but also to give everyone the tools needed to leverage the Internet to their advantage.

The organization has put the state-of-the-art laptops at the center of its campaign for bridging the digital divide, noting that they have screens specially designed to remain visible even when used outside in bright sunlight. The entire device uses just 2 watts of power, roughly equivalent to what can be generated by upper-body strength. Perhaps most impressively, the machines can network with each other, so even if some of them lose their Internet connections, a user won't be kicked offline so long as a nearby computer remains connected. Nicholas Negroponte, the organization's executive director, addressed these developments during a speech at a TED conference in late 2007 (Negroponte on One Laptop Per Child, Two Years on). Negroponte said these computers, through their internal networks, could overcome the hazards of remote conditions that once blocked Internet access for undeveloped countries. “When we drop these laptops into the world, they’re connected,” he said. “If you’re in a desert, they can talk to each other from up to two kilometers apart. In a jungle it’s 500 meters away. You don’t call Verizon or Sprint. You build your own network.”

The glaring educational gap

For all the much-deserved praise heaped onto organizations big and small that expand Internet connectivity and access to computer technology, there is justified criticism that these efforts address only part of a much more complex equation. One Laptop per Child's efforts were initially derided by the likes of Steve Jobs and Bill Gates, along with numerous public officials, for treating technology only as a resource and not as a skill that needs development (The Pioneering Designer of the First Cheap Laptop for the Developing World). With enough effort and resources, it is possible to ensure that everyone has a high-speed Internet connection and a computer, but that doesn't automatically mean that everyone will utilize the technology to their benefit. Basic computer literacy is far from universal, particularly among certain socioeconomic groups.


Even those who can operate a computer may lack knowledge of web-based tools and techniques for conducting business online, sorting through information on the web or communicating with others. Mobile devices, often cited as important tools in bridging the digital divide, can be extremely difficult to use for those unfamiliar with technology. A recent study by the Pew Internet & American Life Project shows that only 39 percent of adults have positive feelings about their mobile devices. The remaining 61 percent have trouble coping with all the features and options of their mobile devices, including broadband Internet connections (Pew Highlights Digital Divide on Mobile Devices). Unproductive use of the Internet may be simple, but effectively leveraging the technology for personal or professional advancement requires a degree of experience and expertise that goes well beyond simply having access to a computer and an Internet connection.

A 2007 study comparing Internet use among students in high- and low-resource schools demonstrates this educational divide (Redefining the Digital Divide). All the schools in the study gave their students access to computers and the Internet. But only the high-resource schools matched that access with teachers who integrated the technology into the classroom. These innovative teaching strategies, coupled with the fact that a greater percentage of students in the high-resource schools had Internet access at home, made for students who engaged more actively with the technology. There is ample evidence that children from disadvantaged economic backgrounds are far less likely to have Internet access at home. In a 2008 article (Bridging the Digital Divide) several scholars on the issue argue that this results in gaps in IT knowledge regardless of the resources at school. “Digital inclusion is not simply about access to technology, but also meaningful access, technical skills and information literacy,” the article states. The 2007 Pew Hispanic Center study, while noting a narrowing gap between whites and minorities in Internet access, makes a point of demonstrating how many Hispanics are missing out on the full range of the Internet's benefits (America's Digital Divide Narrows). As opposed to using the Internet to create and shape content, many in this group can only use it to access information, either because they lack the expertise for more hands-on engagement or because their Internet connection is too slow to support multimedia interactivity. Pew Associate Director Susannah Fox refers to this as the “digital dimmer switch,” a new way of looking at the digital divide that acknowledges that many people online are still stuck on Web 1.0.

In their comprehensive survey of previous research into the digital divide, compiled in 2005, Frederick Riggins and Sanjeev Dewan note the relatively little attention given to inequalities in the ability to use Internet technology among those who have access (The Digital Divide: Current and Future Research Directions). This creates three kinds of gaps: at the individual level, where some people are disadvantaged compared to others; at the organizational level, where some companies can't compete economically; and at the global level, where entire nations are handicapped because they lack expertise in utilizing technology. The authors view the digital divide as encompassing not one but multiple divisions, including a skills divide, information divide, economic opportunity divide, democratic divide and e-commerce divide. These gaps, they reason, aren't automatically solved with wider access.
Programs or websites designed to reach a particular ethnicity or socioeconomic group may not translate universally, with the unfamiliarity of these interfaces alienating minority groups and actively discouraging future Internet use.



Barriers also arise when the only Internet access available is through public places such as schools and libraries. These settings are often not ideal for engaging users in tasks such as e-commerce, networking and political participation, where privacy is an important consideration. “Providing public access to PCs and the Internet through schools, public libraries, and community centers is considered one of the most relevant approaches to bridging the digital divide,” the authors state. “However, it is not clear how effective this approach is for actually overcoming many of the barriers for the disconnected.” A similar conclusion is reached by a 2006 study on the measurements used to quantify the digital divide (Gaps and Bits: Conceptualizing Measurements for Digital Divides). This report is sharply critical of policymakers who rely on simplistic measurements such as Internet penetration, arguing instead that a more comprehensive analysis is needed to examine how the Internet is used by different families within the context of their unique political and social environments. The authors of the 2005 study recommend further research that targets how individuals and businesses use technology, not just whether they have access (The Digital Divide: Current and Future Research Directions). Also needing closer examination are the policy implications of programs attempting to address the digital divide, and whether they are improving the skill set needed for productive use of Internet technology.

Individual case studies show that technology without accompanying education produces a digital divide almost as profound as when the technology was never available in the first place. One of the most compelling of these cases comes from an article by Mark Warschauer (Demystifying the Digital Divide), in which the communications scholar describes a program to construct Internet kiosks in extremely rural areas of India. Some of these kiosks were poorly maintained by technicians, lacked instructions in Hindi or came with no instructions at all. These kiosks ended up making little impact in the villages, and some were rarely used. But when the kiosks were regularly serviced and loaded with content created from an analysis of the community's social and economic needs, results improved tremendously. Farmers were able to learn the updated prices of popular crops so they knew what to grow and harvest, while villagers could enjoy improved government services by being able to instantly make complaints and requests using the kiosk.

Warschauer also provides a compelling example of online advanced placement courses established in California to serve students in low-income areas whose schools couldn't afford to offer the classes. Because most of the students in these areas had little experience with computers, many struggled and dropped out of the courses. After the program was revised to incorporate face-to-face instruction on the technology, the success rate tripled. Further research indicated that students were using their Internet connections at school for different purposes based upon their socioeconomic status. Poor students typically engaged in less challenging computer exercises that didn't stimulate their understanding of the technology and did little to prepare them for professional use of the Internet.
Wealthy students, on the other hand, used the computers for experiments and critical engagement, foreshadowing a deeper level of expertise with the technology that would serve them well after graduation. From this considerable body of research, Warschauer concludes that simply dropping technology into an impoverished area isn't nearly enough to bridge the digital divide.



“Realizing this objective involves not only providing computers and Internet links or shifting to online platforms but also developing relevant content in diverse languages, promoting literacy and education, and mobilizing community and institutional support toward achieving community goals,” he states. “Technology then becomes a means, and often a powerful one, rather than an end in itself.”

The focus on education can't begin and end with those at the primary or secondary school level, particularly with so many technologically illiterate adults seeking fulfilling employment in a digital age. Expanded Internet access must be accompanied by adult-level training to maximize the technology and lift workers up the knowledge ladder. This point is emphasized by Dr. Esther Johnson, national director of the federal Job Corps program. Johnson was recently named a “Champion of Digital Literacy” by the Certiport Corporation, which annually honors the achievements of individuals whose commitment and efforts have brought about the adoption of digital literacy standards. In a video interview conducted in conjunction with the honor, Johnson talks about the absolute necessity of integrating digital certificate programs within the overall framework of her agency's mission (Certiport Champions of Digital Literacy 2009). “We’re in the 21st century. What worked for the Job Corps program when it started doesn’t work anymore. We had to bring our young people’s education and training standards up,” Johnson said. “It’s so important that they are technologically astute and keep their skills upgraded. That’s what’s so important. Software is changing all the time and what you learned 5 years ago is not applicable anymore.” Johnson was honored for implementing a rigorous certification program in computer software and Internet technology as part of the Job Corps program. Johnson said the stakes are too high, and the job market too much in flux, to rely solely on old approaches to training. Students not only must grasp current software and be able to apply it to professional tasks, they must also develop the capacity to quickly pick up new kinds of software and adapt to changes in technology even after their careers are established. “I never imagined that the world of work would change so much in 20-plus years as a result of technology. When they get out in the workplace, they’re going to have to use the applications of Microsoft Office,” Johnson said.

Out in the workplace, where an increasing number of jobs are web-oriented, it's painfully evident that a large segment of the population still lacks the skills and training to effectively maximize Internet resources. Nick Deamons, a web developer for national marketing firm Engauge, views such knowledge as a social imperative that requires comprehensive public action in order to keep adults and children alike from being left behind. “A high disparity of basic computing knowledge already separates American people into have and have-nots,” Deamons said in a recent interview. “If the main concern here is equality of knowledge and societal capability, then neglecting to provide basic services to the have-nots is an essential task for completion. Some would argue that the dissemination of information across many peoples is something to be regarded as a cultural necessity for equality.”

In tackling the education question, One Laptop per Child places the focus on well-designed software that encourages active participation by the user.
As the organization states on its website (www.laptop.org), children learn most from doing, as evidenced by research from noted epistemologists.



“Thus OLPC puts an emphasis on software tools for exploring and expressing, rather than instruction,” the organization states. “Love is a better master than duty. Using the laptop as the agency for engaging children in constructing knowledge based upon their personal interests and providing them tools for sharing and critiquing these constructions will lead them to become learners and teachers.” Negroponte addressed the issue in more forceful terms during his TED speech, saying that active engagement by children with their computers constitutes meaningful instruction in itself (Negroponte on One Laptop Per Child, Two Years on). “Kids who write computer programs understand things differently, and when they debug the programs they come the closest to learning about learning,” he said. “In some sense we’ve lost that. Kids don’t program enough. If there’s anything I hope this brings back it’s programming to kids. It’s really important. Using applications is okay but programming is absolutely fundamental.”

However, it's unclear whether self-directed learning on its own is enough to help an entire generation of educationally impoverished children catch up with their counterparts in the developed world. Basic computer literacy, much less computer programming, needs a starting point and a solid educational foundation in order for children to be guided by their own curiosity and intuition. In criticizing the organization's aims in 2007, then-Nigerian education minister Dr. Igwe Aja-Nwachukwu offered a blistering assessment that nonetheless must be taken to heart by any effort that tackles the digital divide simply through the lens of Internet and computer access (The Pioneering Designer of the First Cheap Laptop for the Developing World). “What is the sense of introducing One Laptop Per Child when they don’t have seats to sit down and learn,” Aja-Nwachukwu said, “when they don’t have uniforms to go to school in, where they don’t have facilities?”

There are no easy answers to such questions, particularly since each community has its own set of technological and educational needs that require personalized solutions. Roger Hayden, who spearheaded the effort to establish a “white space” network in rural Patrick County, VA, says both the public and private sectors must play a role in tackling the issue. Neither has the resources nor the incentive on its own to address the many nuances of a complex issue whose challenges differ from community to community. “Private (companies) and government should work together for the people,” Hayden said. “My philosophy is to improve one community at a time through technology. This means to me that one should determine what is the best way to serve their area. All are different with varying resources.”

Conclusion: a new kind of divide emerges

While it is impossible to predict for certain whether the ongoing trend toward universal Internet access will continue or grind to a halt in the coming years, data from the last 50 years of information technology indicates that this growth will continue. There are already tremendous resources and human capital committed to building the telecommunications infrastructure needed to bring Internet access to the most remote areas. Technology is making it easier than ever to do so, and the work of non-profit organizations and government entities indicates that this is a cause that transcends commercial motives. It has become widely recognized that a secure, high-speed Internet connection is essential, and that recognition fuels projects for universal access as a humanitarian cause for social good, with ultimate goals that are in everyone's best interest.


Combine that energy with the ever-growing commercial market for mobile devices, a market stimulated by advances in wireless technology that make it cheaper to expand a network's reach, and you have a potent force for spreading Internet access to even the most remote corners of the globe. With so much momentum in favor of Internet connectivity for all, it appears to be a goal we will see achieved in our lifetimes.

But this achievement, while a laudable first step toward equal opportunity through technology, will not by itself bridge the digital divide. The crucial variable, and the one for which there is the most uncertainty, is education, and whether adequate resources will be allocated to cultivating expertise in utilizing the Internet. This will take an integrated approach in public schools to mesh computers with the curriculum. In nations without well-established school systems, non-profit entities will have to fill the instructional void and provide guidance as those who have never operated a computer learn for the first time how to tap into the Internet's enormous potential for bettering their lives. With so much attention and so many resources focused solely on the connectivity side of the digital divide, there currently exists little initiative to bridge the educational divide. The energy and momentum behind connectivity might transfer over to education once universal access is reached, but this is by no means a sure thing, and even in a best-case scenario a digital divide is guaranteed to persist well into the future unless a massive investment in technology training and instruction takes place during the next decade.

The digital divide of the future, therefore, will be considerably different in nature from the digital divide that exists today. People will no longer be automatically handicapped with regard to technology based on where they live, how much they earn or what ethnic group they belong to. The gap will instead be one of knowledge, as those with expertise and training in technology will be able to reap its benefits from anywhere in the world, while those without the means to learn will find the technology far from helpful on a practical level. Each new development in the Internet will be enjoyed and leveraged by only a select group of users, the same group that will shape new content and be catered to in new product development. This makes it more imperative than ever that school systems, charitable projects and government programs aimed at the less fortunate all place a heavy focus on training in computers and technology, with special emphasis on how the Internet can be practically applied to everyday tasks. Otherwise only a select segment of the population will leverage their high-speed access into more than just a distraction, and the digital divide will carry on in a new form with inequalities as pronounced as in its original incarnation.



Bibliography

Foundational research

Riggins, Frederick J.; Dewan, Sanjeev (2005). "The Digital Divide: Current and Future Research Directions." Journal of the Association for Information Systems, Vol. 6, Iss. 12, Article 13. This article examines research regarding who has access to technology and inequalities in the ability to use the technology among those who do have access. The analysis is conducted at the individual, organizational and global levels. The article ends with suggested questions for additional analysis on the topic, making it a good overview of previous work that can serve as a platform for future research.

Office of the Deputy Prime Minister, London, UK (2005). "Inclusion Through Innovation: Tackling Social Exclusion Through New Technologies. A Social Exclusion Unit Final Report." This government report explores how digital inclusion can be used to make public services such as education, health and employment more accessible for socially excluded groups by utilizing the Internet and new technologies. It highlights the considerable benefits of a society not marred by a digital divide, with the potential to deliver services more efficiently and effectively.

Fors, Michael (2003). "Closing the Digital Divide: What the United Nations Can Do." UN Chronicle, No. 4, 2003, page 31. This article, written for the World Summit on the Information Society, discusses ways the United Nations and its members can narrow the digital divide, focusing on the factors of infrastructure, security, diplomacy and public/private partnerships. The article highlights the power of technology to boost the economies of developing nations, making a strong case for the importance of closing the digital divide for international prosperity. It also demonstrates the myriad political and cultural factors that influence the current and future size of the divide.

Guillén, Mauro F.; Suárez, Sandra L. (2005). "Explaining the Global Digital Divide: Economic, Political and Sociological Drivers of Cross-National Internet Use." Social Forces, Vol. 84, No. 2, Dec. 2005, pages 681-708.


This paper argues that cross-national differences in Internet use are the result of the economic, regulatory and sociopolitical characteristics of countries and their evolution over time. It predicts that Internet use will increase with world-system status, privatization and competition in the telecommunications sector, democracy and cosmopolitanism. It draws upon data from 118 countries as evidence in support of its hypotheses, with an optimistic view that the eventual bridging of the digital divide will boost the global economy and spread democracy across the globe.

Warschauer, Mark (2003). "Demystifying the Digital Divide." Scientific American, Vol. 289, Issue 2, Aug. 2003. The paper argues that the key issue of the digital divide is not so much unequal access to computers as it is inequality in how computers are used. It points out the disparity in high-speed Internet access along socioeconomic lines and highlights the range of disparities in different countries. This provides an overview to frame analysis of the digital divide and debate on the best possible solutions.

Kraemer, Kenneth L.; Ganley, Dale; Dewan, Sanjeev (2005). "Across the Digital Divide: A Cross-Country Multi-Technology Analysis of the Determinants of IT Penetration." Journal of the Association for Information Systems, Vol. 6, Iss. 12, Article 10. This paper studies the level of the digital divide among 40 countries from 1985-2001, based on data from three distinct generations of IT: mainframes, personal computers and the Internet. It contains an empirical investigation of the socioeconomic factors driving the digital divide. It reaches the conclusion that factors that may have widened the divide with earlier technologies are narrowing the gap as Internet penetration grows. This provides a good outlook on the future trajectory of the digital divide given expanding Internet access, with a lot of solid historical data to serve as a foundation.

Chinn, Menzie D.; Fairlie, Robert W. (2004). "The Determinants of the Global Digital Divide: A Cross-Country Analysis of Computer and Internet Penetration." Economic Growth Center, Yale University, discussion paper No. 881. This paper identifies the causes of cross-country disparities in Internet and personal computer use by examining 161 countries over a three-year period, looking at a variety of economic and demographic variables. It also evaluates infrastructure capabilities for each country. The findings demonstrate which factors are most crucial in bridging the digital divide, concluding that public investment in human capital, telecommunications infrastructure and regulatory infrastructure can all mitigate the technological gap.

Forward-looking research


Valadez, James R.; Durán, Richard P. (2007). "Redefining the Digital Divide: Beyond Access to Computers and the Internet." The High School Journal, Feb./March 2007, pages 31-44. This study looks at the digital divide in relation to high- and low-resource schools in the U.S. It examines the disparities in those schools as indicative of larger disparities in the U.S., focusing on the usage of computers rather than just access. It found that high-resource schools had teachers utilizing more creative ways to incorporate the Internet into classrooms. The findings provide support for a broader definition of the digital divide that includes the social and academic impact of the different ways the Internet is predominantly utilized by youth.

Jones, Steve; Johnson-Yale, Camille; Millermaier, Sarah; Pérez, Francisco Seoane (2009). "U.S. College Students' Internet Use: Race, Gender and Digital Divides." Journal of Computer-Mediated Communication, Vol. 14(2), Jan. 2009, pp. 244-264. This paper presents the results of a study on the impact race and gender had on Internet use among U.S. college students. The study surveyed college students at 40 institutions of higher education. Its results show a strong contrast based on race, although not gender, with regard to Internet use. The paper compares those findings to a survey of the general U.S. population.

Barzilai-Nahon, Karine (2006). "Gaps and Bits: Conceptualizing Measurements for Digital Divides." The Information Society, Oct. 2006. This paper criticizes policymakers who rely on simplistic measures for the digital divide related to general Internet access, instead proposing that a more thoughtful analysis and more comprehensive data are needed on how the Internet is being used by different families. It bases its analysis on the argument that networks and new technologies are not neutral artifacts but also political and social spaces.

Jackson, Linda A.; Zhao, Yong; Kolenic III, Anthony; Fitzgerald, Hiram E.; Harold, Rena; Von Eye, Alexander (2008). "Race, Gender and Information Technology Use: The New Digital Divide." CyberPsychology & Behavior, Vol. 11, No. 4, 2008. This paper presents research examining race and gender differences in the intensity and nature of Internet use to determine whether it predicted academic performance. It provides plenty of data on usage rates of IT among different ethnic groups, showing a gap between whites and minorities. Its finding that length of time using computers and the Internet was a positive predictor of academic performance indicates the importance of the digital divide as a socioeconomic issue.

Ono, Hiroshi; Zavodny, Madeline (2008). "Immigrants, English Ability and the Digital Divide." Social Forces, Vol. 86, No. 4, June 2008, pages 1455-1479.



This study examines the extent and causes of inequalities in information technology ownership between natives and immigrants in the United States, with a particular focus on the role of English ability. The results show a significant gap in IT access and use between natives and immigrants, with language spoken at home a key factor. Given the rise in U.S. immigration, the adoption of technologies by children of non-natives will play a key role in the depth of the digital divide going forward, so this study provides a firm background on why such families now languish on the low end of the divide.

Horrigan, John (2009). "Wireless Internet Use." Pew Internet & American Life Project, July 22, 2009. http://www.pewinternet.org/Reports/2009/12-Wireless-Internet-Use.aspx. This study looks at how many Americans are utilizing wireless devices to access the Internet. These mobile devices, including smart phones, laptops and game consoles, have shown remarkable growth in recent years and are seen as a way to bridge the digital divide, both because of their affordability and because of the widespread broadband networks that have sprouted up to serve them.

Recent news articles

Tucker, Patrick (2007). "A New Ruler for the Digital Divide." The Futurist, March-April 2007, page 16. This article proposes a new method for measuring computer literacy based on how individuals utilize the Internet rather than whether they merely have access to the web. It shows statistics that demonstrate the narrowing gap between those who actively use the Internet and those who don't, but the article also shows that there remains a large gap in who has access to high-speed Internet connections at home and not just at work or in the classroom. This is a key measuring stick, the article argues, since such home access is needed in order to enjoy the full benefits of the Internet and fully participate in shaping content online.

McDonald, Alyssa (2009). "The pioneering designer of the first cheap laptop for the developing world, she is determined to close the digital divide." The New Statesman, May 4, 2009, pages 36-37. This article profiles Mary Lou Jepsen, the chief technology officer of the non-profit organization One Laptop Per Child. The article details her efforts to design the cheapest, least power-hungry laptop ever produced. Her innovations have spurred advances in the computer industry that have made netbooks commonplace in mainstream markets.


Fox, Susannah; Jones, Sydney (2009). "Generations Online 2009." Pew Internet & American Life Project, Jan. 28, 2009. http://www.pewinternet.org/Reports/2009/Generations-Online-in-2009.aspx. This study focuses on the generational gaps that exist with respect to Internet and computer use. The research looks not only at overall use by age group, but also at what tasks each age group utilizes the Internet to accomplish. The study shows that there continues to be a gap, with fewer older Internet users, but that this divide is narrowing, with younger generations not necessarily dominating every facet of the web.

Stross, Randall (2009). "Broadband Now! So Why Don't Some Use It." New York Times, Oct. 17, 2009. This article takes data from various studies on Internet use across the globe to examine the limits of broadband penetration. It lays out the challenges of universal broadband access, questioning whether such a goal is even achievable given that some will always resist the Internet. The article also provides a useful overview of the initiatives underway at the government level to expand broadband access in the near future.

Holahan, Catherine (2007). "America's Digital Divide Narrows." Business Week Online, March 15, 2007. This article takes a look at Latinos and other minorities who are gaining Internet access but still missing out on the full range of benefits of being online. It cites a study showing a small disparity in Internet access between whites and minorities. That positive development is offset by data showing that minorities are not using the web in an interactive way; the article calls this gap, between those who use the Internet to create and shape content and those who only use it to access information, the "digital dimmer switch." This article builds off previous research on immigration and the digital divide and points the way forward on how the next generation of immigrants will or won't utilize new technologies.

Blumenstein, Lynn (2009). "Pew Highlights Digital Divide on Mobile Devices." Library Journal, May 15, 2009, page 17. This study, culling data from a 2007 survey, focuses on how the digital divide relates to the ownership and use of mobile devices. It touches upon the various uses of mobile platforms as a delivery route for information from public institutions. Its findings show that a majority of adults are still uncomfortable in their use of mobile devices, indicating a powerful differentiator among technology users.

Mallinson, Keith (2009). "Digital Dividend Bounty Can Close Digital Divide." Wireless Week, March 2009, page 20.



This analysis looks at the commercial potential of extending mobile broadband networks to rural areas thanks to innovations in technology making such coverage affordable. The article weighs the economic costs and benefits of the private sector versus the public sector leading the way toward providing this access. It also examines the regulatory issues and how they affect the extension of broadband services, providing an overview of making those services commercially viable and profitable.

Clarke, Alan; Milner, Helen; Killer, Terry; Dixon, Genny (2008). "Bridging the digital divide." Adults Learning, Nov. 2008, pages 20-22. This article focuses on the wide-ranging impact digital inclusion has on different age groups. It emphasizes the importance of technical skills and media literacy in order to capitalize on Internet access. The article discusses some of the challenges and opportunities surrounding this issue, including the gender gaps in technological skills and the potential for improved lives through a universal understanding of living and working in the digital world.

Richter, M. J. (2008). "Rural Ops Bridge the Digital Divide." Telephony, Sept. 2008, pages 14-16. This article focuses on the establishment of broadband networks in rural areas and small towns and the various efforts being undertaken to bridge the digital divide through this access. The article looks primarily at small Internet operating companies and their new incentives to extend services in sparsely populated areas. The article offers examples of small towns possessing the same resources with regard to access as their big-city counterparts, a crucial first step in bridging the digital divide.

Cauley, Leslie (2009). "Internet speeds vary across USA, leaving a digital divide." USA Today, Aug. 25, 2009, money section. This article presents the findings of a Communications Workers of America report showing the disparity in download speeds across the U.S. It presents dramatic contrasts between different states in broadband access, and contrasts those figures with other developed nations. It also analyzes the effect these download speeds can have on different uses for the Internet. This data is a key factor in the digital divide, with many rural Americans unable to enjoy the full benefits of Internet access because of slow connections.

Chee, Foo Yun (2009). "EU seeks to close digital divide with broadband aid." Reuters News Service. Sept. 17, 2009.



This article reports on the EU's efforts to leverage public funds into private investment for high-speed broadband networks. The article focuses on the importance of a public/private partnership and outlines the rules by which that arrangement could operate in order to achieve a goal of 100 percent broadband coverage by next year.

Cauley, Leslie (2009). "Rural Americans long to be linked." USA Today, June 8, 2009, technology section. This article looks at the $7.2 billion included the federal economic stimulus package set aside specifically for increasing broadband access and the effect it could have on bridging the digital divide in rural areas. It provides information on the cost barriers associated with expanding access to areas of low population density, along with the economic and educational setbacks that result. It highlights the key technological needs in these communities and how they might be met by expanded high speed Internet access. Naone, Erica (2009). "Firs ‘white space network launched." Technology Review, Oct. 22 2009. This article examines the first wireless network in the U.S. making use of ‘white space,” the unused fragments of the frequency spectrum used for broadcast television. With the conversion of analog television into digital this year, many such frequencies are available, but they are tightly regulated by the Federal Communications Commission. For the new network, the FCC granted an experimental license to Patrick County, VA, a rural area with previously no Internet access. The positive results could set a precedent for similar networks across the nation.

Interview transcripts

Hayden, Roger (2009). Chairman of the Patrick County Broadband Task Force. This interview is with the man who spearheaded the creation of the first "white space" wireless network, a model cited by many technology publications as offering great potential for other rural areas. Hayden talks about the need for public/private partnerships in making such initiatives happen. He also stresses the absolute necessity of giving rural residents access to high-speed Internet to prevent them from being handicapped economically and educationally.

Deamons, Nick (2009). Web Developer for Enguage, www.enguage.com. This interview discusses digital literacy with a web developer for Enguage, a national marketing firm that specializes in multimedia content and relies upon widespread access to and use of the Internet to spread its campaigns.


Deamons speaks about the divide that exists between the technology haves and have-nots and suggests that bridging this gap could become a social imperative undertaken by the public sector.

Johnson, Esther (2009). Champions of Digital Literacy. http://www.certiport.com/portal/desktopdefault.aspx?page=common/pagelibrary/cdlp.htm. This site contains profiles of the Champions of Digital Literacy, an honor sponsored by the Certiport corporation that celebrates the achievements of individuals whose commitment and efforts have brought about the adoption of digital literacy standards in schools, organizations and communities. Among the honorees for 2009 is Esther Johnson, and a video interview done in conjunction with the award allows her to expound upon the necessity of education and training in digital literacy. Johnson has played a leading role in implementing such training programs in her role as national director of the federal Job Corps program.

Bowan, Wally (2009). "Bridging the Rural Digital Divide: FCC Starts Work on National Broadband Strategy." Democracy Now, April 8, 2009. This transcript of an audio interview looks at the Federal Communications Commission's national strategy for bringing broadband Internet access into every American home. It interviews the executive director of a nonprofit Internet service provider on the challenges of offering broadband access to rural areas and the role of nonprofits in carrying out the FCC's goal. It examines which projects will receive priority government funding and how success will be measured.

Fox, Susannah (2009). "A Public Interest Internet Agenda." Pew Internet & American Life Project, Jan. 28, 2009. http://www.pewinternet.org/Presentations/2009/35-OneWebDay.aspx. This transcript is taken from a panel discussion in Washington, D.C., held as part of One Web Day. The discussion focused on the future of Internet use, and Fox, Pew's Associate Director for Digital Strategy, presents data from her organization breaking down the percentages of Americans with different levels of access. She places special emphasis on the use of mobile devices and their potential to be game changers in the way the Internet is designed and utilized.

Negroponte, Nicholas (2007). "One Laptop per Child, Two Years on." Technology, Entertainment, Design (TED) conference. The driving force behind One Laptop per Child talks about his organization's work delivering more than a million low-cost, low-energy laptops to some of the most remote places in the world.


Negroponte talks extensively about the design of the devices, along with their potential to engage children and boost digital literacy even in areas with very little educational infrastructure.

Information-rich websites

One Laptop Per Child (2009). "A low-cost, connected laptop for the world's children's education." http://laptop.org/en/index.shtml. This website provides an overview of One Laptop Per Child, a nonprofit organization with a mission to provide a low-cost, low-power laptop to each child, allowing for self-empowered learning. It details the organization's progress in bridging the digital divide in rural areas around the world, with information on the positive impact of increasing Internet access and teaching the tools needed to utilize new technology. It also includes details on the latest technological innovations that make such a mission possible, with updates on the organization's latest efforts.

CNET Networks (2009). Bridge the Digital Divide. http://www.bridgethedigitaldivide.com/ This website is a clearing-house of information for efforts across the world aimed at closing the digital divide. Its information includes recent news, ways to become involved in the effort, arguments on the importance of bridging the divide, and facts on the issue. It also includes a number of links to various organizations working towards bridging the divide, many of which I will reach out to for interviews on the progress of that mission and how recent technological developments aid in the goal.

Division of Governmental Studies and Services at Washington State University (2009). Digital inclusion. http://dgss.wsu.edu/di/. This website is the home of Washington State University's Center to Bridge the Digital Divide. It includes information on recent projects to boost social and civic participation through digital networks, illustrating the potential impact of digital inclusion in rural areas. It also provides links to a variety of studies that explore the future of the digital divide, with results from the accompanying research of those projects.

Food and Agriculture Organization of the United Nations (2006). Bridging the rural digital divide. http://www.fao.org/rdd/. This website provides an overview of the efforts to bridge the rural digital divide as undertaken by the United Nations and other international governments and organizations.



In addition to providing links to updated information and news on various projects, it contains examples of government policies for addressing the issue. It also details case studies of how the approaches detailed in the site have been implemented across the world, providing a strong picture of how the issue of the digital divide can be tackled.

Internet World Stats: Usage and Population Statistics (2009). http://www.internetworldstats.com/stats.htm. This website provides current and past statistics on the number of Internet users across the globe, sorted by continent, as assembled by the Miniwatts Marketing Group. The information comes from data published by Nielsen Online, the International Telecommunications Union, regulators and other reliable sources. The statistics also include IT penetration and user growth by continent, offering an excellent snapshot of how many people across the globe are tapping into the Internet and the current trends on this topic.



Brook Corwin Com530 Analysis of “Past and Future: An Interactive Media Chronology”

While the Internet is now rapidly globalizing society and changing our system of value from material-based to information-based, changes predicted by Nicholas Negroponte of MIT in 1995, it has taken decades of development and innovation to reach its current level of widespread connectivity. The concept and ideas behind the Internet go back to the 19th century, with a number of theories proposed during the early 20th century building the intellectual framework. Early resources for creating the technology were provided mostly by the U.S. military, which during and after World War II heavily financed computer development. While the first computers were massive machines filled with thousands of vacuum tubes, transistors soon took their place, and the number of transistors per chip doubled every 18 months, a pattern known as Moore's Law.

From this framework of hardware, the Internet evolved from the Advanced Research Projects Agency (ARPA), initiated by President Dwight Eisenhower in 1957. Development of the first networks followed in the 1960s, with different models showing centralized, decentralized and distributed networks. In order to share research information, ARPA developed a basic network connecting UCLA, Stanford University, UC-Santa Barbara and the University of Utah. The network expanded to 12 locations across the country during the next few years. Over time a global network of networks under constant development by scientists and engineers would constitute the modern-day Internet.

Communications between networks accelerated greatly with the creation of File Transfer Protocol (FTP) in 1972, which allowed computers to share files. This led to email quickly becoming the most common use of the Internet as researchers shared information. Speed of delivery increased with the development of Ethernet in 1976. Development of the Graphics Interchange Format allowed full-color images along with text to be efficiently transferred through networks.

While the size and capabilities of the Internet grew among the research community in the 1980s, it didn't start to show its potential to the general public until 1990, when Tim Berners-Lee first wrote HTML source code. This formed the basis of what would be known as the World Wide Web, with businesses called Internet Service Providers giving people access to go online for personal use. Easy navigation of the web became a reality in 1993 with the launch of Marc Andreessen's Mosaic web browser. Two years later, the White House launched its first webpage and the first commercial VoIP product was released, allowing for Internet-based telecommunications. A year after that, Hotmail became the first major web-based email provider while ICQ pioneered real-time messaging online.

The web became a use of the Internet rapidly embraced by the general public. The number of Internet users skyrocketed from 45 million people in 1996 to 407 million in 2000. There are now around 1.6 billion people with online access, making the growth of Internet usage far more rapid than that of television or radio.

As Internet access and use became widespread, a number of sites and programs broadened its potential commercial applications. In the late 1990s, online retailing became a big industry in large part because of the popularity of new start-ups Amazon and eBay.
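As a rough check on the growth figures cited above (45 million users in 1996, 407 million in 2000), the implied annual growth rate can be worked out with a few lines of arithmetic. The snippet below is an illustrative aside, not part of the original chronology; only the figures already quoted in the text are used.

    # Compound annual growth rate implied by the user counts cited above.
    # The figures are the ones quoted in the text; the calculation is illustrative only.
    users_1996 = 45_000_000
    users_2000 = 407_000_000
    years = 2000 - 1996
    cagr = (users_2000 / users_1996) ** (1 / years) - 1
    print(f"Implied annual growth rate: {cagr:.0%}")  # roughly 73% per year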


The ways people connected through the Internet also grew rapidly. The development of Napster in 1999 allowed widespread file sharing to a degree that permanently altered the course of the music industry, while the advent of blogs around the same time changed the way people compiled and shared information. Video sharing among mass audiences would enter the picture in 2005, as YouTube became a near-immediate cultural sensation, sold to Google for $1.65 billion after less than two years in operation. By this point, the online world extended into nearly every sector of society, with a wave of legal challenges regarding intellectual property and computer hacking forcing courts to take a hard look at how to regulate and monitor online activity without trampling individual rights.

Fueling much of the growth in the Internet's permeation into society were both technological innovators and commercial stakeholders hoping to turn a profit. This follows the same pattern as the development of other communication technologies such as the telegraph, radio and television. In all these cases, young innovators often had their ideas bankrolled by older entrepreneurs, with both groups setting a framework of use for the new technology that would later be molded by a new rule structure devised by governing bodies. Many faced strong skepticism and outright ridicule about the practical use of the new technologies before their commercial potential was widely recognized.

This pattern began in the 19th century with the development of the telegraph by Samuel Morse, who also created the code named after him to communicate using the device. He initially encountered resistance from the U.S. government on financing wires to transmit the telegraph's electronic signals. But by the middle of the century, tens of thousands of wires were operated by Western Union, with the first transcontinental line built in 1861. This changed politics and commerce from being segmented into isolated geographical areas, with the pace of communications taking its first major leap since the invention of the printing press centuries earlier.

The radio was initially envisioned as a wireless telegraph by inventor Guglielmo Marconi. Like Morse with the telegraph, Marconi was one of a number of inventors working on the device, but he was the first to secure a patent and get the financial backing to bring it to the mass market. Radio was initially developed for military use in the U.S. and faced much early ridicule with regard to public use. But after World War I it realized its considerable commercial potential through the Radio Corporation of America as a source of news and entertainment. By 1922 there were 576 licensed radio broadcasters, with music, comedy and dramatic programs dominating the airwaves. Radio became a key source of news for the general public during World War II, when major political figures used the medium to share information and influence public opinion.

The invention of the basic telephone dates back to the middle of the 19th century, but it would take decades before the first U.S. patent for the device was awarded to Alexander Graham Bell in 1876. It went to market a couple of years later, and by the end of the century there were nearly 600,000 phones in Bell's system. With the American Telephone and Telegraph Company (AT&T) dominating the industry, telephone use exploded during the 20th century, with 5.8 million phones by 1910. A transcontinental telephone line began operating in 1915.
Throughout the century AT&T held a near monopoly on the industry, prompting new forms of regulations and eventually the forced break-up of the company in 1984. A number of concerns regarding privacy of



A number of concerns regarding the privacy of conversations were also voiced. Others championed the device for decentralizing authority and opening up new ways to do business and connect people. In many ways all of these predictions were proven true by the end of the century, when 175 million subscriber lines were active and the first digital cellular networks began to come online. Cell phone use caught on rapidly and now replaces landlines for many U.S. customers.

The concept of television was postulated by writers hundreds of years before its invention. It took decades of development to bring one to market, with the key breakthroughs coming in the early 20th century. Philo Farnsworth is credited with making one of the most influential breakthroughs in 1927 at the age of 21, when he developed the first working electronic camera tube. That led to the first fully electronic TV, with the Radio Corporation of America copying Farnsworth's invention in 1933. Television became a household fixture shortly after World War II, and by 1960 there were 45.7 million homes that had one. Once again early skeptics were proven wrong, as a private sector that was lightly regulated relative to other nations enabled U.S. companies to steer the device toward commercial profitability.

With these precedents in new communications technology in place, the history of the Internet follows a familiar pattern. After decades of research and a progression of innovations, its commercial use was realized and then expanded into millions of households, changing the way society governs, does business and communicates. In doing so, it erased certain physical barriers, forced the creation of new regulations and influenced much of western culture.

As the Internet grew in prominence and capability, a number of technology experts theorized how it could continue to shape society in the future. Many postulated that it would change the concepts of government, property and community. Others honed in on particular industries, predicting the death of books, recorded music or television. Some went as far as claiming that the Internet would eventually lead to the demise of organized government, big corporations or perhaps the human race itself as machines grew smarter and more capable of operating independent of human control. Such theories of massive change hinge on the concept of the Internet as a naturally evolving organism that will eventually create a collective consciousness not tied to physical space. As information technology is crossed with biotechnology and nanotechnology, tiny devices of invisible molecular structure would build upon themselves until they enveloped all of society, with humans assimilated into the world of machines.

Futurists examining the possibilities of the Internet already envision the concept of Web 3.0. While the initial web was a mostly one-way stream of information through static pages, Web 2.0 allowed users to interact with the content and with each other through forms of social media that bridged the gap between physical and virtual personas. Web 3.0 is often seen as a more seamless blurring of what is online and what is off, with technology functioning in much the same way as a human being. Interfaces for navigating the Internet would thus resemble real-life navigation, approaching a point where online information constantly surrounds us and becomes difficult to distinguish from organic material.
This evolution of the web has integrated designers with computer programmers in the creation of new technology in order to better replicate a pleasant human experience when utilizing the Internet. Mitch Kapor advocated this idea in his "Software Design Manifesto," published in 1990, helping spawn a whole industry of interactive design.



Virtual reality already has a sizable presence in modern-day society through video games, which have grown into a multi-billion dollar industry. Some of these games, most notably those on Nintendo's Wii system, have an interface that interprets players' body movements. Others are based in an online world, with users engaged in pursuits and cooperative tactics mirrored in physical society. Such games, known as MMORPGs, are driven as much by the users as the technology, with an entire social infrastructure created within the virtual world. As they become easier to use, such games may match the huge popularity of social networking websites such as Facebook. Researchers are finding that many who spend the most time networking online use the interface to help them reach a point of self-actualization, a psychological concept describing the pinnacle of our desire to become complete human beings. Growth in the virtual world would dramatically change the way society does business, and many corporations have been experimenting with virtual spaces for marketing products or training employees. As communicating with avatars becomes less cumbersome and better replicates real-life interaction, personal identity and social class will undergo upheaval.

New technology is making it possible for virtual worlds to more effectively integrate with the physical world. Wireless broadband access, along with handheld devices that can access a network, has become widespread and affordable for vast segments of the population, allowing them to plug into the virtual world wherever they go. These devices are expected to evolve into "wearable computing," which would more seamlessly integrate information from the Internet into our daily lives through a combination of sensors and projectors. Everyday objects from appliances to umbrellas to paper could be equipped with the capability to tap into the Internet and operate using online information. The end result would be a heightened reality in which online data is placed on top of the physical world as we go about our everyday lives.

The Web 4.0 era is viewed as an "always-on" world in which it becomes impossible to differentiate between physical and virtual reality. The human-computer interface in this era would mirror interaction with a real person, with speech and motion recognition advancing well beyond their current limitations. The interaction would be two-way, with users feeling a tactile sensation, termed "haptics," to go along with audio and video. This interface could reach a point of directly connecting computers to human brains, something now being researched, so that thoughts alone could be read by a machine. The computer itself might be powered by human body heat.

A hyperconnected world would function very differently from one in which online and offline are still segmented. Already there are millions of people using a variety of Internet-connected devices for personal and professional communications, with little if any boundary between the two given that information for both is always available. This has created new forms of social etiquette and workplace rules to keep the personal from impeding professional work, and vice versa. It has also impacted performance as people try to juggle multiple devices and streams of information, with decades of research showing a decline in effectiveness when more than one task is attempted at the same time.
Those addicted to online gaming find it difficult to disconnect and function in the real world, while those over-engaged in work continue to access email, send messages and complete professional tasks during their off-hours. In this new world, attention has become a highly valued commodity. There are, however, a number of upsides to this interconnected world. Some social ties are strengthened as online activity replaces the static activity of watching television.



Experiences and information can now be shared online in a way some neurobiologists say carries therapeutic value. This degree of interconnectivity will only increase, to the point where all information is public and constantly accumulating. Long after a person dies, their information and even their memories might continue to exist for others to access.

The progression into a world of an "always-on" Internet will take place through a number of new developments during the coming decades. Within a few years, computer technology is expected to be widely used to enhance food and fabrics. Other items that could undergo this integration by 2014 include jewelry, medicines, toys and home décor. Research on such far-flung concepts as teleportation and human cloning could take place in the near future, with the first developments coming in the form of transporting small objects or creating prosthetic body parts that contain organic material. Within a decade, robots could be a ubiquitous part of society, taking a number of forms in daily life. Virtual worlds will be commonplace by then for business and recreational purposes, while television will begin to take on a three-dimensional form. Space travel is viewed as viable within a few decades, allowing for travelers to enter a state of hibernation so they do not age on long journeys. The creation of new colonies in space would result. It is at this point that artificial intelligence will have reached a point where computers might revolt against the human race or perhaps integrate humans into a new society where people exist only in cyberspace.

Preparing for the many uncertainties of the future requires a new way of thinking that constantly evaluates the likelihood of new possibilities. The challenges include not only identifying the latest trends on the horizon, but parsing out which are most likely, and also determining ahead of time the best way businesses and organizations can respond to those trends. Research plays a heavy role in this, with the causes of each trend broken down into driving factors, key figures and possible implications. Counter-trends must also be established to determine which future scenarios are most plausible, and each of the many consequences of a trend must be evaluated with relation to how it affects a particular corporation or organization. Doing this kind of research requires a diligent and methodical approach that does not immediately dismiss or embrace any new idea.

It is important when evaluating future trends not to make assumptions, with outcomes impossible to predict with certainty given the rapid advance of the Internet and technology. Instead it takes an integrated approach of constant evaluation of possible outcomes to determine which would have the greatest impact and are the most plausible. Initial assessments will have to be followed up with new analysis as a trend develops. Much of the challenge in futures planning involves motivating and inspiring others in the organization to use the same degree of foresight and not resist changes that are designed to anticipate future outcomes. Whenever a futures trend is recognized and assessed, new analysis and planning will have to take place to prepare a response, and that requires active buy-in from others within the organization. Thus it is imperative to recognize the stakeholders for any particular futures outcome and engage them up front on the need to change and anticipate the probable trends.
Many will likely resist the change at first and deny the information or its potential impact, often out of fear. But if a few naturally engaged adventurers within the organization are active in the effort early on, most others will follow along with that momentum, eventually dragging along the segment most resistant to anything new.



Accurately assessing and selecting trends takes a broad, committed and continuous look at the horizon beyond what is published in mainstream media outlets. It is important to recognize that trends don't exist in isolation and are impacted by a variety of economic, technological, political and demographic factors that drive the change. Many trends will be countered by competing trends, while others may have a very low probability while maintaining a high potential impact. Each trend must be ranked for the scope of its impact both now and at later dates to determine which will have the greatest importance. This type of disciplined evaluation will determine which trends require immediate action to anticipate and which only merit a close watch as they continue to develop. Over time trends will need to be reexamined for the latest developments, with their rankings of importance possibly altered as a result. Each should have best- and worst-case scenarios communicated to key stakeholders so they take the trend seriously and engage with a program to plan for each possible outcome.

Putting the plan into a comprehensive brief can achieve this communications objective. Each brief should categorize all the pertinent information to demonstrate a trend's source, direction and relevance. It should be thoroughly referenced, edited and reviewed in order to convey credibility, paving the way for the implementation of action plans that tackle the challenges posed by each trend.

The rapid pace of new developments on the Internet takes an understanding of, and a willingness to accept, complexity. With so much uncertainty and possibility on the horizon, the thinking process must be stretched beyond what may seem plausible in the current day and time. It also requires separating the vast amounts of information now available into credible and non-credible sources, being cautious to avoid data that is easily countered. The era of Web 3.0 and Web 4.0 will put previously unimagined amounts of data and connectivity within the grasp of everyday life, and those able to recognize the patterns in that data and leverage the knowledge into effective strategies will capitalize on the technological growth. It's the same principle, in large part, that determined which individuals and organizations profited most from 19th and 20th century innovations in communications. The main difference now is that the pace of those innovations is accelerating at an unprecedented level, and only an equally swift response to anticipating the future will enable individuals to maximize its potential.
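The ranking discipline described above can be pictured with a short, purely illustrative sketch that scores candidate trends by estimated probability and impact and then flags which call for immediate planning. The trend names and numbers below are hypothetical placeholders invented for the demonstration; they are not drawn from the text.

    # Hypothetical sketch: rank candidate trends by estimated probability x impact,
    # then flag which call for immediate planning and which only need monitoring.
    trends = [
        {"name": "wearable computing", "probability": 0.7, "impact": 8},
        {"name": "brain-computer interfaces", "probability": 0.2, "impact": 10},
        {"name": "ubiquitous mobile broadband", "probability": 0.9, "impact": 7},
    ]
    for trend in trends:
        trend["score"] = trend["probability"] * trend["impact"]
    for trend in sorted(trends, key=lambda t: t["score"], reverse=True):
        action = "act now" if trend["score"] >= 5 else "watch closely"
        print(f"{trend['name']}: score {trend['score']:.1f} -> {action}")

Rankings like these would need to be revisited as each trend develops, which is exactly the kind of reexamination the passage above calls for.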



Brook Corwin Com530 Analysis of "An Introduction to Interactive Media Theory"

The term "interactivity" has a wide range of meanings to researchers and scholars, with no single authoritative definition. But many theories explore its meaning and offer ways of explaining what is or isn't interactive. Figure 1.1 shows some of the most prominent researchers and their conclusions on decoding interactivity. While these theories vary, many propose different levels of interactivity, with the level of control and flexibility for the user ranging in magnitude.

Varying viewpoints also are found in defining interactive design, which many recognize as an essential component of interactive content in order for it to attract an audience.


The term was first proposed in the late 1980s by Bill Verplank and Bill Moggridge, and since then interaction designers have applied theories from many fields. Psychology has played an especially crucial role, since such design must anticipate user behavior and how it relates to products and interactive content. Figure 1.2 shows the six major steps in interaction design and some of the tools used during each. An effective interaction designer will have to apply these steps, sometimes in multiple iterations.

Interaction design has two key aspects — social and affective. While the social aspect of design is focused around the dynamics of interpersonal communication, the affective aspect relates to the emotional response of the user. Don Norman proposed the emotional design model, which recognizes the way humans equate aesthetically pleasing things with intrinsic quality in a product, person or place.


While designers rarely code or program content, they do need a wide range of skills related to research, communication and creativity in order to succeed. The skills needed to create a quality piece of interaction design are presented in figure 1.3.

In an effort to analytically explain what goes into interactive design, many theories on interactivity have been studied and endorsed through the years. These explanations are carefully researched through a combination of quantitative research that employs mathematical methods of data collection and qualitative research that investigates patterns and relationships.



In 1999, Robert Craig proposed a categorization of seven "traditions" of these communication theories: rhetorical, semiotic, phenomenological, cybernetic, sociopsychological, sociocultural and critical. The most fundamental aspects of communication were broken down in 1948 by two different models from two different scholars: Harold Lasswell and Claude Shannon. Figure 2.1 details their models, which inspired more efficient design in communications systems and formed the basis of Information Theory.

Shannon's model was refined and presented by Warren Weaver, who introduced the concept of encoding and decoding taking place at both ends of a communication in order for a transmission to be effective. He also elaborated on the concept of noise, a naturally occurring factor in communication that can be offset with more redundancy in the message. Information Theory stipulates the establishment of communication networks, which can be studied in terms of traffic, closure and congruence.
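Weaver's point that redundancy can offset noise can be pictured with a small simulation: each bit is sent three times over a noisy channel and decoded by majority vote, cutting the error rate well below that of the raw channel. This is a generic illustration of the principle, not an example taken from the source material.

    # Illustration of redundancy offsetting noise: a 3x repetition code over a
    # binary symmetric channel that flips each bit with probability p.
    import random

    def noisy(bit, p):
        # Flip the bit with probability p.
        return bit ^ (random.random() < p)

    def send(message, p, repeats=3):
        received = []
        for bit in message:
            copies = [noisy(bit, p) for _ in range(repeats)]
            received.append(int(sum(copies) > repeats / 2))  # majority vote
        return received

    random.seed(0)
    message = [random.randint(0, 1) for _ in range(10_000)]
    p = 0.1
    raw_errors = sum(noisy(b, p) != b for b in message) / len(message)
    coded = send(message, p)
    coded_errors = sum(r != b for r, b in zip(coded, message)) / len(message)
    print(f"error rate without redundancy: {raw_errors:.3f}")   # about 0.1
    print(f"error rate with 3x repetition: {coded_errors:.3f}")  # about 3p^2 - 2p^3, roughly 0.028

The cost of the improvement is bandwidth: three symbols are transmitted for every bit of information, which is exactly the trade-off between redundancy and efficiency that Information Theory formalizes.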



Activity Theory is linked to philosophical writings from centuries ago by the likes of Kant, Hegel, Marx and Engels, assessing the developmental processes by which humans are shaped and shape experiences through their actions. The theory was applied to human-computer interaction in the 1990s to assess the influences that shape how interactive tools are created and then accepted or rejected. This theory emphasizes the importance of involving individuals in the act of design by showing how knowledge and human artifacts are refined in a repeating loop of interaction and assessment.

Sociology is the discipline behind Symbolic Interactionism, an approach developed out of the work of George Herbert Mead and Charles Cooley. The term itself was coined by Herbert Blumer, who set out the premise that humans ascribe meanings to things derived from social interaction. This is now being applied to interactions mediated by computers between individuals or online communities. Another theory arising out of sociology is Social Network Theory, which describes the sets of relationships between members of social systems. It is now being used to assess online social networks and the ties created within them. Such networks, because they illustrate the multitude of connections to any one individual, demonstrate the small-world phenomenon first proposed in the 1950s by Ithiel de Sola Pool. The idea was built upon in later decades to explain small-world networks created by a small number of random links bringing order to an otherwise disconnected group. These networks, which may or may not be based on human relationships, are shown to be present in all aspects of society and growing in prominence with the Internet.

Related to this path of study is Online Communities Theory, which focuses on virtual groups that communicate only through online mediums and digital forms rather than face-to-face. The social dynamics of such communities were studied by Peter Kollock, who in 1998 presented the motivations for altruistic behavior in virtual groups. They are:
• Anticipated reciprocity
• Increased recognition
• Sense of efficacy
• Sense of community
Kollock's research was expanded upon by other scholars such as Marc Smith, who in 1992 concluded that people in general are social beings naturally motivated to receive responses and feedback to what they contribute. Other scholars have looked into the motivations of the many who belong to online communities but do not actively participate.

The Uses and Gratifications Theory looks into the reasons why people engage with particular media, identifying how people are motivated by personal needs in choosing their communications tools. It ties into the concept of self-actualization, which is placed at the top of Abraham Maslow's hierarchy of needs. The theory has been applied to different forms of media technology through the decades, with the earliest research being applied to radio. It has shown trends across different forms of media in terms of which user needs are met. Figure 3.1 summarizes the five categories of needs, which are important for interactive communicators to understand as they reach out to people with their work.



As the number of communications options rapidly expands, many more choices are now available for users to have their personal needs met, and these choices can vary from individual to individual. Ha and James listed five dimensions of interactivity — playfulness, choice, connectedness, information collection and reciprocal communication — to describe how users behave and make choices on the web. This form of study is common among modern-day researchers of Uses and Gratifications Theory and is done at the individual, small group, organizational, societal and cultural levels. This can lead to categorizations of various online media tools by their uses and needs, which is graphically demonstrated in The Conversation Prism.

The Knowledge Gap Theory proposes that the gap between the information-rich and information-poor widens with each new communications medium. First stated by Tichenor, Donohue and Olien in 1970, the theory postulates that those with higher socioeconomic status tend to acquire information at a faster rate and can thus adapt to new technologies. This theory suggests that communicators must use different mediums to convey the same message to different audiences in order to transcend this gap, which has become known as the "digital divide."

Researchers often disagree on whether social factors drive technological advances or whether it's the other way around. This debate, pitting the theory of social construction against technological determinism, revolves around whether specific mediums alter perceptions and thinking. The Diffusion of Innovations Theory studies how an innovation becomes known and applied throughout society.



Everett Rogers' research in the current decade studies what key factors influence the adoption of particular innovations. Individuals go through a mental process in weighing these factors that has five stages: knowledge, persuasion, decision, implementation and confirmation. Rogers also divided people into five categories based on their rate of adoption, with innovators being the first ones eager to try new ideas, followed by early adopters, then the early majority, the late majority and finally the laggards.

The Spiral of Silence Theory is built around the idea that people's decisions to express their opinions are influenced by how those opinions will be viewed by others. Developed by Elisabeth Noelle-Neumann, the theory states that a person will not express an opinion if he or she believes it is out of favor, because of fear of social isolation. The media play a large role in defining what is perceived as majority opinion, and this majority only grows as more and more people with opposing views stay silent to avoid hostile reactions to their views. The Powerful Effects Theory is connected to the Spiral of Silence Theory, with a focus on changing public behavior. Research in this field suggests that campaign objectives must be spelled out clearly, with relevant themes reinforced through multiple layers of media.

The Agenda-Setting and Media Framing theories place the mass media in a prominent role of defining the topics the general public will discuss and also providing the symbols and context that drive those discussions. Under this theory, the media frame every event around a central organizing idea and in turn establish how the general public will process that event. Latter-day research of this theory examines how the agenda set by the media could be established first through other sources, perhaps a different form of media created online. Social networking sites such as YouTube, blogs and Twitter may be driving the media's agenda through their own framing of issues shaped by individual users.

The Perception Theory examines how individuals interpret messages. That interpretation is broken down into the sensing of physical stimuli and psychological factors. Research done by Vidmar and Rokeach in 1974 established the concept of selective perception, in which people react to the same message in different ways based upon their prior experiences and previously held views. In order to uphold their existing views and knowledge — known collectively as a schema — individuals will often be selective in their attention to or retention of information that conflicts with those views. Research done by Graber suggests that people use their schemas to guard against information overload by using simplified mental models. Image-perception Theory, an idea established by Linda Scott, focuses on the process of interpreting images. Scott identified three ways of thinking about pictures in mass media: as transparent representations of reality, as conveyers of emotional appeal and as complex combinations of symbols.

Propaganda Theory came about in the wake of World Wars I and II to examine how governments and political figures influenced human action through manipulation. The term was first defined by Harold Lasswell and then distinguished from persuasion by Roger Brown. Research on the techniques of propaganda is summarized in figure 3.2. Subsequent research has shown that different techniques are most effective depending on the audience itself.



Persuasion Theories are a related field of study looking at what factors change attitudes, eventually concluding that a single mass communication message is unlikely to result in significant change. It examines different techniques used, including fear, humor, visuals, sexual appeals and repetition. Research done by Daniel Katz stresses the importance of understanding the functions served by particular attitudes, with efforts to change these attitudes likely to backfire if they don't recognize the psychological need met by the old set of views.


The Media Richness Theory categorizes different types of communication by their effectiveness. It reaches the conclusion that more personal means of communication are the most effective way of sharing messages, with face-to-face meetings at the top of this list. The human action cycle is a model developed by Don Norman to categorize the steps individuals take to achieve certain goals. Those steps include both cognitive tasks, such as forming the goal, and physical tasks, such as executing an action sequence.

Media's forms and uses, along with the multitude of theories developed to study its effects, have evolved with new technological and social changes over the past four decades. The timelines in figures 4.1, 4.2, 4.3 and 4.4 highlight some of the key technological developments in each decade.



A number of key media researchers and implementers are on the cutting edge of what forms media and communication will take as the Internet evolves. Many actively study or publish analysis on how audiences perceive and are influenced by new forms of communications created by new technology. As the number of Internet users explodes, especially in rapidly developing countries such as India, Brazil and China, a new framework of the web has evolved with more interaction and content shaped by individual users. This evolution, termed Web 2.0, gives users a number of controllable mechanisms to create emergent outcomes such as:
• Most interesting content becomes visible
• Personalized recommendations
• Meaningful communities



• Relevant content easily found
• Enhanced usability
• Collective intelligence

The Web 2.0 landscape is filled with programs and applications built around these emergent outcomes. Some are all about sharing content or establishing social networks, while others function as a way to filter information and make recommendations. The most popular and successful of these programs are built around the characteristics and principles of user experience. Some of the characteristics can be applied in narrow or broad scopes depending on the user needs. They are listed as follows:
• Informational
• Actionable
• Social
• Personal
• Scoped
• Learnable
• Configurable
• Adaptive
• Playful
• Impartial

The principles of the user experience define how each program is driven as an interactive vehicle. They include a high level of connectivity and a controllable interface as primary factors. Also included as key principles are the relevancy of the experience, its ease of comprehension and its aesthetic appeal. The effective application of these principles goes back to the intensive study of interaction design, with the numerous theories created to understand communications used to develop ways to effectively inform, persuade and connect through new mediums made possible by technological innovations.



Brook Corwin Com530

Analysis of Interactive Audiences

In the digital age, media is mobile and can be shared with thousands or even millions of people after its initial creation. This has created a new spreadable model where content is repurposed and transformed in a way that adds value, localizing it for diverse contexts of use. This model is often described with the term "viral," but that concept is vague and only loosely defines the many ways media is spread through social networks, guerrilla marketing and mobilized consumers. During this process, media isn't just replicated, as the term "viral" often implies, but instead reshaped and distorted as it passes from hand to hand, a process that has accelerated with new developments in technology. The spreadable model puts the power with consumers, whom Grant McCracken calls "multipliers," since they can add value to the original content.

The term "viral media" has been defined and redefined by many people, tracing back to Douglas Rushkoff's 1994 book Media Virus. Rushkoff directly equated media messages with viruses, injected into our consciousness through a "protein shell" that can take many forms, including:
• Event
• Technology
• System of thought
• Musical riff
• Visual image
• Scientific theory
• Sex scandal
• Clothing style
• Pop hero
The virus then carries messages known as "memes," a term conceived by evolutionary biologist Richard Dawkins in 1976 as a cultural version of the gene. According to Dawkins, memes are driven to self-replicate and possess three important characteristics, all illustrated below.



• Fidelity: Memes have the ability to retain their informational content as they pass from mind to mind.
• Fecundity: Memes possess the power to induce copies of themselves.
• Longevity: Memes that survive longer have a better chance of being copied.

Virally spread events have a meme at the center, a self-replicating idea that moves from person to person and duplicates itself as it goes. Decades after the concept of a meme was developed by Dawkins, it was frequently used to describe the rapid spread of videos and comical photos on the Internet. Some view this spreadable media through the metaphor of a "snack," served in small, tasty portions that are frequently passed around but carry little long-term value.

The problem with using the original concepts of memes and media viruses is that both models severely diminish the role of human agency in shaping content. This has led to a redefinition of the meme concept into something that takes into account remixing and appropriation. A great example of this new concept is the "LOLcats" Internet meme. It is an idea, with its own visual and written language, that has spread into spin-off sites and photos built around the original idea of showing cats in humorous positions. The value in the meme isn't the LOLcat idea itself as much as the fact that it can be repurposed and reused for new meaning. A similar value can be found behind the song "Crank Dat" by rapper Soulja Boy, which became one of the most popular songs of 2007 on the strength of "mash up" videos made by a wide variety of people and spread online. Different audiences created their own dance steps, lyrics, themes and images in videos built around the original idea of the song, turning a previously unknown rapper into a top recording artist with a record contract.

In many ways, spreadable media is an attractive concept for marketers and advertisers since it broadens the reach of a commercial message. But there is also a risk that the message will be repurposed by consumers into a negative interpretation. This has led to a transition in the relationship between consumers and producers, shifting away from the push-based model of the broadcast era and one-way communications. The new consumer/producer relationship is built around interconnected networks, what Tim O'Reilly identifies as "the architecture of participation." Viral marketing is inherently social, and media companies rely on consumer engagement, a far cry from the old model of duplicating content.



The spreadability model forces marketers to look at what properties make content more likely to be spread and how their company can benefit. They must also pay closer attention to consumers' motivations in order to devise content that better aligns with their interests. Consumers themselves must be redefined, as McCracken suggests they be called "multipliers" to reflect their power in expanding the meanings of any message and inserting a range of unpredicted contexts of use.

The concept of stickiness has emerged to define the old model of focusing on information that grabs and holds the attention of visitors without regard to their participation. This concept relies not so much on participation as on drawing in new audiences who in turn create more traffic by encouraging friends to visit as well. Examples are Amazon and eBay, which encourage users to link their homepages or blogs back to those sites. The core distinctions between stickiness and spreadability are summarized in the graphic below.



Stickiness
• attracts and holds the attention of a site's viewers
• depends on concentrating the attention of all interested parties on a specific site
• depends on creating a unified consumer experience
• depends on pre-structured interactivity to shape viewer experiences
• tracks the migration of individual consumers
• sales force markets to consumers
• finite number of channels for communicating with consumers
• producers, marketers and consumers are separate with distinct roles

Spreadability
• seeks to motivate and facilitate the efforts of fans and enthusiasts
• seeks to expand consumer awareness
• depends on creating a diversified experience
• relies on open-ended participation
• maps the flow of ideas through social networks
• grassroots intermediaries become advocates for brands
• relies on consumers to circulate the content within their own communities
• depends on increased collaboration across and even a blurring of roles
• uses an infinite number of localized networks

Many sites struggle to balance the two competing models as marketers work to unlearn the lessons of stickiness and embrace spreadability. More attention is needed on social networks, since they define the relationships between producers and consumers that determine whether content is circulated in a way beneficial or detrimental to established brands. Online communities are key points in this equation. Rather than build communities around their own products, marketers must reach out to existing communities, which have their own values and aspirations that must be addressed. This is a tricky process that cannot be forced.


Companies need to figure out what existing communities most need and identify how their products can be used to address that need. They must also understand the language, culture and rules of that community, burying their short-term interest in favor of long-term relations. Further complicating the process are parallel economic orders among consumer communities and producers. Both sides have not only different economic interests, but also different motives, judgments about value, and views on social obligations. For example, consumers feel obligated to share music they like with friends and family, while producers see this act as selfish and immoral since large-scale file-sharing is economically damaging to their industry.

There is a code of gift giving and reciprocity online, described by Howard Rheingold as less a tit-for-tat exchange of value than part of a larger reputation system where contributions are recognized and respected. Social networking websites such as Facebook exemplify this, allowing users to exchange gifts and information. Success online for companies comes from building up good will that can later be converted into economic transactions through other channels. By their very nature, gifts are not given out of economic motivation, with the transaction made to establish a relationship rather than to trade commodities for wealth. This leads to fluid social relations, with status, prestige and esteem taking the place of cash as the primary driving force behind the interpersonal transactions. This is very different from the commodity culture, where each thing exchanged has a monetary worth and long-term social obligations are not built as a result. Spreadable media moves between these two worlds, with products from the commodity culture being circulated and repurposed in the gift-giving culture. Users add new meaning to the original content before they share it with members of their community.

In this new environment for media, the term "audience" is in many ways outdated because it implies a passive role on the part of the consumer. Many other terms have been suggested as alternatives. These include:
• loyals
• media-actives
• prosumers
• inspirational consumers
• connectors
• influencers
It is difficult to settle on a single term because each member of the audience may assume a different role and a different type of relationship with a company. People in different roles behave in different manners, and many have different roles in different communities. Someone may be creating content in one community, critiquing content in another or just "lurking" in a third. Communities themselves may have very different hierarchical structures among their members. Three of these community structures, as defined by Lara Lee, are summarized in the figure below.



• Pools: People have loose associations with each other, but a strong association with a common endeavor or with the values of the community.
• Webs: Organized through individual social connections, so the ties between members are stronger and the community operates in a decentralized manner.
• Hubs: Individuals form loose social associations around a central figure, as in the case of fan clubs. These work when there is a clear connection between the brand's values and the personality of this central figure.

Communities may also have different barriers to entry. Some are open, while others require free or paid registration. Open communities often have the weakest social ties, which are quick to dissolve, while those requiring payment operate under the model of long-term membership. Free registration strikes a balance between the two and is the most common way of implementing an online community. With so many different kinds of communities, companies must be available in many different ways in order to meet the needs of all their users. The one-size-fits-all model no longer applies. Determining what content will be spread requires a complete understanding of an online community's social relations, for those relations determine the value and worth of a piece of media. This relates to the old concept of word-of-mouth, where members of a community share information to bolster camaraderie and set the parameters of their group. When advertising spreads, it is often because the community feels the content says something that relates to and strengthens their relationships with other members. In a different community, however, the advertising may have little or no value. Content will only spread when it serves the particular needs of that community and becomes part of the gift economy, a process that cannot be artificially undertaken. The term "producerly" has been used as a way to evaluate the potential spreadability of content. A producerly video has enough loose ends and possible interpretations that it
can be enjoyed on a variety of levels, with the ambiguity creating gaps where viewers can insert their own cultural norms and experiences. Repurposing the content becomes not only a way to build social ties, but also an act of self-expression. This requires media creators to relinquish control of their brands in order for them to find meaning in different communities. Any time they try to rein in grassroots creativity, their message will lose worth among consumers. While the content must have enough loose ends to be reinterpreted, its base idea must still have entertainment value in order to create new worth. Humor is key in this regard, and most videos that have been widely spread contain parody, absurdity or shock/surprise. One example of this is the Cadbury "Gorilla" ad, which blends all of these traits to create a piece of video that is both funny and producerly. Its appeal works at different levels of engagement, entertaining both on its surface and on the deeper level of spoofing older content known only to certain segments of the audience.

One way to create ambiguity in a message is to force viewers to fill in the gaps of authenticity or truth within the message. This happened with a video of a Ford Mondeo, which consisted of a series of still and near-still shots of cars being lifted off the streets while attached to bunches of helium balloons. It took six months, but the ad eventually became an online sensation after a "homemade" version surfaced, leaving people wondering how the feat was pulled off and whether it was physically possible without digital effects. Further speculation centered on whether the "homemade" video was created with help from Ford in order to generate online buzz. Another way to create gaps in content is to leave it unfinished, as Burger King did with its Subservient Chicken interactive video website launched in 2004. Visitors were able to give commands to the chicken to determine its actions, giving the audience a tangible role in the content's creation. In this particular instance, the audience didn't know how many commands were recognized by the chicken, and the site built interest as users searched to test the limits of its capabilities.

Nostalgia is an important concept in a gift economy, and much of the content spread has a nostalgic tone since it connects various members of a community around shared experiences. This has led to long-abandoned brands being reincarnated into new products, a practice known as "retromarketing." Solidarity is an important quality in these types of products, as they must inspire a sense of belonging.

While the spreadable content model appears to be the growing trend, new examples, new business plans and new policies are being announced each day. The jury is still out on the spreadable model's benefits and risks for businesses during this period of flux, but at least for now corporations are taking a bigger risk by resisting spreadability than by embracing its considerable potential to do the following.
• Help expand and intensify consumer awareness of new and emerging brands
• Expand the range of potential markets for a brand by introducing it to new audiences
• Intensify consumer loyalty by increasing emotional attachment to the brand or media franchise
• Expand the shelf life of existing media content by creating new ways of interacting with it or reshaping the market for a dormant brand
• Reach audiences with a lower promotional budget
• Reach niche markets
• Generate buzz and awareness for a product



Beneficial or not, it's impossible to turn the clock back on spreadability. New applications and software allowing the sharing of files and information are being developed. A recent study shows that between 25 and 30 percent of active technology users regularly write comments, share music and access social networking sites. A significant portion of the public is embracing this model, resulting in what authors Charlene Li and Josh Bernoff term "the groundswell." The transition may be painful, especially for companies with well-established brand messages that are worried about loss of control over their intellectual property and have reason to fear backlash from consumers. But in the end there is far more to be gained than lost. Understanding your audience during the interactive age requires selecting important people to follow and then ingesting as much information as possible from those sources. It also requires listening to actively involved consumers, who can be categorized by the following levels of participation, listed from least involved to most involved.
• Read
• Favorite
• Tag
• Comment
• Subscribe
• Share
• Network
• Write
• Re-factor
• Collaborate
• Moderate
• Lead

From all these types of participation comes collaborative intelligence that can be leveraged for beneficial purposes. But it's important to recognize that different people use the Internet for different things, as shown by the intent index, which categorizes the most common needs the Internet gratifies for users. With these statistics as a baseline, you can gather more information on your audience from those who visit your site through the process of data collection. From this a general profile of your audience can be constructed. Among the points of data that can be analyzed are:
• visitor loyalty, bounce rate, recency and time on site
• visitor location
• visitor search terms/keywords
• traffic source
• visitor polls and surveys
• audience feedback
• unique visitors
• page views per session

Google Analytics can provide much of this information. From all this data companies can develop personas of their customer base, then communicate these through visual models that show their daily routines and when they might utilize various products. Information visualization is an increasingly popular way to analyze this information and apply it to a company's marketing plans. These can take the form of charts or creative
graphics. Configuring sites for search engine optimization with Google or Yahoo is justifiably an important consideration for companies. But real-time searches have also grown in importance. Sites like Twitter, Collecta and OneRiot all utilize real-time search engines to give users the most current information. This may be why many U.S. marketers are looking to boost their spending to produce online content. A recent survey shows that 67 percent of U.S. marketers plan to focus their online budgets on video, with social media finishing next at 42 percent. The videos created with the extra billions of dollars being committed are shared through digital word of mouth, sent to family and friends via email or a video-sharing site. The majority of Americans, according to a recent survey, have watched an online video clip in the past month, and most teenagers have also shared a video clip. A quick scan of the most popular websites measured by Nielsen, shown below, demonstrates the importance of social networking, sharing of information and search engine optimization.
1. Google
2. Yahoo!
3. MSN/Windows Live/Bing
4. Microsoft
5. AOL Media Network
6. YouTube
7. Facebook
8. Fox Interactive Media
9. Apple
10. Wikipedia

The user experience is crucial in determining whether consumers frequent a site. This can be measured through usability research, selecting participants for a confidential test of their emotional responses to a site's interface. The experience has always been crucial in sales, going back to the "event" of buying a new record at the record store. Designing a site with the user experience of the target audience in mind will make it much more effective in generating sales. To do this, a prototype should be developed and then tested with real users, with an ongoing evaluation of the site after it has launched. Focus groups can also be employed, though these have limitations in capturing responses that accurately reflect how the audience actually uses the site. Other research methods include eye tracking and heat maps to measure how users interact with the interface of the site. With the rise in web applications, there are numerous ways to share information or reach consumers. This makes it critical to apply principles of interaction design in order to stand out from the crowd of applications. An effective interface conveys both anticipation and autonomy to the user, with status mechanisms to keep users aware and informed. Information should be kept up to date and in easy view, with a consistent design throughout the site. Time is a valuable commodity, and a website that wastes the user's time will make for unappealing experiences and few repeat views. The interface should be explorable, with well-marked roads and landmarks, while still providing multiple paths to achieve the same goal. Users should feel they have a way out of any section of a site, but the interface should be designed so they will stay involved and
choose not to leave. If a site conveys a particularly strong or weak user experience, it may be given a tag conveying this information. Sites such as Del.icio.us make bookmarking easy and allow consumers to share their perspectives on sites via tags. The purpose of tagging falls at the intersection of three established fields, according to Gene Smith:
1. Information architecture: organizing information so others can find it.
2. Social software: computer-mediated collaboration and sharing.
3. Personal information management: organizing information to get things done.

Tags are appealing because they are simple and flexible. They can be added to and aggregated, helping Internet users make sense of the wealth of pages online. With effective use of tags, the right information can be found and evaluated, mobilizing social engagement. In order for users to find your site, however, it will need to be optimized for recognition by search engines. This involves tagging the site properly using keywords relevant to the content and tweaking titles and descriptions so the site is indexed appropriately. Sites must also be accessible to those with disabilities, with ALT tags and image maps to help the visually impaired. Succeeding as an interactive professional means taking all the information and data available and leveraging it into a smartly designed site that takes the user's experience into account. In this environment, companies are rewarded for listening to their consumers, and a professional would be wise to keep feedback channels open and not be afraid of the audience taking an active role in reshaping content. At the same time, he or she must shepherd the creation of original content that is engaging or funny, meeting the many niches that can be served through the networking powers of the web. There are so many tools that can be used for this objective, and so many ways to reach established communities. With the right product, a company can find a community that will see value in its message. But first it must court that community by embracing its rules and conventions, putting economic self-interest on the back burner in exchange for cultivating a long-term relationship. The value of content is perceived differently by different audiences, and thus maximizing the value of your products means finding the right audiences and delivering content exactly when and where they want it, in the form they are most likely to respond to and participate in.
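To make those search and accessibility basics concrete, here is a minimal sketch, not a production tool, of how a page could be checked for a title, a meta description and ALT text on images using only Python's standard library. The file name "homepage.html" is a placeholder assumed for the example.

from html.parser import HTMLParser

class BasicSeoCheck(HTMLParser):
    """Collects the on-page basics discussed above: title, meta description, ALT text."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.has_meta_description = False
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and (attrs.get("name") or "").lower() == "description":
            self.has_meta_description = bool(attrs.get("content"))
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1   # images a screen reader cannot describe

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

if __name__ == "__main__":
    checker = BasicSeoCheck()
    with open("homepage.html", encoding="utf-8") as f:
        checker.feed(f.read())
    print("Title:", checker.title.strip() or "MISSING")
    print("Meta description present:", checker.has_meta_description)
    print("Images missing ALT text:", checker.images_missing_alt)

Run against a homepage, it gives a quick sense of whether the basics for search engines and screen readers are in place before deeper optimization work begins.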



Best of my blog

Push marketing has survived every new development in media over the past century, the Internet included. Instead of just interrupting your reading, ads started interrupting your listening, then your viewing, and now your web browsing. But one thing has changed. Now you can push back. Two-way conversations and relationship-building dialogue were the underlying themes of all three fantastic research presentations I heard recently on web marketing. The details differ, but the overall mantra is the same: talk to your customer base and also listen to what they have to say. My fellow classmates sum up this philosophy much better than I can, and I highly recommend the online versions of their presentations. You can find David Hollander's presentation here, Cathy Freeman's here and David Parsons' here.


It all makes the process of talking with consumers seem so engaging, so uplifting, so affirming. Now try it as the voice behind college football's loathed Bowl Championship Series, the entity standing in the way of the playoff system so many fans passionately want. It isn't pretty, as the BCS' brand new Twitter account, INSIDEtheBCS, demonstrates. As it touts the benefits of the bowl system and the flaws of a playoff format (they even created a website dedicated to bashing playoffs), the feed is clearly meant to convince some fans that having polls decide who plays for the national championship isn't such a bad idea. What's happening instead is that the BCS' many enemies have a place online to rally. Try searching insidetheBCS on Twitter and you'll come across the barrage of negative comments lobbed against an institution most fans feel is standing in the way of fairly crowning a national champion. But does that make the attempt a failure? Whoever is manning the BCS account has taken the time to respond to many of the negative posts since the feed started a couple of weeks ago. This is exactly what marketers are supposed to do with social media, as criticism comes with the territory. It's in addressing the criticism and winning over new converts that social media marketing has its value, and the jury is still out on whether the BCS will win in this regard. Having the BCS actively respond to proponents of a playoff is much more endearing than conference commissioners arrogantly proclaiming on network TV that the current system must stand. But it's doubtful the BCS cares at all what its Twitter followers have to say. Despite heavy media and fan pressure, BCS officials have shown zero interest in a new format. Unless they're taking input into account for possible changes to the bowl system, this is a social media effort that's all talk. If you're going to stick with traditional push marketing tactics, there's not much use for new media. That's a forum best saved for those eager and willing to act on their audience's input.



As kids we identified far-off countries by flags. As teens we picked them out based on geographic shape and location.



But as adults, our identifying image for a country overseas could soon become its homepage. We already instinctively seek official websites for companies, organizations and individuals. Nations surely aren't far behind. So what are they showing us? It's quite a mixed bag, and the results are interesting enough that they were the basis of some really insightful research by one of my best professors this semester. They're also fodder for some much-deserved criticism by web designers. This immensely entertaining blog post got me thinking about the topic, as it lines up government websites from around the world and scrutinizes the flaws of each. There are some really puzzling examples. Why is it always fall in Cyprus? Must people be blurry in Greece? Do the French really think red and purple make a good color scheme? But there's more to analyze here than just aesthetic design. Approach the sites from a public relations standpoint, and you can see how different nations have very different goals for their web portals. Nations like Poland, Denmark, Israel and Singapore have appealing sites that seem aimed at attracting new visitors and outside investment. On the other end of the spectrum, you have sites where informing native residents appears to be the primary goal. This can be done with style, as in Belgium and Australia, or with visual clutter, as in Mexico or Cameroon. You could even be like the United Kingdom and come up with a color scheme and design that has no visual connection to your actual country. There's also something to be said for simplicity. It's easy to criticize nations like Ireland, Thailand and South Africa for their bare-bones design. But keep in mind that these nations have large rural populations. A simple site may be uninteresting, but at least it properly loads on old browsers or dial-up connections. Argentina even found a way to make simple look stylish. What's most important is that the site have a public relations purpose, whether that's attracting outside attention or informing the taxpayers. Effective design only comes about when the nation is clear on this goal.



Otherwise you can turn up some pretty ghastly results even in relatively wealthy countries. For all their oil money, the governments of Saudi Arabia and Russia still can’t seem to buy a website that doesn’t make them look like third-world nations in cyberspace.

The next time some company starts fretting that an interactive approach to communications puts the integrity of their brand at risk, tell them about Mickey Mouse. It's hard to find a more carefully managed corporate brand than Disney, and no symbol is more entwined with the company than its most popular cartoon character. Through the decades Mickey has served as both a corporate logo and a company spokesperson, a squeaky clean character whose image and look remained untouched. That's about to change. Mickey is about to change. And video game enthusiasts can make the changes. As part of an ambitious plan to overhaul Mickey's image and make him more relevant in today's pop culture, Disney has signed off on the development of a new game for the Nintendo Wii where players have tremendous freedom in the actions and ethics of the protagonist. The game
takes place in a world of retired and forgotten Disney cartoon characters, a realm that Mickey inadvertently stumbles upon and damages. There are some elements of a traditional action game, but the real focus is on interactivity with the surroundings. Using the Wii remote, players paint and undo the structures and fabric of Mickey's physical environment. This offers extraordinary control not just over the game's content and direction, but also over the personality of its iconic protagonist. "The core of this game is the idea of choice and consequence, and how that defines both the character and the player," designer Warren Spector said during the game's roll-out last month. "By putting the mischievous Mickey in an unfamiliar place and asking him to make choices — to help other cartoon characters or choose his own path — the game forces players to deal with the consequences of their actions. Ultimately, players must ask themselves, 'What kind of hero am I?' Each player will come up with a different answer." Sounds like Mickey will be getting a makeover. It really shouldn't come as too much of a surprise that Disney is embracing the user choice and control that are the hallmarks of interactive media. For over a decade they've had an interactive theme park. They've even embraced the idea of streaming their movies and television programs online. But what's really notable here is the trust Disney has placed in opening up a once untouchable icon to reinterpretation by users. That kind of trust is mandatory for companies utilizing the new media landscape. But the payoff of having consumers actively shape and engage with your logo holds enormous potential for future profits. If Mickey Mouse can shoulder that risk, then no brand is off-limits.



When it comes to new media, I'm often just ahead of the curve (joined Facebook in late 2004, Twitter in early 2008). But when it comes to social bookmarking of the kind facilitated by Delicious, I never grasped the purpose. My loss. Having just now discovered Delicious and started putting it to good use, the web has suddenly become a more manageable, easier-to-navigate place. The bottomless pit of information online, hard to wrap my brain around at first, is given order by the self-made categories created through the abundant use of tags. Let's back up for those unfamiliar with the program. It's a web-based tool that, like Twitter, is deceptively simple. You sign up for a free account and download an application to embed into your web browser. Every time you visit a site, you click the icon on your browser and enter a few tags (one-word descriptions of the content). Then save. That's it. What comes next is where Delicious proves its worth. My classmate Linda Misiura has already given a ringing endorsement more eloquent and enthusiastic than I can muster. So instead I'll give you a few reasons why bookmarking pays practical dividends.



1. It preserves the best of the web: We all come across hundreds of websites we enjoy over the course of the year, 90 percent of which we'll never be able to find again. But tag those sites with Delicious, and you can recall them in seconds to enjoy or share.
2. It keeps us from overlooking something useful: Links providing information related to our jobs or hobbies often pile up on busy days. Instead of ignoring them because there's no time, we can quickly tag them instead. If their subject matter is needed for a task at a later date, we'll know where to find them.
3. It's a search alternative to Google: Using Delicious isn't a solo act. The site is storing the tags of others, and searching those tags is a great way to find new sites on an obscure topic. You can even subscribe to a tag and get regular updates every time it's applied.
4. It's a more efficient way to share information: Do you have friends who clog up your Twitter or Facebook feeds with endless streams of links? What if they all used Delicious instead? The program has a social networking component, so you can see what your friends are tagging on the subjects you want to learn about.
5. It expands the way we seek knowledge online: It doesn't take much tagging before it becomes a habit. At that point, you're not just reading websites, you're categorizing them. It's one thing to consume content; it's quite another to recognize key themes and unique perspectives within a broader discussion. Tagging is a wonderful exercise to build the mental muscles needed for the job.

Want to learn more? Check out my Delicious page and click on the sites I've tagged with "delicious." Each offers a distinct take on how tags make navigating the web all the more gratifying.



A year ago, I never expected dialogue with my favorite reporters. Now it's expected. The rules all changed once traditional media outlets began embracing social media platforms like Twitter and Facebook. Some resisted, thinking the "comments sections" after every story sufficed as feedback. But the smart ones realized that you can't have meaningful exchanges in virtual spaces where everyone is anonymous and most are emotionally overheated. Attach a name and face to the comments, however, and the civility and level of intelligence go way up. Actually respond to the negative posts, and what emerges is a general sense of camaraderie between reader and journalist even in disagreement. That's what makes Facebook fan pages, when done right, such great communities. Look up the New York Times or Washington Post and you get not only lists of stories but an honest dialogue on each. It works even better when you can break up your readership into segments, like Slate does with fan pages for each of its podcasts. These podcasts themselves feature top Slate reporters and drive traffic to the main site. So each fan page serves as a fun mini-community for political junkies, sports nuts or culture mavens. Post to the page, and you often have a host of the podcast respond to you directly. The pages got me hooked on Slate's podcasts and overall news site as well, since I have an open invitation to reach out and comment to the creative forces behind the content. This approach isn't limited to news. It works well for just about any company looking to actively bond with its customer base (and really, what company isn't?). The corporate world is beginning to catch on. A recent survey by PR Week of 271 marketers found that 63 percent use social media for their companies. Facebook emerged as the most popular tool, as
"connecting with customers" was the most common social media goal marketers listed as "very important." Interviews with these early adopters reveal a common theme of representing the company honestly and openly. Blatant sales pitches (or worse, sales pitches disguised as user-generated content) are highly frowned upon. The marketing executives who have had success in boosting their brand through social media did so by having productive exchanges with their customers, responding to feedback and taking it into account. It's not always pretty. The company on Facebook will hear a lot more negative comments than the one hiding behind a static website. But those comments will be said regardless. Only through social media can the company not only hear them but also respond, often solving the problem and building a long-term relationship at the same time.



Anyone still remember the dark days of dial-up? Back then logging onto the Internet meant a wait of 30 seconds plus, and every new website gave you enough time to grab a drink or use the bathroom while it loaded. We might think this level of (dis)connectivity is behind us for good. Most U.S. residents now live in areas where “high speed” access is available, often from multiple providers. But just because something qualifies as “high speed” doesn’t mean it’s fast. Average download speeds vary tremendously across the country. In some cases it’s because there’s simply not a good network in place. Other times it’s because part of the state has very low population density and installing broadband is considered too expensive for private Internet service providers.



So where does your state stack up? The Communications Workers of America has tabulated download speeds from hundreds of thousands of tests. I took their data and made the map above to illustrate which regions have the fastest Internet access. Check out their site yourself for more detailed information, including breakdowns by county. This isn't just an issue of convenience. A relatively slow connection (regardless of whether it's called "high speed") dictates what users are able to do on the Internet. With U.S. speeds as a whole much faster than, say, five years ago, it's now common for websites to have embedded video, audio, high-resolution photos and animations, all of which take a long time to load. If your connection is slower than the rest of the country, you're effectively segregated in the tasks and services you can accomplish online. In other words, there's still a form of dial-up in spirit if your network is slow. Only this time around, others aren't waiting. They're getting things done at work and at home while you're just trying to upload a basic file. Fortunately there are a number of developments, some driven by profit and some by charity, that are bringing more residents into the fast lane. I discuss them in greater detail in the digital divide research paper I've just completed. But at least for now those with a speedy Internet connection should recognize that it's something for which to feel fortunate.
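For readers curious how a map like that comes together, the short Python sketch below shows one way raw test results could be averaged by state. The file speed_tests.csv and its columns are hypothetical stand-ins for this example, not the CWA's actual data format.

import csv
from collections import defaultdict

# Hypothetical input: one row per speed test, with "state" and "download_kbps" columns.
totals = defaultdict(float)
counts = defaultdict(int)

with open("speed_tests.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["state"]] += float(row["download_kbps"])
        counts[row["state"]] += 1

# Average each state's tests and list them from fastest to slowest.
averages = {state: totals[state] / counts[state] for state in totals}
for state, kbps in sorted(averages.items(), key=lambda item: item[1], reverse=True):
    print(f"{state}: {kbps / 1000:.1f} Mbps average download")

From a table like this, shading each state by its average is all the map itself really requires.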

If there's any governing mantra to the mishmash of advertising campaigns floating about the Internet these days it's this: don't make your ad look, sound or feel like an ad. Easier said than done. Consumers are highly attuned to commercial persuasion and are always developing new defense mechanisms against the bombardment of ads assaulting their senses every day. So marketers instead
make commercials entertaining, or they sneak product placement into scripted movies/television shows, or perhaps they just package the whole message as if it's legitimate news. This third tactic is perhaps the most effective, and disturbing. Many Americans, at least among older generations, were raised to trust that mainstream news sources are objective with regard to commercial interests and will only praise a product if it meets exacting standards. So when they see print ads made to look like newspaper articles or TV commercials made to look like news broadcasts, significant credibility is fraudulently conveyed. Maybe that's the thinking behind the Federal Trade Commission's newest regulations on bloggers touting a product. The guidelines stipulate that bloggers must disclose all ties to a company they write about, all the way down to any free samples they receive in order to review the item. Among certain demographics, blogs are now trusted sources of information. If a blogger is getting paid by a company in order to garner more favorable copy, the F.T.C. reasons, readers have a right to know. It's a fair point, but one open to scrutiny for its double standard. Both bloggers and the Interactive Advertising Bureau have lashed out at the regulations for only targeting blogs and not traditional media outlets. A blogger faces a possible $11,000 fine if he fails to disclose that a record label sent him a free CD that he reviewed. But if a music critic in a newspaper or magazine doesn't make the same disclosure, there's no punishment. And these critics get free goodies all the time. The regulations also open up a slippery slope of potential new restrictions. How are Tweets and Facebook postings, with severe constraints on content length, supposed to disclose biased reviews? And what about traditional media outlets that do glowing feature stories on a prominent advertiser? Why are they let off the hook for such highly deceptive behavior? It's the F.T.C.'s job to ensure truth in advertising. Sharpening the line between independent content and paid advertising is an important part of that mission and worthy of some new regulations. The problem comes in singling out bloggers as the only ones engaging in the shady practice. There are plenty of culprits to go around. In the end it will take a more savvy consumer and some more practical legal guidelines to unmask all these disguised ads.


Web 2.0 is often championed, and derided, for its power to reinvent or repackage yourself into a more appealing persona online. Our Facebook profiles, Twitter feeds and video game avatars are carefully managed collections of our real-world identities. We post information all the time, but usually only the things that project the personal image and experiences we want the world to see. But what if the age of "always-on" Internet took this trend of semi-fictional identities back in the other direction? What if it actively prevented the act of inflating our real selves, or put a halt to the time-honored tradition of spinning dull memories into lively stories? This would be the result if the ever-growing trend of lifecasting reaches its logical conclusion: cameras documenting and storing every moment of your life. As we learned in class today, this possibility is not far off with regard to technology. It will soon be affordable to own enough data storage space to house every conversation in a lifetime. Wearable devices with tiny cameras should also be available to the general public within the next few years, making it possible to record and then archive every single moment. Imagine how this would fundamentally warp the concept of memory. Last week my class visited a living museum to find physical symbols linking to our abstract memories. This week a different class teaches us that this whole process could be obsolete. Instead of unlocking past experiences in the deep corners of our minds, we just type in a time and date and watch the recorded footage word-for-word. An informal poll of the class indicated a strong aversion to this method. Why would we want to dull our personal history by recording it in all its mundane detail? Why would we want to ever rewatch those experiences?
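As a rough check on that storage claim, the arithmetic below assumes speech-quality audio at about 32 kilobits per second, 16 waking hours a day and an 80-year span; the figures are illustrative assumptions, not numbers from the class.

# Back-of-envelope estimate of storing a lifetime of conversation as audio.
BYTES_PER_SECOND = 32_000 / 8          # ~32 kbps speech-quality audio
SECONDS_PER_DAY = 16 * 60 * 60         # waking hours only
DAYS = 80 * 365                        # an 80-year span

total_bytes = BYTES_PER_SECOND * SECONDS_PER_DAY * DAYS
print(f"{total_bytes / 1e12:.1f} TB for a lifetime of recorded audio")

That works out to roughly seven terabytes, already within the range of a few consumer hard drives; continuous video would raise the figure considerably, but the trend points in the same direction.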



But already people record and share every bit of info they can on their children. When talking about our own youth, we often share a longing to relive a wonderful memory of a special moment, or go back and appreciate something we took for granted the first time around. Recording an entire life holds that potential, and it will be interesting to see how popular this option becomes once the technology makes it possible. With those recordings comes the security that no part of life will ever fade or be forgotten. But the interpretive quality of memory that makes it so romantic a concept in the first place would be unequivocally lost in the process.

Did my blog on lifecasting pique your excitement (or fears)? You're not alone. Many of my classmates also found the topic intriguing, posting their own views and info to the blogosphere. Karen Hartshorn views life-recording technology as inevitable, but also worries about the loss of privacy. Loss of privacy is also on the mind of Kenya Ford, who worries about the repercussions of having every moment tracked. Brynne Tuggle points out some of the positives, such as improved organization and better health care that would come with having every moment stored on a hard drive. Megan Lee also states some benefits (and touches upon the drawbacks) of what would become a completely transparent lifestyle. But there's a major drawback both David Kennedy and Matt Brown elaborate upon. That's the loss of storytelling and imagination in retelling our lives. Given the choice between our abstract memories and cold, factual accounts of every moment, which would we really prefer?



With circulation for print media outlets plummeting along with ad revenues, ideas abound on how to "save" the industry.
* Charge for content
* Go hyperlocal
* Cut staff
* Blog more
* Pretend you "get" Twitter

Here's the latest idea: replace ink with pixels. This week Entertainment Weekly is debuting a new chip that embeds video into a page of the print edition. It's an ad for CBS' new fall lineup, with around 40 minutes of video clips on upcoming shows, kicked off with a comical intro from the stars of The Big Bang Theory. Right now the cost of the chip is keeping the ad running only in major markets, but the technology is widely available and could very well pop up in other magazines in the near future. The instant association (well, at least for Harry Potter fans) is the moving pictures of the fictional Daily Prophet, a seemingly magical version of the newspaper that also mirrors what the movie Minority Report envisioned as the future of "print" journalism. We're not that far off from such a possibility, with innovations rapidly developing with e-ink that can transpose digital images onto screens that have the size and flexibility of paper. If you're a fan of Esquire Magazine, you're probably already familiar with this technology, as that publication used e-ink for the cover of its 75th anniversary issue last year. All these developments have a "wow" factor at first and attract immediate attention. But beyond the novelty, whether video in print succeeds depends
in large part on how closely it will mirror the website experience. In the case of a video chip touting CBS's shows, the end result is a low-quality version of something that can be just as easily accessed at a number of websites. Why watch standard video in a magazine when we all have multiple options for high-quality video at our fingertips? The e-ink developments have more long-term potential in that they provide multimedia content while maintaining the thin, foldable format that is the one advantage print media now has going for it over websites. Then again, as smart phones become more advanced and wi-fi networks ubiquitous, online multimedia is almost as portable as a rolled-up magazine. We may have to change our entire definition of "print," because there will soon be no reason to consume news and entertainment through old-fashioned ink on paper. But if we can go digital with a material that's just as easy to stuff into your carry-on bag, perhaps what we know as "print" can live on in the age of interactive media.



Separating speech from dialogue: A new way to visualize the interactivity of commercial websites
By Brook R. Corwin

For companies making their initial forays onto the World Wide Web, true interactivity is too often mistaken for passive dissemination of information. Simply putting print content up online does not create an interactive website; it merely recreates the static, one-way distribution of information that has been present for centuries. It neither encourages nor allows two-way conversations or a unique user experience, both of which are the hallmarks of online communication. A truly interactive website goes beyond what is commonly thought of as interactive, encouraging users to shape and control the content. These sites invite viewers to take ownership of the brand and start a productive dialogue in which the company actively listens and takes feedback into account. My COM530 group developed an info visualization that enables a company to see where along the interactivity spectrum its website falls. The Digital Dialogue Diagram, shown above, offers key questions for any site to determine whether it is merely a conduit
for static information or is instead a robust hub of interactivity. The examples below show the diagram in practice.

For this first example, we’ll take a typical department store retailer such as Sears or JC Penney. These major chains have websites with very static features. In answering the questions about audience participation and customization, the picture emerges of a website that can be edited somewhat for a more personalized shopping experience, but where the primary goal is simply to spread the retailer’s information of choice. There are no rating or comment systems that allow for user interactivity, and certainly no invitations for the user to contribute content.



This second example utilizes shopping sites such as Amazon or Overstock that compile a wide variety of products from different manufacturers. By filling out the questions of the diagram, these sites are revealed to be more interactive than those of department stores. They allow and encourage user reviews of each product, with a customizable interface catered to each user’s favorite types of products. There are still limits on the level of interactivity, however, as there is no opportunity to create content or start a dialogue with the company itself.



For this final example, the diagram looks at sites such as Threadless and Zazzle where users can create their own products and shop from those designed by other users. As the diagram shows, these sites are truly dialogic, as they encourage collaboration. Consumers aren't asked just to buy products, but also to make their own. The interface is built for new designs to be uploaded for purchase, and an active online community rates and comments on the different forms of user-generated content. It's a true form of interactivity, one that can be emulated by other sites hoping to reach the same level of dialogue online.
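The diagram itself is visual, but its underlying logic can be sketched in code. The snippet below is a hypothetical illustration rather than the actual Digital Dialogue Diagram: it counts how many of a few assumed yes/no questions a site satisfies and maps that count to a rough position on the interactivity spectrum.

# Hypothetical questions standing in for the diagram's criteria.
QUESTIONS = [
    "Can users customize or personalize the interface?",
    "Can users rate or comment on content?",
    "Can users contribute or create their own content?",
    "Does the company respond to and act on user feedback?",
]

TIERS = ["static conduit", "limited interactivity", "participatory", "truly dialogic"]

def classify(answers):
    """answers: one boolean per question, in order; returns a rough tier label."""
    score = sum(answers)
    return TIERS[min(score, len(TIERS) - 1)]

# A department-store site versus a user-design marketplace, per the examples above.
print(classify([True, False, False, False]))   # -> limited interactivity
print(classify([True, True, True, True]))      # -> truly dialogic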



Top ten iMedia readings

1. Groundswell
Groundswell wonderfully anticipated the rise of user-generated content and consumer ownership over brands. Its words still hold truth today on involving customers in two-way conversations where they have an active role in shaping the development and marketing of products. This represents a paradigm shift for many companies, and Groundswell lays the framework for this change.

2. If It Doesn't Spread, It's Dead
Content created in Web 2.0 doesn't live in a vacuum. It only dies in one. It either fades to anonymity or is spread to new audiences shortly after it is created, with every Internet user now equipped with tools to distribute their favorite content. This reading and its accompanying lecture outline this trend and demonstrate how it holds true online, important information for anyone creating online content.

3. Software Design Manifesto

Through much of the 20th century, software was designed only for computer experts. But as the PC was developed and the World Wide Web made the Internet available to the mainstream public, a need emerged for software with intuitive design accessible to all. Mitch Kapor was among the first to recognize this need in his “Software Design Manifesto,” which called for an integration of design principles into complex computer programs. This brought designers onto the same field as programmers, where they now play an essential role and new applications are developed with the goal of usability for a mass audience.

4. Tagging
Content on the web is organized not through a top-down but a bottom-up process, with standardized categories often shucked for personalized and user-generated classification systems. This concept is fully grasped and described by Gene Smith, whose book "Tagging" explains the systems by which content is organized and structured online. His theories explain how we navigate and share the web, an important concept to grasp for anyone who plans on connecting in cyberspace.

5. Imagining the Internet
Those with anticipatory knowledge of Internet trends and developments will be well ahead of competitors in their development of web-based content. Imagining the Internet forecasts these new trends and provides guidance on how technologies under
development will change the way we access and utilize the web in our personal and professional lives.

6. The Information Design Handbook
The abundance of data now available at the click of a mouse can quickly turn into information overload, cluttering the user's mind and desktop with content they can't organize or make sense of. This increases the importance of good design that communicates information in an easily understandable way. The Information Design Handbook beautifully explains and illustrates how all forms of information can be visually communicated, sharing principles that can be utilized for almost any kind of project.

7. Everyware
Computing is rapidly becoming ubiquitous, with the rise in mobile devices and high-speed data networks that make it easy to access the Internet from anywhere. This has profound social and cultural implications, as online information can start streaming and blending with the physical world. "Everyware" documents how this is possible through computer networks being placed inside objects familiar to the rituals of everyday life. These devices will mark a dramatic shift in the way information is received and utilized, leading to a new era of interactive media.

8. Socialnomics
Social media has transformed our lives, the way we interact, and the way we do business. Socialnomics documents the ways the new interactive media landscape can be utilized to an individual, organization or corporation's advantage. It is valuable for its demonstration of the right and wrong ways to leverage social media's full potential.

9. First Principles of Interaction Design
The computer interface has entered a new era of required accessibility across all platforms and audiences. New applications must adapt easily to mobile devices and be usable by a wide range of demographic groups. "First Principles of Interaction Design" documents the fundamentals of effective and intuitive interfaces for software or the web. Adherence to these principles can make the difference between the mass acceptance or rejection of a new website or web application.

10. The Language of New Media
The proliferation of interactive mediums has altered the way we communicate and the language we use. Understanding the culture of the web and the way value and reputation are formed is the first step in making sure an intended message is delivered in the proper context. Lev Manovich's "The Language of New Media" explains these cultural shifts within the virtual space, translating the way discourse and commerce online have evolved and where they are headed.



Top ten iMedia resources

1. Google Analytics
Among the most valuable aspects of creating online media is the way its viewership can be meticulously tracked. Google Analytics enables this level of detailed tracking for free, keeping tabs not just on how frequently a site is being visited but also where viewers are coming from, how long they're staying and what they're doing while on the site. This provides a valuable form of market research that is essential for conducting commerce online. (A brief sketch of how such exported data might be summarized appears after this list.)

2. Delicious
The abundance of information online can rapidly reach information overload without a personal system to categorize and store valuable websites immediately after they are visited. Delicious provides this organization, with a simple interface that lets users tag sites by whatever category has value and meaning for them. Each tag then puts the site in a corresponding bookmark folder for quick and easy access in the future. There are a number of excellent tutorials on ways to use Delicious for PR, for journalism or for personal interests.

3. TED
Amid the endless stream of opinions found online, some well-founded and some poorly informed, a source of genuine expertise is essential. TED lectures make the sharpest minds and most well-researched ideas and technology available to all. The lectures distill crucial information and demonstrate cutting-edge technology in an easily accessible format, making sure the ideas are not just heard but also understood.

4. Stumble Upon
Google searches can turn up valuable info, but sites with poor search engine optimization will rarely turn up even if they have outstanding content. Stumble Upon provides a new way to search for sites on topics of the user's choice based upon how they are rated by other users. Stumble Upon's formula for sending users to sites is based partially on random luck but in large part on whether each site was rated up or down by users. It provides a quick and easy way to find new informational resources for research on a particular topic without relying on Google.

5. WordPress
There is no shortage of tools for creating and publishing a personal blog, but WordPress takes the empowerment of bloggers one step further by offering an array of free widgets, designs and analytics that allow for the creation of entire websites. While wordpress.com enables the creation of blogs that can be personalized quickly and easily, wordpress.org
offers resources to create and host a custom site. These tools enable anyone with a voice and strong content to be heard on the web even if they don’t have a big budget.

6. Issuu
It can be difficult to translate printed materials onto the web. Documents, photo albums, newspapers and magazines are produced in specific formats for physical production, and their content doesn't read well as a simple PDF posted online. This is where Issuu's free services are valuable. The site hosts uploaded PDFs and displays them in a highly readable format where users can "turn" each page as if they were physically holding the document. This is a wonderful way to ensure that objects in print can reach broader audiences online.

7. Alexa
Sorting through the abundance of websites in each category takes a keen understanding of which have the most credibility. Alexa's wealth of information on Internet traffic provides quick snapshots of any site's popularity for quick comparison to competitors. It breaks down this information geographically and chronologically for more in-depth knowledge.

8. Kuler
Color placement and usage are critical components of all good design. The Internet and self-publishing have made such fundamental design knowledge a prerequisite for all different types of communicators, and Kuler helps tremendously in this regard. The site provides an array of user-generated color schemes, complete with feedback, to give even amateur designers ideas on how to combine colors for the desired effect. This helps make a website or blog instantly more appealing and even credible by giving it a balanced, professional look.

9. Ning
Popular social networking sites such as Facebook and Myspace have numerous uses, but their mass appeal makes them at times difficult to adapt for use by a niche audience. Ning fills the void by allowing users to create and join their own social networks customized for a specific group or interest. This can leverage the communicative power of the Internet to connect people with like-minded views, locations or passions.

10. Smashing Magazine
There is no shortage of information-rich publications covering the tech industry, but Smashing Magazine stands out by providing endless tutorials, tips and advice on how to apply the latest tools in media for specific objectives. By going to its site or following it on Twitter, communications professionals get a steady stream of useful information that helps them recognize and utilize the latest trends and best practices in interactive media.
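Picking up the Google Analytics entry above, here is a hedged Python sketch of how exported visit data could be rolled into a few of the audience measures discussed earlier, such as bounce rate and pages per session. The file sessions.csv and its column names are assumptions for the example, not Google Analytics' actual export format.

import csv

# Hypothetical input: one row per visit, with "pages_viewed" and "seconds_on_site" columns.
sessions = []
with open("sessions.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions.append((int(row["pages_viewed"]), int(row["seconds_on_site"])))

total = len(sessions)
bounces = sum(1 for pages, _ in sessions if pages == 1)    # single-page visits
avg_pages = sum(pages for pages, _ in sessions) / total
avg_time = sum(seconds for _, seconds in sessions) / total

print(f"Sessions: {total}")
print(f"Bounce rate: {bounces / total:.0%}")
print(f"Pages per session: {avg_pages:.1f}")
print(f"Average time on site: {avg_time / 60:.1f} minutes")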



Top ten iMedia issues

1. Augmented Reality
The rise in mobile devices and high-speed data networks is making computers ubiquitous in everyday life and the Internet accessible from almost anywhere. This has profound social and cultural implications, as online information can start streaming and blending with the physical world. "Everyware" documents how this is possible through computer networks being placed inside objects familiar to the rituals of everyday life. These devices will mark a dramatic shift in the way information is received and utilized, leading to a new era of interactive media. Through augmented reality, it may eventually become hard to differentiate between the real world and the virtual world.

2. The Digital Divide
Related closely to the knowledge gap theory, the digital divide refers to the considerable chasm in technological access and education across the globe. As more information is diffused into society, higher socioeconomic status groups are gaining more knowledge compared to those in lower socioeconomic status groups, creating a widening gap in information levels. Without a stronger commitment to expand Internet access and improve computer education programs, advancements in information technology for some won't necessarily translate to greater knowledge for all.

3. Spreadability/stickiness
The way information, news and entertainment is distributed has been redefined by the way Internet users share content on a mass scale. Henry Jenkins' lecture "If It Doesn't Spread, It's Dead" outlines the way content either fades to anonymity or is spread to new audiences shortly after it is created, with every Internet user now equipped with tools to distribute their favorite content. This has redefined how content is replicated and multiplied through the Internet, with implications for anyone who hopes to build an audience online.

4. Open source development
The pace of technological advancement has been accelerated to a great extent by open source development, in which computer code is shared and improved over the Internet. This remains the most democratic way of spreading knowledge and ensuring that new software is accessible to all, but there are competing movements to develop proprietary software instead and limit the sharing of knowledge on its design.



5. Folksonomy
Content on the web is organized through a bottom-up process, with standardized categories often shucked for personalized and user-generated classification systems. This concept is explained through folksonomy, a system in which users create and manage tags to organize content. Folksonomy explains how we navigate and share the web, an important concept to grasp for anyone who plans on connecting in cyberspace. (A small sketch after this list illustrates the idea in miniature.)

6. Net neutrality
In some parts of the world, heavy regulation or censorship of the Internet blocks access to certain sites or content, and network operators can also choose to block or slow particular kinds of traffic. Net neutrality is the principle that all lawful content should be treated equally on the network, ensuring that everyone with Internet access has the ability to reach the same body of information. In order to ensure that no demographic group is left behind by technological advancement, it is imperative that such restrictions on Internet use are lifted and that the full spectrum of web content is available to all.

7. Multiple identities
The rise of avatars for use in virtual worlds, online gaming and social media represents a dramatic shift in our personal identities. These avatars may or may not represent our physical selves, and in the online media landscape it is possible to create multiple avatars very different from one another. This raises the question of whether those using interactive media will do so with a persona starkly different from their real-world selves, and how those multiple identities will be managed and utilized in different situations.

8. The Gift Economy
The transfer of information from physical to online form has changed the relative value of many items and services. It is now more difficult than ever to contain intellectual property online. Monetizing content requires a new system of exchanges. The gift economy is one potential model taking shape, where services and information are exchanged in a spirit of reciprocity and online reputation and knowledge equate to wealth.

9. User generated content
The news and entertainment we access on a daily basis no longer comes exclusively from professionals in those industries. A greater and greater percentage is user-generated, developed using new software and web tools and then shared through a robust distribution system online. User generated content now plays a critical role in shaping any marketing campaign or driving any news source. The companies and organizations that most effectively tap into this pool of talent stand to gain the most in the Web 2.0 and Web 3.0 eras.



10. Internet privacy
Less and less personal information is being stored in physical form or even on personal hard drives, as this data is now migrating to the Internet, where its security and accessibility are under question. Internet privacy is a rising concern, with so much sensitive information in easily accessible places. Whoever has control over this information will hold tremendous power, and what steps governments choose to take to protect privacy will have profound implications for all aspects of society.
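Returning to the folksonomy entry above, the tiny sketch below, with invented bookmarks and tags, shows how a bottom-up index emerges simply from aggregating whatever tags individual users choose.

from collections import defaultdict

# Invented examples: each bookmark carries whatever tags its saver found meaningful.
bookmarks = [
    ("http://example.com/recipe", ["cooking", "dinner"]),
    ("http://example.com/css-tips", ["webdesign", "css", "reference"]),
    ("http://example.com/tagging", ["folksonomy", "reference"]),
]

index = defaultdict(list)          # tag -> list of URLs filed under it
for url, tags in bookmarks:
    for tag in tags:
        index[tag].append(url)

# Looking up a tag recovers everything the community filed under it.
print(index["reference"])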

Top ten iMedia info visualizations

1. The Experience Cube
Nathan Shedroff's Experience Cube, presented as part of his Unified Theory of Design, evaluates different forms of communication based upon their elements of creativity and user control. It separates everyday ways of communicating by the level of interactivity they demonstrate, giving the viewer a firm idea of the concept of interactivity. This visualization was developed before many current forms of high-tech communication had emerged, but it still provides a framework where new mediums can be inserted.

2. Ruder Finn Intent Index The uses and gratifications theory, which identifies how people are motivated to use particular communications tools to meet particular needs, represents a key aspect of understanding interactive media today. It is crucial to recognize why people go online and what their intent is while in cyberspace. The Ruder Finn Intent Index converts data on why people go online into an attractive chart of the web’s most popular uses. This allows us to see patterns in Internet use and devise new web-based content and tools that will find a following.

3. The Conversation Prism The abundance of web-based tools can seem overwhelming and hard to grasp. The Conversation Prism imposes order on this interactive environment by grouping such tools based on function. The graphic maps the multitude of competing online services for achieving interactive objectives, making it easy to see how the web can carry two-way conversations from the real world into cyberspace.

4. Power Law of Participation It is important to recognize when evaluating interactive media that not all kinds of participation are equal in impact. Some site visitors only lurk, while others add to the conversation and still others collaborate to create new content. Ross Mayfield’s Power Law of Participation categorizes these forms of online participation into levels of low and high engagement. The visualization demonstrates which actions online lead to collaborative intelligence, providing a guide on how to expand knowledge online (a rough sketch of this kind of engagement ranking appears after this list).

5. The Forrester Research uses and gratifications chart Different generations take widely different approaches to using the Internet, each with its own set of tasks accomplished online. Forrester’s uses and gratifications chart is valuable in that it breaks down online actions by age, demonstrating which generations are most likely to participate in different networks or create new content. This is very useful when trying to reach a particular demographic, since it provides a revealing snapshot of their online behavior.

6. A Periodic Table of Visualization Methods There are many ways to present complex information visually. The Periodic Table of Visualization Methods provides a useful overview of the different methods used in info visualizations, complete with an example of each method in practice. It is a valuable resource for creating visualizations by following models that have already proven effective.

7. Throughout the Day on the Web Timing is everything when communicating online. The Throughout the Day on the Web chart on TechCrunch displays which periods of the day see the most robust activity on the Internet. It breaks this activity down by type of task, making it easy to spot when the average person online is most likely to engage with a particular web application. Recognizing these patterns matters for real-time communication, since they highlight when people are most receptive to new information.

8. User Experience Wheel Creating a site that provides an active user experience is a far more complex and involved process than producing static content for the web. The User Experience Wheel outlines the many facets of a positive user experience and the components that go into building a site around them. Using this wheel, a site’s creators can examine everything that must be in place to ensure active participation from online viewers.

9. The Life of an Article on the Web The Internet has drastically extended the shelf life and the potential audience of any document. A compelling item can be spread and shared for years after its initial creation, sometimes in an altered form or a new context. Elliance’s The Life of an Article on the Web info visualization demonstrates this concept by illustrating the paths articles can travel through cyberspace. It notes the many different routes an article can take on its way to consumption, routes that can reach audiences the creator never originally intended.

10. Web 2.0 Framework The outcomes of online communication in Web 2.0 differ from those of the original World Wide Web, and the process of getting there is also unprecedented. The Future Exploration Network’s Web 2.0 Framework breaks down this process, detailing the various inputs and mechanisms that lead to recognizable outcomes. This is important for users new to leveraging Web 2.0 for commercial purposes, as it illustrates the path their content must travel in order to find a viable audience online.
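
Returning to the Power Law of Participation (item 4 above), the sketch below shows one rough way online actions might be ranked from low to high engagement and used to score individual users. The specific actions, numeric weights and threshold are assumptions made for this illustration, not Mayfield’s exact scale.

# Rough, illustrative ranking of online participation from low to high
# engagement, in the spirit of Mayfield's Power Law of Participation.
# The action list, weights and threshold below are assumptions for this sketch.
ENGAGEMENT_WEIGHTS = {
    "read": 1, "favorite": 2, "tag": 3, "comment": 4,
    "share": 5, "write": 6, "collaborate": 7, "moderate": 8,
}

def engagement_score(actions):
    # Sum the weights of a user's recorded actions.
    return sum(ENGAGEMENT_WEIGHTS.get(a, 0) for a in actions)

def engagement_level(actions):
    # Split users into the low- and high-engagement ends of the curve.
    return "high engagement" if engagement_score(actions) >= 10 else "low engagement"

lurker = ["read", "read", "favorite"]                      # score 4
contributor = ["read", "comment", "write", "collaborate"]  # score 18
print(engagement_level(lurker))       # low engagement
print(engagement_level(contributor))  # high engagement

Even a crude scoring scheme like this makes the visualization’s central point visible: most visitors cluster at the low-engagement end, while a small minority perform the high-engagement actions that produce collaborative intelligence.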

Top ten iMedia theories 1. Unified Field Theory of Design Interactive media has opened up many potential forms of communication beyond what was possible with static print. That has created a need for a theory connecting the various forms of media to create a user “experience” that holds value and inspires repeat viewings. Nathan Shedroff’s Unified Field Theory of Design wonderfully anticipated this need for the organization and structure of information to effectively transmit not just data but wisdom.

2. Social Network Theory “Social network” is now a ubiquitous term on the web, making it all the more imperative to understand the theory behind interconnected social circles. Social network theory examines the structure of individual relationships, attempting to expose how ties develop and to illuminate the ways in which those ties affect social spaces. So many of our choices online are dictated by trends and recommendations within our social circles, making this an important body of research.

3. Uses and gratifications theory The Internet can now deliver a multitude of life experiences previously available only in the physical world. That makes it more important than ever to grasp the uses and gratifications theory, which identifies how people are motivated to use particular communications tools to meet particular needs. With the rise of virtual worlds, all but the most basic physiological needs could potentially be met online, and some will utilize the web to find social gratification or even self-actualization.

4. Knowledge gap theory Proposed in the 1970s, knowledge gap theory holds that as more information is diffused into a social system, higher socioeconomic status groups gain knowledge faster than lower socioeconomic status groups, widening the gap in information levels between them. This is especially relevant today, given that the pace of technological advancement is so rapid that new mediums are constantly being introduced. The theory ties closely into the digital divide, explaining why advancements in information technology for some won’t necessarily translate into greater knowledge for all.

5. Diffusion of innovations theory Interactive media is constantly producing new tools and web-based applications for use in daily life. But many take years to catch on, and even the most popular today, such as Facebook, once had only a small following. This pace of adoption is explained by the diffusion of innovations theory, which categorizes people into five groups with different levels of willingness to accept new ideas and technologies: innovators, early adopters, early majority, late majority and laggards. Understanding these categories helps us predict the speed with which new media catches on with the public, or how long it will take for new technology to be integrated into daily life (a rough illustration of the categories appears after this list).

6. Spiral of silence theory Online communication, in which opinions can be voiced anonymously and shared in real time, often provokes fiercely strong reactions. This makes it important to understand spiral of silence theory, which examines how unpopular opinions often go unexpressed because of a desire to fit in and not be isolated from the group. The minority opinion then slips down the spiral of silence and out of public consideration.

7. Agenda setting theory The media don’t tell people what to think, but they do tell people what to think about. With so many different kinds of media now available at everyone’s fingertips, agenda setting theory takes on added importance. Increasingly in question is the role of new media technologies that fragment media influence and give power to individuals through wikis, blogs and video-sharing sites. New media will still frame key issues for consumers, but in a far more disparate and diverse way than before.

8. Propaganda theory Propaganda theory was once tied to widespread media campaigns, often associated with totalitarian governments. But today, as campaigns are user generated and come from many different kinds of sources, it is important to recognize the various components of propaganda and how they can be applied toward worthy goals. The seven common devices of propaganda, as identified in “The Fine Art of Propaganda,” are the following: name calling, glittering generality, transfer, testimonial, plain folks, card stacking, and bandwagon.

9. Media richness theory Today it is inaccurate to lump all kinds of media together. Some offer a robust, interactive experience that belongs in an altogether different category from static forms of media. Media richness theory recognizes this discrepancy, assigning higher value to mediums with more interactivity. It points out the greater effectiveness of communications that are two-way and conversational, as opposed to mass mailings or impersonal documents not addressed to a specific audience.

10. Media ecology theory So many forms of media now exist on the Internet that it is crucial to grasp how they interact with, complement and contradict one another. The Media Ecology Association, which advances the ideas of Marshall McLuhan, focuses on the interplay between different mediums. It identifies hundreds of mediums and groups them into categories, studying which support one another and which depend on one another for survival.
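
To make the five adopter categories from the diffusion of innovations theory (item 5 above) more concrete, the sketch below sorts a population from earliest to latest adopter and applies the cumulative proportions commonly cited from Rogers’ research (2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, 16% laggards). The user data is invented purely for this example.

# Illustrative sketch: assign adopters to the five diffusion-of-innovations
# categories using the commonly cited cumulative proportions.
# The "users" list below is hypothetical, created only for this example.
CATEGORY_CUTOFFS = [            # (cumulative share of adopters, category)
    (0.025, "innovator"),
    (0.16,  "early adopter"),   # 2.5% + 13.5%
    (0.50,  "early majority"),  # + 34%
    (0.84,  "late majority"),   # + 34%
    (1.00,  "laggard"),         # + 16%
]

def categorize(adoption_order):
    # Given users sorted from earliest to latest adopter, label each one.
    n = len(adoption_order)
    labels = {}
    for rank, user in enumerate(adoption_order, start=1):
        share = rank / n
        for cutoff, category in CATEGORY_CUTOFFS:
            if share <= cutoff:
                labels[user] = category
                break
    return labels

users = [f"user{i:02d}" for i in range(1, 41)]   # 40 hypothetical users, earliest first
labels = categorize(users)
print(labels["user01"])   # innovator
print(labels["user05"])   # early adopter (rank 5 of 40 = 12.5%)
print(labels["user40"])   # laggard

Viewed this way, the theory’s practical value is clear: knowing roughly what share of an audience falls into each category suggests how quickly a new tool or medium can be expected to move from a small circle of enthusiasts to the mainstream.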

Top ten interactive media thinkers 1. Tim Berners-Lee The Internet began as a collection of networks accessible only to those with advanced computing infrastructure and knowledge. Tim Berners-Lee played a huge role in democratizing the Internet and is now credited as the father of the World Wide Web. He had the foresight to create the first web browser, which turned the data of the Internet into a navigable form users could control, setting in motion the path toward the robust interactive media we enjoy online today.

2. Marshall McLuhan McLuhan’s research into communications theory came during the 20th century, when just a few mediums dominated the way people received information. Yet McLuhan had the foresight to envision a complex media ecosystem in which different forms of communication buttressed and supported one another. This work led to media ecology theory, a field of study highly applicable to how different mediums coexist in complex environments today.

3. Douglas Rushkoff The spreadability of media has redefined the way we receive and share content. Rushkoff showed keen insight into this back in 1994 when he authored “Media Virus,” a book describing the way ideas and messages can spread online, often without the creator’s consent. His work helped popularize the concept of memes, which lies behind much of the content that goes viral on the web today.

4. Mitch Kapor In the early days of the Internet, software was designed only for computer experts. But as the personal computer spread and the World Wide Web made the Internet available to the mainstream public, a need emerged for software with intuitive design accessible to all. Mitch Kapor was among the first to recognize this need in his “Software Design Manifesto,” which called for the integration of design principles into complex computer programs. This brought designers onto the same playing field as programmers, where they now play an essential role as new applications are developed with usability for a mass audience in mind.

5. Henry Jenkins The mass sharing of content by Internet users has redefined the way information, news and entertainment are distributed. Henry Jenkins was among the first to recognize this trend. His lecture “If It Doesn’t Spread, It’s Dead” outlines how content either fades into obscurity or spreads to new audiences shortly after it is created, now that every Internet user is equipped with tools to distribute their favorite content. In the process he has helped redefine how content is replicated and multiplied online.

6. Gene Smith The organization of information on the web is very much a bottom-up process, with standardized categories often set aside in favor of personalized, user-generated classification systems. Gene Smith describes this concept thoroughly in his book “Tagging,” which explains the systems by which content is organized and structured online. His work explains how we navigate and share the web, an important concept to grasp for anyone who plans on connecting in cyberspace.

7. Adam Greenfield The age of ubiquitous computing is rapidly approaching, with the rise of mobile devices and high-speed data networks making it easy to access the Internet from anywhere. This has profound social and cultural implications, as online information begins streaming into and blending with the physical world. Adam Greenfield explains how this trend is unfolding in his book “Everyware,” which documents how computer networks will be embedded inside objects familiar to the rituals of everyday life. These devices will mark a dramatic shift in the way information is received and utilized, leading to a new era of interactive media.


8. Tim O’Reilly The government and political ramifications of a free and accessible Internet are profound. Tim O’Reilly is at the forefront of these issues, leading government into the Web 2.0 era and tirelessly advocating for free software and open source development. O’Reilly has a firm grasp on how Web 2.0 will alter commerce and politics and has a knack for spotting trends in the development of interactive media. His publishing company, O’Reilly Media, spotlights some of the most innovative ideas and thinkers of our time.

9. Vannevar Bush Few in the first half of the 20th century had the foresight to imagine a network like the Internet, where billions around the world could access and share information, but Vannevar Bush came very close to predicting the concept when he described the “memex” in his 1945 essay “As We May Think.” The theoretical device stored information from the full spectrum of human knowledge and allowed users to call it up at the press of a button. This provided an important precursor to the research that later produced the Internet we enjoy today.

10. Nathan Shedroff In the early 1990s, long before interactive websites became the norm, Shedroff anticipated the importance of a unified design approach that turns data and information into tangible knowledge through a memorable user experience. Shedroff proposed an entire theory dedicated to this objective, and his research continues to weigh different forms of media, from passive to interactive, to determine how they can best coexist.



Theory and Audience Analysis