Table of Contents

Abstract

Part I: Where Digital Technology Came From, How and Where It Spread, and Its Psychological Effects

Chapter I: Origin Myths of Digital Technology
    ELIZA, The Genie Is Out
    Research Dollars and The 'Demo Or Die' Ethic
    Defining The Ambiguous Information Society
    IT and Its Effects On Labor In Industrial and Pre-Industrial Countries
    The Dark Side of The Information Society

Chapter II: How We Overcame Our Fear of the Computer and Learned to Love the Network
    The Brain Network – Banishing Meatloaf Theory
    The Web – The Extension of The Brain
    The Tell-Tale Statistics
    What Cost? Loving the Web
    Tech Du Jour – New Social/Political Uses For The Web

Part II: The Web Invasion and Controversy – One Instance of How the Mass Media Failed to Realize the Promise of the New Technology

Chapter III: The Intellectual Property Controversy
    Appropriation and The Creative Process
    Napster-Style File Sharing and The Beginnings of The IP Music Wars
    The Darknet – The IP Underground
    Litigation and the RIAA
    DRM – Digital Rights Management Software

Chapter IV: The New Media Case Studies
    The Creative Commons – A Small Step Towards Restoration of The Public Domain and The Resolution of Conflict Between Big Media and The Public
    Wikipedia – A Structure for Democratic Media

Conclusions
Internet Usage Figures
Works Cited
Abstract

Part I shows how information technology (IT) has overwhelmed the employment dynamic in the international economy. Despite the social problems of technological advancement, there is an unrelenting pressure to research and develop IT without a significant understanding of its effects. Part I starts by examining the mythology and origins of the computer, the politics of research, the economic pressures that accelerated its development, and the proliferation of the resulting technical/political/economic "network." IT had its first substantial effect in industry, where the benefits of networked efficiency drove the rapid spread of a multi-layered global political/economic network. IT intensified postcolonial trade and created ripple effects at all levels of the international workplace. Part I identifies the effects of the early computer and network on the psychology of the individual, and how those effects came to be accurately assessed by social researchers. Part II shows anecdotally how corporate mass media and popular new media have collided in the economy, and how the established media corporations, which were quick to adopt IT, have not had the vision to adopt the Web and new media in their market strategies. Creative individuals, not established corporations, have invented viable democratic forums for new media on the Web. The new IT that was created and used by industry led to the creation of the World Wide Web. The proliferation of the Web into popular culture has been the equivalent of the IT invasion of the world economy. Just as IT has enabled business through the creation of multi-layered economic networks, the Web has facilitated the building of entirely new social/cultural/economic networks. New media is one of the building blocks of these popular culture networks. New media and the Web have created an entirely new balance of power between the
public and the corporate world, the latter traditionally represented by mass media. Part II concludes with examples of how two individuals have found imaginative new ways of creating a public forum for new media expression, forums that embody the elusive democratic ideology of the World Wide Web.
Part I
Chapter I: Origin Myths of Digital Technology

Digital technology was the result of engineers' need to perform basic tasks: to store and retrieve data, and to insert data into formulas and process it. The conversion of information into a numeric binary system of zeros and ones is thought to be derivative of ancient Eastern philosophy. The philosopher and mathematician Gottfried Wilhelm Leibniz linked the Hindu concept of dualism and the Chinese idea of yin-yang to the binary system as early as 1666 (Redshaw). Leibniz experimented with a binary system in an attempt to reduce thought to an easily read "universal language" (Redshaw). At the time, his idea was too esoteric to be used for more than quasi-mystical musings. Today, the same binary system is the core of the functioning of the modern computer. The digital binary system forms a true-false/on-off language for a computer's processor. The processor can alter or make perfect copies of any data set simply because it converts information into a reproducible language of zeros and ones. Often, the word digital is presented as the conceptual opposite of analog. For example, a ball rolling down a hill is analog; it descends in a smooth, indivisible motion. A ball rolling down stairs is the digital equivalent; it descends in discrete, incremental steps. The computer transforms analog phenomena into incremental mathematical steps. These steps are so small that they appear to have the same characteristics as the analog variation after which they were modeled. Once an object is converted to binary numbers, or digitized, the computer can alter, recalculate, or reproduce it as a set of numbers.
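The ball-on-stairs analogy can be made concrete with a short sketch. The code below is illustrative only and not drawn from the text: it quantizes an "analog" value between 0 and 1 into one of 256 discrete steps (an assumed 8-bit depth) and prints the string of zeros and ones a processor would actually store.

```python
# A minimal sketch of digitization: quantize a smooth "analog" value
# into discrete steps and render each step as binary zeros and ones.
# The 8-bit depth and the sample points are illustrative assumptions.

def digitize(value, levels=256):
    """Map a value in [0.0, 1.0] to one of `levels` discrete steps."""
    step = round(value * (levels - 1))
    return step, format(step, "08b")  # the step and its binary form

# Positions of a ball rolling smoothly down a hill, sampled at intervals:
for position in [0.0, 0.25, 0.5, 0.75, 1.0]:
    step, bits = digitize(position)
    print(f"{position:.2f} -> step {step:3d} = {bits}")
```

Because each sample is now just a number, the computer can copy, alter, or recalculate it without loss, which is the property the paragraph above describes.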
The computer is the tool that fulfills Leibniz's original attempt to use a binary code to translate complex systems of information into a universal language. This language enables the computer to mimic human thinking in a way to which most people were not accustomed. The idea of a machine that mimics and/or competes with the human brain and has the potential for autonomous action is both a wonder and a threat to human social well-being. Could a thinking machine become the next intellectual atomic bomb, or is it a faithful, ungrudging companion? The notion that a computer may be able to reproduce human thought threatens to confirm the hypothesis that thinking is a mechanical process, suggesting the possibility that the human brain could itself be a soulless mechanical device. This concept found its way into the postmodern fiction of the 1980s as the personal computer was becoming popular. The film Blade Runner (1982), a futurist drama, contained plot scenarios with dangerous super-human machines. Fear of the proliferation of digital technology and runaway technological development was further reflected in this film, where only complex psychological testing could differentiate humans from machines. Earlier examples of how new computer technology incited fear and paranoia exist. In the mid-1960s, a particular demonstration of human-like machine interaction made even the educated population apprehensive of unfamiliar emerging technologies.
ELIZA, The Genie is Out

In 1964, while working on a Defense Department-supported project at the MIT computer labs, Joseph Weizenbaum created a program called ELIZA. ELIZA was an
"artificial intelligence" project that could read written language and formulate a scripted response. Weizenbaum spent a few months assembling procedural code that would break down a subject's questions using grammatical rules and then respond with a given script. One such script was the "Rogerian psychotherapist." This popular program could be easily demonstrated to non-professionals, and copies of it were widely circulated to academic institutions. The program soon became known simply as "DOCTOR." DOCTOR had a very limited ability to sense contextual information about a subject and therefore no legitimate therapeutic value. However, some psychiatrists felt the program could be put to serious clinical use and expressed these opinions in trade publications. Weizenbaum was shocked at the response the program created and wrote, "I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it" (Wardrip-Fruin 370). At one point his secretary even asked him to leave the room so she could continue a private conversation with DOCTOR. Weizenbaum made the following statement to deflect techno-dystopic criticism: "For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself 'I could have written that'" (Weizenbaum). The corporate-sponsored MIT computer-technology think tank is still representative of much of contemporary research and development. Programs like DOCTOR are developed in a technical isolation that does not consider the social implications of technical development.
"At the MIT Media Laboratory...the academic slogan 'publish or perish' has been recodified as 'demo or die'... When we started the Media Lab, I kept telling people we must demo, demo, demo...Forget technical papers and to a lesser extent theories. Let's prove by doing."—Nicholas Negroponte (Lunenfeld). The program DOCTOR is one small but telling example of how digital technologies have the potential to affect humanity, and of how their development can occur in isolation from any understanding of the social effects these technologies can produce.
Research Dollars and The 'Demo Or Die' Ethic

Technical advances in information technology such as those created at the MIT Media Lab are financed through government and corporate donations. These advances can have significant effects on the industrial, political, and financial competitive abilities of every country in the world economy. The competitive flow of resources (money and commodities) throughout world markets creates a demand for ever-greater technical innovation. The effects of this innovation extend into every aspect of the economy. Technical innovation influences international communication, political processes, and world financial markets, and determines how economic advantage is distributed in the world economy. IT advances also create significant changes in the working environment, affecting all levels of employment. The MIT think tank remains representative of the research and development that goes on today: "So pervasive is the computer industry's demand for the demo that often, people, who have neither the technical facility to bear up under the
pressure, nor an appropriately performative personality, are drafted into demoing products (Lunenfeld)." The top 20 U.S. research labs spent $42 billion in 2004 alone (Winter). With the large amount of money at stake, it is understandable how a high-pressure atmosphere could exist in the research field. In this way, the demands of competitive industry have dictated the direction and speed of economic invention and adoption of information technology.
Defining The Ambiguous Information Society

The idea of a society subsisting on something as abstract as information is strangely inconceivable for a species whose very imperative is to eat, farm, hunt, gather, and sexually reproduce. The use of the phrase "information society" is understandably rhetorical, a reference to changes induced by the digital age as they contrast with traditional industrialization. The concept of the information society is the subject of controversy in academic discourse and a point of occasional polarization in popular culture. Rust Belt-inspired critique maintains that information can never be a substitute for farms, factories, and services. The concept begs explanation by the very juxtaposition of its two words. What does the term information imply, or fail to express, by its use or overuse in the popular lexicon or even in academic social theory? Today, would Karl Marx see information as a "means of production" or a commodity that possesses "exchange value"? The general meaning of the term information has for centuries remained largely unchanged. In the last thirty years, the term has acquired a radical new usage as digital technologies have become popular. Phrases such as "information overload", "information
amnesia", "information bombing (Valovic 60)," and "information society" are used with varying degrees of frequency. What is important to understand from the use of the term information society: is it a simulacrum, or does it represent some form of social or economic value? In his book The Rise of the Network Society, Manuel Castells attempts to define the term information by contextualizing it more specifically. He shows that information develops value when it can be exchanged rapidly, over distances, and when it is exchanged through technological networks to create economic productivity. The difficulty, as he states, is that information is not generally accounted for in statistics that measure economic production; it is not measured in terms of direct expense and income. Castells maintains that the first factor in determining global economic competitiveness is "technical capacity" (Castells 103), which includes science-based production, management, research and development, human resources, and most importantly, the networking of new technologies into a "science-technology-industry-society system" (Castells 103). Therefore, the terms "information society" and "information technology" become more significant when they are viewed as forms of "network." A network is not only a physical linking of computers, but also an integrated political/economic/infrastructural "system." Even though the effect of this system is difficult to quantify statistically, it is an inseparable part of the measure of economic productivity. Castells proceeds to link other factors such as efficient production, access to affluent markets, and internal political capacity (efficient government) to a country's productivity or "competitiveness," factors which are influenced by an effective network of technological capability (Castells 103). These concepts begin to explain how information is actually a technology and a system and not
just an abstract, intangible idea. This system is networked internationally, and through it, every form of economic and cultural productivity is enhanced. Further, businesses that have advanced information technology also have a greater capacity to innovate, correct errors, get feedback, adapt, and manage workers (Castells 243). Understandably, the pressure to obtain greater networked IT capability is an international economic constant. The system becomes self-expanding; it extends both locally and globally, financing itself through a technologically facilitated network of global capital. "Global competition triggered a technology/management race between companies all over the globe (Castells 242)." The networked global capital that finances the "techno-fication" of industry flows not only to businesses but also, in varying degrees, to the entire political, economic, and cultural infrastructure of a society. "From these networks, capital is invested, globally, in all sectors of activity (Castells 472)."
IT and Its Effects On Labor In Industrial and Pre-Industrial Countries

Understanding IT and its effect on economic culture gives only a mechanical perspective on the resulting global transition. The social effects are usually studied ex post facto as academic discourse and are not considered when new technology is being developed or implemented. Over the last 35 years, complex global cultural changes have been imposed on the structure of the labor force in industrialized countries. These changes, the result of increased IT capability, have been created by instant global communication and an integrated working coordination of monetary, political, and institutional bodies. Resulting workplace changes include: an increase in the proportion
of a part-time workforce; an increase in personal career fluidity; national and international social stratification of workers on the basis of skills and knowledge (with a corresponding increase in income inequality); an increase in service-related work and self-employment; a decrease of men and a corresponding increase of women in the workplace; a concentration of growth in manufacturing and agricultural industries in non-industrial countries; and a corresponding concentration of growth in managerial/informational/retail jobs, with a shrinkage of industrial/agricultural jobs, in industrialized nations, the latter being a result of the global off-shoring of jobs by multinational corporations (Castells 216, 231, 444). IT has facilitated rapid redeployment of assets, capital, and business resources, including human resources, to any international location (Castells). The incentive for this redeployment is the exploitation of low-cost unskilled production labor. Using IT resources, factories can be moved to low-cost locations with the added advantage of having little or no "problematic" labor or environmental regulation. More recently, the jobs that are subject to offshore outsourcing are increasingly on the higher end of the skills spectrum. Computer programming and technical/customer support jobs are particularly vulnerable to being contracted out to areas of the world with significantly lower labor rates. Business Process Outsourcing [BPO] will overshadow and incorporate IT outsourcing, and mainstream BPO expenditure is likely to grow worldwide by 10 percent a year, from $140 billion in 2005 to over $220 billion by 2010 (Statistics, Logica CMG). International outsourcing is not without its complications and, for now, the trend is likely to remain unpredictable. A survey of 25 large organizations with a combined $50 billion in outsourcing contracts found that 70% have had negative experiences with outsourcing projects and are now taking a more cautious approach. One in
four companies has brought outsourced functions back in-house, and nearly half have failed to see the cost savings they anticipated as a result of outsourcing (Statistics, Deloitte). Net loss due to offshore outsourcing of administrative-level jobs in industrialized nations is relatively small. The social impact in underdeveloped countries, however, is significant. There, rapid technological change and modernization is irrevocably altering third-world cultures, as the following quote shows: Said Jack Freker, president of the customer management group at Convergys, the nation's largest publicly traded outsourcing firm…the first time he went to India he was amazed to see women and children with buckets of rocks on their heads, removing fill from trenches for fiber optic cable. Now, just four years later, Freker sees women and children not with rocks, but with cell phones, strolling the streets and shopping at malls that have sprouted from vacant fields. Their cultures are undergoing wrenching change, but they are much wealthier (Inside Outsourcing).
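The BPO projection quoted earlier (10 percent annual growth from $140 billion in 2005) can be sanity-checked with a quick compound-growth calculation; this sketch simply replays the arithmetic and is not part of the cited source.

```python
# Replaying the cited BPO projection: $140 billion in 2005, compounding
# at 10 percent a year, should exceed $220 billion by 2010.

def project(start_billions, annual_rate, years):
    """Compound growth: start * (1 + rate)^years."""
    return start_billions * (1 + annual_rate) ** years

bpo_2010 = project(140, 0.10, 5)  # five growth years, 2005 through 2010
print(f"Projected 2010 BPO spend: ${bpo_2010:.1f} billion")
```

Five years of compounding gives roughly $225 billion, consistent with the "over $220 billion" figure in the quote.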
The Dark Side of The Information Society

The benefit of imposed industrialization has been the subject of controversy since the industrial colonization of Africa in the early 20th century. The ideological dialectic pits two points of view against each other: the benefit to third-world economic infrastructure and standard of living, versus the colonial dependency on world market-controlled cash cropping, resource extraction, and international debt service. With IT outsourcing, a neocolonial dependency on IT subcontracting develops. IT dependency, like the colonial commodities-production dependency of the past, is affected by the notoriously unstable prices of commodities on the world market. The net results, unstable incomes and a stratified workforce, are similar to the effects that have been experienced previously by the workforce in industrialized countries.
Sociological and cultural perspectives on rapid IT industrialization have been expressed by the activist/artist Keith Khan (co-creator of the play Alladeen, 2005), whose perspectives explore the cultural ramifications of globalization. Khan's reply to Freker's comments at a recent conference was: "What is shocking to me about this is the culture shock," he said. "Alladeen explores cultural eradication, which is happening in other countries as a result of the application of technology. Schizophrenia is one of the largest growth industries in Bangalore…" (Inside Outsourcing). International technical advancement in third-world countries mirrors the earlier changes of the industrialized world, though in the former case the rate of progression is greatly accelerated. The rapid cultural changes in the third world brought about by outsourcing will probably not be completely understood for years to come. Since underdeveloped countries have a higher poverty rate and often a more pronounced class structure, social stratification may be seriously intensified in these countries. The term "two Indias," found in popular culture, has become representative of the widening gap between the social classes in that country. This quote from Michael Carter, the World Bank's country director for India, underscores the difficult social conditions brought about by sudden economic development: "…the nation occupies two worlds simultaneously. In the first, economic reform and social changes have begun to take hold and growth has had an impact on people's lives. In the other, citizens appear almost completely left behind by public services, employment opportunities and brighter prospects.
Bridging the gap between these two Indias is perhaps the greatest challenge facing the country today (Statistics)." Regardless of the social costs of international outsourcing, it has hastened the international spread of both high-speed communication lines and the technological infrastructure that is necessary for the spread of network-based technology. India has the largest share of outsourcing services, and the level of these services has
risen very quickly, as the following quote from Ron Hira, Assistant Professor of Public Policy at Rochester Institute of Technology, demonstrates: …off shore outsourcing or off shoring extends well beyond the call centers that have gotten so much attention. India and other countries can now offer Internet based services that range from chip design to radiology to legal research (Program of). The high initial costs of IT infrastructure, research, and development are funded by corporate interests, the only sector where investment on that scale can be made. In industrialized countries, economies of scale have lowered the cost of computers and networking infrastructure. Lower costs have opened technology to the consumer markets, which, in turn, has increased the manufacturing capacity of IT, creating even lower costs for these resources. The ultimate result is that a huge third-world IT infrastructure build-out is occurring at an unprecedented pace. Third-world populations seem to have an endless need for IT. To reduce costs, developing countries have adopted policies, and in some cases multinational collaboration, for acquiring IT capacity. This quote from the Partnership for Higher Education in Africa shows a small part of the concerns for bringing "bandwidth" or network technologies to educational institutions in Africa: Acting together will improve the economies of scale to reduce the overall costs of bandwidth and improve the negotiating capacity of participating institutions…Increased enrollments, new ways of using ICT [information communication technology] for teaching, learning, research, and management will all require more bandwidth and cheaper ways to access it in the coming years (More Bandwidth). IT is thus shown to represent productivity through social/technical networking. IT innovation proliferates due to competitive economic pressures. The incorporation of IT into the workplace stratifies the working conditions and income differential of people in both
industrial and third world countries. Regardless of the negative social effects, countries appear to want to develop IT to make their economies globally competitive.
Chapter II: How We Overcame Our Fear of the Computer and Learned to Love the Network
The Brain Network, Banishing The Meatloaf Theory

The realm of thought has long been considered sacrosanct; the invasion of that realm by a machine is simultaneously frightening and fascinating. The scientific mind and the philosophical mind will, and should, be at odds in their views. Persons not awed by the power of science are as much a danger to society as those who try to stifle its progression. Since the early experiments with computer technology, the progression of the science has been very rapid, as has the practical implementation of the resulting technology. Recent research has shown the human brain to have a resemblance to network technology. If the brain can be shown to function like a network, then a technology that is itself a network would seem to function as an extension of the brain. This similarity may, in some way, be responsible for the widespread acceptance of network technology. Computer networks are both local and international. The most commonly used network is the Internet, which is comprised of millions of personal computers. "Local networks," limited in size, tend to be used in businesses and institutions. The Internet is a decentralized network, a worldwide connection of individual personal computers, a
complex and ever-changing entity. The Internet is as much an abstract idea, expressed in technical code and imaginative new user applications, as it is hardware, wires, and programming protocol. In this way, the Internet mirrors the construction and functionality of the brain. In his book The Blank Slate, Steven Pinker writes the following about neuroscience and the analysis of the structure of the human brain: "The mind is modular with many parts cooperating to generate a train of thought or an organized action" (Pinker 40). The human brain, he states, "is a network of…a hundred billion neurons linked by a hundred trillion connections" (Pinker 197). Pinker maintains that the brain, contrary to previous popular theory, is not a uniformly constructed body (like a meatloaf) (Pinker 40), but "a system of universal, generative computational modules that are programmed by genetic and evolutionary forces (Pinker 40)." These programmed modules communicate to form the basis of an infinite number of combinations of individual behaviors and characteristics. How does this concept relate to the Internet? Essentially, the brain and the Internet are both communication-centric. From the start, communication in the form of email has been the dominant use of the Internet (Leib). As Internet technologies and their social uses develop, the forms of communication have grown more sophisticated, as subsequent chapters will explore.
The Web – The Extension of The Brain

The World Wide Web (Web) is a sub-venue of the Internet created to make the Internet more user-friendly by combining graphics and sound in its interface. The Web was quickly adopted into popular culture, in part because it consisted of already familiar
components of technology. The Internet "evolved" in a manner similar to the human brain, by a process that both engineers and social theorists call "convergence." Convergence is a term used to express the blending of existing technologies in a new and heretofore unanticipated way to form new and often very different results. Some of the technologies that converged to form the computer are older and some are relatively new: the existing binary numbering system was used as a processing medium, cathode ray tubes from television were used for displays, and the keyboard configuration was borrowed from the typewriter. The Web, the next logical popular step, was a result of further convergences of existing technical and cultural ideas. The Web's content came from popular media, including but not limited to existing graphic arts, multimedia, radio, television, photography, and film (Remed Bolter 101). Its implementation came from existing computer technologies, telecommunications systems, and the new Internet technologies. Both the Internet and the Web incorporate digital binary code, written by programmers, that animates the hardware, a function similar to that of the genetic coding that forms the structure of the brain. The similarities between the inner network quality of the human brain and the Internet mean that humans are, subconsciously, intimately familiar with the processes we experience in the Internet and Web. What we are not familiar with is a box full of processors, wires, and codes that has the frightening potential to become just like us: thinking and speaking autonomously. Philosophically, the computer is a threat to our autonomy. As a physical object, its actions have the potential to surpass human capability and also to expose humans as common machines without existential substance. The fear of computer technology is a reflection of the fear of the modernist concept of
technological determinism, the idea that technological development will always trump human will. In a practical sense, we do not tend to think of students or workers as being terrified of their laptops. Such terror might be imagined, however, for the supercomputers of the U.S. National Security Agency and the sprawling panopticon they could impose on modern culture. The network, a familiar abstract concept, is somehow overlooked as a threat and has almost invisibly become an extension of the human psyche, enabling us to connect instantly to create a global human network. Through this network, humans are empowered by their connections. Therefore, the Internet has been embraced and seen as a friendly medium that we voluntarily, and at our own cost, bring into our homes.
Tell-Tale Statistics

The Web has achieved a pervasive social acceptance, as shown by its exponential growth. The following statistics track growth since 1997 and show percent of usage in the U.S. Nielsen//NetRatings, the global standard for Internet audience measurement and analysis, reports that nearly 75 percent of Americans, or 204.3 million, have access to the Internet from home (Nielsen/Netratings). That figure rises to 80.9% in the age group 35-54 (Nielsen/Netratings). Although this number shows that the Internet appears to be achieving significant popularity, the number itself is not an indicator of the kind of social effects created by the Web. The Web comes in a multitude of changing forms. Shifts in use patterns are rapid and tracked frequently by companies like Nielsen//NetRatings and Pew Internet. Current electronic data on the use of Websites shows that a significant amount of Web use is for obtaining
news and entertainment; other major uses include downloading of business and personal software and consumer purchasing (fig. 1). These figures are deceptive in that they do not show total hours of use, only the comparative use of the most-used Websites. The information is useful in that it is obtained electronically and is not anecdotal or subject to margins of error. It shows relative use and gives a reference point for assessing the accuracy of use patterns gleaned through interviews. Current statistics, published in April 2006 and based on user surveys, show that Web users appear to spend a great deal of time involved in communication. The most common use of the Web is email; 63% of Web users responding to a recent Harris poll said they checked email frequently (Taylor). Social networking is an additional and quickly expanding use of the Web that attracts 47 percent of total Web usage (Gonsalves). At present, there are 10 major social networking Websites; the largest two are myspace.com and blogger.com. Social networking sites are used by their members to create personal Web identities. Sites provide templates for creating a personal Web site that includes biographical information, lists of friends, photos, and lists of favorite books, films, and activities. Sites also facilitate communication, allowing viewers to see the subject's relationship "trees" and to link to or email directly anyone on the tree. These sites tend to be attractive to minors and young adults; they also strive to protect the minors who populate them and might otherwise lack special protection from adult predators. Determining just how Web use affects people has been a much more complex task, one that has proved difficult even for seasoned academic researchers. Statistics
presented on this subject have been obtained through user interviews that evaluate use habits and social reactions. Even data from these sources reveals discrepancies.
What Cost? Loving the Web

Studies concerning Web use raise two areas of concern: first, what activities are compromised in order to spend time on the Web, and second, what psychological effects does Web use have on the individual? Arriving at answers to the first question has proved fairly straightforward. Time spent online is likely to detract from three areas of a person's life: time spent with family members, time spent on discretionary activities such as hobbies, and time spent watching television. According to the Stanford Institute for the Quantitative Study of Society:

Time spent on the Internet is found to have a negative relationship with a number of daily activities, especially discretionary activities. Most notably, time spent on the Internet appears to come at the expense of time spent on social activities, hobbies, reading and TV viewing. Time spent on the Internet has a small, but less substantive, impact on time spent on work, childcare, housework and sleep (Nie 1).

Answering the second question is a far more complex and subjective task, one that took years of refinement. The first major landmark study, "The HomeNet Project," was done in 1995 and received a critical response, notably a critique by James Selvin in his book, The Internet Society. The project analyzed Internet use through a series of investigations to answer the question of how it affects socialization, yielding results that alleged use of the Internet was associated with "…declines in family communication…declines in local and distant social circle…increases in loneliness…increases in depression…and disengagement with real life" (Selvin 167). Even though researchers cautioned that
association does not imply causation, Selvin states that the study hinted at a "technological determinism" (Selvin 169) or, in more popular terms, a fear of the overdeterminate nature of media technology. These fears, as subsequent studies would show, were built into the study by the researchers' limited understanding of key variables in the new technology, including the diversity of how the Internet is used, the demographic variables among users, and, most importantly, variations in the effects of Internet use on the individual over time. Selvin felt the study's weakness lay in its definition of new technology, which implied that its use was monolithic and presumed an oversimplified binary relationship between people and the new technology: "Internet life" versus "real life." In fact, types of Web use vary considerably, as do the reactions to its use among individuals (Selvin 169). The study also reflected contemporary attitudes regarding the invasion and saturation of media such as television in the home. For this reason, The HomeNet Project tended to treat the Internet as simply another passive media experience like television or radio. Studies of Internet use have been cross-sectional, like the one above, as well as longitudinal, measuring effects on the same people repeatedly over a given period. Nine years later, a 2004 longitudinal study called "Internet Uses and Depressive Affect" distinguished both individuals and use characteristics, drawing these more use-specific results:

Dominant uses of the Internet—communicating with family and friends and searching for information—had no impact on depressive affect. Using the Internet to meet people was associated with increased depressive affect overall and especially among those with high initial social resources, but with reduced depressive affect among those with low initial social resources. Using the Internet for entertainment also was associated with reduced depressive affect. We suggest that individual differences in social
resources and choice of Internet uses may account for widely varying reports of Internet social effects (Bessière).

Another study reviewed the results of sixteen similar studies and drew the following conclusion:

… Longitudinal studies are far more credible. The ability to evaluate the same people over time mitigates several major threats to causal inference…When the survey examines the same people two or more [times] often, these participants bring the same demographic and other cross-sectional differences to both surveys, effectively controlling for their own cross-sectional variation (Shklovski).

Longitudinal studies, fewer in number but more stable in their results, show that an individual's Internet use predicts slight positive increases in social interaction and in friendships (Shklovski). The method of information gathering has evolved substantially since The HomeNet Project, as has the number of potential uses being invented for the Web. Future studies will need to focus on more specific uses and differentiate specialized users in order to draw accurate conclusions.
Tech Du Jour – New Social/Political Uses For The Web

Somewhere between radio, telephone, television, and "real life," the Web has invaded the cultural mainstream of everyday living. The future of this invasion may take any number of hypothetical forms. Three possible scenarios are: the pace of technological innovation will taper off and give culture time to adjust; the pace will remain the same and overwhelm societies with new choices; or the pace will increase and test the limits of
human adaptability. Regardless of the direction new technologies take, future theorists will need an integrated background to understand the political/economic/social/cultural impacts of technological advancement and the choices it poses to popular culture. Approximately 50% (Lenhart) of all teens and 26% (Horrigan) of all adults have created some form of media content on the Web: some form of altered, borrowed, or self-created art, music, or video. There are many possible reasons for the popularity of using media on the Web: social networking Websites have enabled the inexperienced user to quickly create a personal Web presence, and new methods of compressing digital music and video files facilitate uploading media to the Web. The use of media on social networking sites has commercial and social implications. Users integrate media on their sites to define their tastes, create an identity, and sell or share content through their Web presence (Boyd). Ultimately, social networking Web sites simplify and structure the creation of an online personal identity. Other popular and much less structured venues for publishing on the Web are blogs (Web logs), podcasts (iPod broadcasts), and wikis. These innovations are enabled by new software; they share common principles but tend to appeal to different users. A new technical lexicon has emerged to help participants understand the nuances of personal expression unique to these tools. Below is a partial list of some of these terms and their definitions.

“Bandwidth” - a multi-definition term that in Web language refers to the capacity of the Web network to transfer digital information.
“MP3” - an extremely compressed audio file, often posted on the Web.
“MPEG-1, 2, 3, 4” - compressed video files, often posted on the Web.
“Downloading” - transferring any file from a Website to one’s computer.
“Streaming Media” - viewing or listening to audio or video on the Web without downloading it.

Podcast
“Podcast” - a site where personal audio and video files are posted for viewing.
“Photofeed” - image podcasting.
“Soundseeing tour” - a podcast utilizing ambient noise and descriptions.
“Vodcasting” - video podcasting.
“VoiceCast” - podcast delivery through a telephone call.
“Content Delivery Network” - a common service for delivering podcasts.
“Phonecasting” - creating podcasts using a phone.
“Autocasting” - the automatic generation of podcasts from text-only sources.
“Podcatcher” - software for subscribing to, listening to, and watching podcasts.
“Mobilecast” - podcasting to mobile phones.
“MP3 blog” - podcasting a single song.
“Peercasting” - allows live streams to be redistributed by the viewers/listeners, greatly reducing bandwidth needs for the originating broadcaster.
“Podstreaming” - the process of converting streaming audio to a podcast.
“Podjacking” - using technical means to steal podcast content.

Blogs
“Blog” - short for Web log, a Web site where personal serialized entries are made.
“Blogosphere” - a collective term referring to all blogs.
“Blogcasting” - the blogging of a podcast.

Wiki
“Wiki” - an open-access or limited-access topical Website containing information that is editable not only by those who have created it, but by those who have access to it.
“Wikipedia” - a wiki encyclopedia where users may add to and correct its content.
“Wikiengine” - software used to create and maintain a wiki.
“Wikivandalism” - the deliberate act of adding inaccurate or inappropriate information to a wiki (Podcast).

If one could categorize the public’s use of the Web, it would fall under the concept called “new media.” The Web is rapidly replacing and/or augmenting its media predecessors (radio, telephone, and television) and, as has been shown, under some circumstances it is a way of facilitating and enhancing personal contact.
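The glossary's recurring emphasis on compression and bandwidth can be made concrete with a quick back-of-the-envelope calculation. The bitrates below are standard figures for CD audio and a common MP3 encoding, not numbers taken from this text, and the four-minute track is a hypothetical example.

```python
# Rough size comparison: uncompressed CD audio vs. a typical 128 kbps MP3.
# Bitrates are standard figures; the four-minute track is a made-up example.

CD_BITRATE = 44_100 * 16 * 2   # sample rate * bit depth * stereo = 1,411,200 bits/s
MP3_BITRATE = 128_000          # a common MP3 encoding rate, in bits/s
SECONDS = 4 * 60               # a four-minute song

cd_mb = CD_BITRATE * SECONDS / 8 / 1_000_000    # bits -> bytes -> megabytes
mp3_mb = MP3_BITRATE * SECONDS / 8 / 1_000_000

print(f"CD quality: {cd_mb:.1f} MB, MP3: {mp3_mb:.1f} MB, "
      f"ratio {CD_BITRATE / MP3_BITRATE:.0f}:1")
# prints: CD quality: 42.3 MB, MP3: 3.8 MB, ratio 11:1
```

The roughly eleven-to-one reduction is what made uploading and trading songs over the home connections of the period practical.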
Just as IT has created a multi-dimensional, economically necessitated network or system, the Web
has become the medium through which popular networks are forming. In this way, the Web transcends the concept of media as it has been known and has become a medium that combines, or converges, all past media into a "new media." As with the convergence that produced the computer, the convergence that produced the Web has created a medium whose value is far greater than the sum of its parts. Through this new media, the discourse of the future will have a greater opportunity to be mass-disseminated in a multitude of user-selectable directions. Multi-directional discourse will be more open to public commentary and peer review. Future citizens, as potential media producers, will need a broader cultural perspective to remain accountable for their expression, to maintain credibility, and to retain an audience for their ideas in a highly accessible medium. The stakes of understanding these issues will no doubt be higher as the power of the medium is made more accessible.
Part II - The Web Invasion and Controversy—an Instance of How The Mass Media Failed to Realize the Promise of the New Technology
Chapter III: The Intellectual Property Controversy

Previous chapters have shown the origin and significance of digital network technology, how that technology has grown, and how it has become a popular medium for both individual expression and the consumption of modern culture. Much of the personal expression on the Web is accomplished with the help of two kinds of technological tools. The first is the various forms of visual, sound, or word processing software, such as Photoshop, Pro Tools, and Microsoft Word. This software enables the easy creation as well as the appropriation of existing works. The second is the inexpensive and accessible publication of electronic content that has been made possible through the development and proliferation of sophisticated network technology. This electronic publication has made the appropriation of existing content highly visible and potentially profitable. These new technological tools are also considered "new media." Unlike traditional broadcast media, new media does not imply only the passive consumption of homogeneous cultural content, content created by a few elite individuals and promoted exclusively by big media corporations. New media offers an opportunity for anyone to be a media producer; consequently, this medium produces an exponentially greater variety of content and distributes that content to smaller, more specialized audiences. New media developments of the past few years have challenged existing market
and ideological concepts of earlier twentieth-century technologies. Established media corporations have felt the brunt of these challenges, which have led to serious conflict over intellectual property (IP) rights. These conflicts have historical roots in the monopolistic practices of the big media companies. The appropriation of content from these companies has been enabled by innovation in network technology. This appropriation has not always been ethical; however, by any theory of economics, its existence is predictable. That is, big media corporations have artificially inflated prices, creating a gray or black market that evolves as a correction mechanism. The popularity of the new digital means of production could grow to unprecedented levels, especially as younger generations mature using the creative language of new media. In the years to come, Web-based alternatives to traditional mass media will no doubt redefine how media is produced, distributed, and consumed, as well as its role in society. One of the main conflicts generated by the development of new media is the issue of IP rights, a problem that is seen as a confrontation between the IP haves and the IP have-nots. The irony in this confrontation is that the technical means of production originally created by and for industry has subsequently been adapted for and marketed to the public. It is by distributing and profiting from the marketing of its very means of production that industry is now being challenged. It is a classic example of Marx's prophetic statement that capitalism creates the means of its own destruction (Marx).
Appropriation and The Creative Process

"Bad artists copy. Great artists steal," runs a quote attributed to Pablo Picasso (QuoteDB). The quote is an exaggeration, but it aptly illustrates one of the fundamental principles of creativity. Appropriation is one of the building blocks of creativity; creative content is built on the artistic discoveries of the past, and this is how creativity has evolved over the centuries. A classic example is illustrated by the early animations of Walt Disney. In order to create his first sound-synced cartoon, Steamboat Willie (1928), Disney borrowed ideas from Buster Keaton's film Steamboat Bill, Jr. (1928), and sound-sync technology and ideas from the film The Jazz Singer (1927). Prior to that, the song Steamboat Bill (1910) provided creative material for the Buster Keaton film (Lessig 22, 23). This derivative process of creativity would be impossible today without securing expensive rights from copyright holders. These rights are often held by large media corporations which, like Disney, were built on derivative creativity. Ironically, the Disney Corporation is one of the more aggressive defenders of its IP rights. The stakes are higher today than in, for instance, the early days of Disney. The value of copyright has increased dramatically with the advent of lucrative volume distribution through mass media and has been jealously guarded by global media corporations. Large political campaign donations have ensured the passage through the U.S. Congress of extensions to copyright protection, as well as the profits of the holders of those copyrights. The amount of material falling into the public domain is decreasing; consequently, new creative efforts do not have access to the culture of the past with which to build the efforts of the future. Legions of corporate legal departments act with bureaucratic efficiency to bring catastrophic lawsuits
to unsuspecting individual "violators." Legislative actions have shown support for virtually perpetual extension of copyright protection. Technology, which made copyright such a valuable commodity, has also challenged the expansion of the industrial monopolization of creativity.
Napster-Style File Sharing and The Beginnings of The IP Music Wars

One of the more publicized challenges to powerful established industries using digital technologies was brought about by the company Napster. Napster capitalized on a music-sharing idea at a large enough scale to pose a perceived serious threat to the five global recording companies represented by the Recording Industry Association of America (RIAA). The industry had been protected for years by the practice of quasi-monopolistic marketing. Artificially high consumer prices had been brought about by mergers, monopolistic practices, and price-fixing strategies. This condition created an environment ripe for corrective innovation; the market correction came in the form of Napster. Napster's idea was simple: to use existing MP3 music compression technology and the Web to enable the public to share music, and for Napster to profit from this sharing. Napster's plan was implemented by creating a sort of "switchboard," a server to connect people who wanted to share music. In this case the music wouldn't be listened to, as it would be on a radio, but would instead be copied and transferred as MP3 files. What made this idea radical was that there was no central "library" that stored all the music files; the server simply connected a person who wanted an MP3 to another person who had that MP3. This process had been used before in other
technologies and is known as a "peer-to-peer" (P2P) network. Napster kept no MP3 inventory, so the feeling was that the company would be immune from copyright infringement since it wasn't actually copying or distributing MP3s. The only central mechanism the company maintained was the server that connected the users. This form of file sharing, however, is vulnerable to legal action because it requires the maintenance of a centralized server. In 1999, the RIAA challenged Napster in court in a highly publicized trial and was successful in forcing Napster to change its business model to a traditional inventory/sales mode, pay fines, and subsequently pay royalties to the recording industry.
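The centralized "switchboard" design described above can be sketched in a few lines of code. This is a hypothetical illustration of the general idea, not Napster's actual protocol; all names and addresses are invented.

```python
# A minimal sketch of Napster-style centralized indexing. The server stores
# no music files at all, only an index mapping song titles to the peers
# that hold copies; the file transfer itself happens peer to peer.

class IndexServer:
    """The central 'switchboard': it never stores MP3s, only who has what."""
    def __init__(self):
        self.index = {}  # song title -> set of peer addresses

    def register(self, peer, songs):
        # A peer announces which files it is willing to share.
        for song in songs:
            self.index.setdefault(song, set()).add(peer)

    def lookup(self, song):
        # Return the peers holding the song; the caller then connects
        # to one of them directly to copy the file.
        return sorted(self.index.get(song, set()))

server = IndexServer()
server.register("alice:6699", ["steamboat_bill.mp3"])
server.register("bob:6699", ["steamboat_bill.mp3", "jazz_singer.mp3"])

print(server.lookup("steamboat_bill.mp3"))  # ['alice:6699', 'bob:6699']
print(server.lookup("jazz_singer.mp3"))     # ['bob:6699']
```

The sketch makes the legal point visible: the server holds only an index, never the music, yet every search still passes through it, which is what made Napster an identifiable target for litigation.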
The Darknet: The IP Underground

The next level to which the RIAA-new media conflict has escalated is the further decentralization of network technology. The central server has been replaced by a loosely organized network of computers that is referred to by various names (peer-to-peer, P2P, and the Darknet). The current incarnation of P2P networks is held together by freely distributed software that makes every computer connected to the Internet a potential server. Companies that make such software, such as Grokster, realize their profits through advertising. In this form of P2P network, people who want downloads are matched with people who can provide them; individuals participate without cost, and there is no Napster-style server. This P2P network is used for the distribution of many different types of content besides music. Its structure makes it more economical to transfer files and avoids the bottleneck of a central server. The percentage of individuals who violate copyright laws in decentralized P2P networks is relatively small. Big media
companies have sought damages from Grokster in an effort to shut down or limit its growth. The implication this technology has for the preservation of economical free speech was clearly argued by the attorneys for the Electronic Frontier Foundation (EFF), a nonprofit legal advocacy organization:

By dividing the burden of content distribution among those who download, or consume that content, p2p technologies make it feasible for large files to be distributed extremely cheaply. Because the number of consumers is many, and the bandwidth cost of their consumption is shared, content that would be economically impossible to distribute through client-server distribution can be distributed using p2p practically freely. The architecture of p2p thus enables a kind of speech that would otherwise be economically infeasible (Metro).

Precedents for this case can be found in the 1984 court ruling in the "Betamax case," when the new technology of the time was the VCR. This case was brought before the U.S. Supreme Court by Disney, seeking damages from the Sony Corporation (an RIAA member). Disney claimed that VCR technology could be used to violate copyright law, as the following brief states. The court ultimately ruled in favor of Sony.

Petitioner Sony defended a technology that the Respondents claimed was used primarily to infringe copyrights. Brief of Respondents, Universal City Studios, Inc., and Walt Disney Productions, v. Sony Corp. of America Universal City Studios, Inc., 464 U.S. 417, n.114 (1984) (No. 811687) (claiming that "over 80% of all Betamax recordings consists of protected entertainment programs" and that less than 9% of the programs recorded were legitimately recorded with the permission of the copyright holder). Indeed, the proportion of infringement alleged in that case was greater than the infringement at issue in this (Grokster) case (Metro).
In the Grokster case, the Supreme Court, in what was seen as a partial victory for big media, sent the case back to the lower courts, ruling that P2P companies such as Grokster could be held liable for the copyright violations that occur on their networks. The
ruling was unanimous because the court found, unlike in the Sony case, that Grokster had induced copyright violation, as the following court brief shows:

Evidence of the distributor's words and deeds … shows a purpose to cause and profit from third-party acts of copyright infringement. If liability for inducing infringement is ultimately found, it will not be on the basis of presuming or imputing fault [unacceptable under Sony - RK] but from inferring a patently illegal objective from statements and actions showing what that objective was (Metro).

The court essentially created an inducement theory, saying that Grokster's advertising encouraged illegal downloading and that the company profited from it through advertising revenue (Koman).
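The contrast with Napster's design can be illustrated with a toy sketch of decentralized lookup, in which each peer knows only its neighbors and queries propagate through the network. This is a simplified, hypothetical model, not the actual protocol of Grokster or any real network.

```python
# A toy sketch of decentralized (serverless) P2P lookup: no central index
# exists, so a query is flooded from neighbor to neighbor until it expires.

class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []  # other Peer objects this peer knows about

    def query(self, filename, ttl=3, seen=None):
        """Flood the query outward; collect the names of peers holding the file."""
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:   # avoid loops and runaway floods
            return set()
        seen.add(self.name)
        hits = {self.name} if filename in self.files else set()
        for n in self.neighbors:
            hits |= n.query(filename, ttl - 1, seen)
        return hits

a, b, c = Peer("a", []), Peer("b", ["song.mp3"]), Peer("c", ["song.mp3"])
a.neighbors, b.neighbors = [b], [c]   # a knows only b; b knows only c

print(sorted(a.query("song.mp3")))  # ['b', 'c'] -- found with no central server
```

With no central index to subpoena or shut down, legal pressure shifted to the conduct of the software's distributor rather than to any server, which is precisely where the Grokster ruling placed liability.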
Litigation and the RIAA

The RIAA's aggressive litigation strategy was preceded by a sharp decline in the growth of sales for the recording industry. The RIAA attributed this decline to file sharing on the Web. There is little evidence to support file sharing as the cause of the decline in growth; rather, it has been shown to have been caused in part by increased prices. The increases were brought about by the RIAA's minimum advertised pricing (MAP) policy, which required emerging big-box discounters to raise their prices to match those of smaller retailers. In the years prior to P2P sharing, 1992-1996, competition instigated by large retailers was responsible for reducing the price of a CD, and during this period the volume of sales increased by 371 million units. After the MAP policy was instituted, from 1996-2001, unit sales increased by only 100 million units (Price). These statistics show a basic fact of economics: prices and sales volume have an inversely proportional relationship. MAP policies drove up retail prices close to
the time that P2P file sharing of MP3s became popular. On May 22, 2003, a class-action antitrust case was brought by the attorneys general of 43 states against the major recording companies to stop the practice of MAP. The case was decided in 2003 in favor of the states, and $12.00 refund checks were sent to individuals in the settlement class (Litigation). Lawsuits aimed at individual file sharers have since replaced the RIAA strategy of suing corporations. These suits are based on the questionable technical extraction of the names of individual users through network technology. The RIAA is dedicated to maintaining the industry's advantage, and to further its policy it has launched a "reign of terror," according to attorney Ray Beckerman of the EFF (Byfield). The RIAA has to date sued 19,000 individuals, including children and grandmothers (Byfield), for alleged copyright violations or infringement. The net effect is that downloads of music on P2P networks are increasing by at least a factor of two each year (P2P Download). The irony of the RIAA suits is that "illegal downloads" of MP3s have been shown to have no effect on the sales of music through "legitimate" channels. A comprehensive study conducted by Felix Oberholzer-Gee, an associate professor at the Harvard Business School, has shown that:

…most downloading is done over peer-to-peer networks by teens and college kids, groups that are "money-poor but time-rich," meaning they wouldn't have bought the songs they downloaded…In fact, illegal downloading may help the industry slightly with another major segment, which Oberholzer and Strumpf call "samplers"—an older crowd who downloads a song or two and then, if they like what they hear, go out and buy the music (Oberholzer-Gee).

Unlike previous studies in this area that were based on interview data, the results of this study were obtained from electronic monitoring of the relationship between sales and
downloads for 680 sample albums; the researchers found that the effect of downloading on sales is "statistically indistinguishable from zero" (Potier).
DRM – Digital Rights Management Software

Inevitably, the recording industry would look for technical solutions to enforce its right to profit exclusively from content sales. DRM is embedded code attached to digital content to prevent copying or to render the content unusable under any conditions the seller desires. The legality of this form of private regulation is questionable in two areas of copyright law: 1) there is no provision for the expiration of the DRM when the copyright expires, and 2) DRM restricts the right of a legitimate owner to resell a copyrighted work (without retaining a copy) under the "first-sale doctrine," which is part of copyright law (Digital). Overall, the RIAA strategy has been of little value to the media corporations and has created a war against the consumer. The model for doing business on the Web seems to be beyond the comprehension of the big media corporations. Changes in the way artists create and promote content will likely take their place in the market alongside the existing market structure, just as radio and television have coexisted with the technologies that preceded them. The techniques needed to function in the new media market are much different from those of the existing retail trade; some of the ideas that have shaped the new media market will be examined in the next chapter. These ideas are
an attempt both to improve on existing technologies and to mediate the conflicts that have been exacerbated by the big media corporations.
The New Media Case Studies

This chapter will explore the development of new, more egalitarian uses of the Web and show how they have attempted to solve standing legal problems resulting from Web technology. The two organizations that will be examined are the nonprofit Creative Commons and the privately owned but egalitarian Wikipedia. Creative Commons has attempted to solve the problem of distributing content on the Web and of conflicts with the media establishment. Wikipedia has attempted to create an open forum for the expression of content, one that is not vulnerable to the anonymous vandalism that is often a problem for this form of expression. Both organizations are ideologically and politically neutral; they also, in the Internet tradition, foster an open-access but mediated form of expression. Because their founding concepts are predicated on the resolution of existing problems, they have avoided the levels of controversy seen with Napster and Grokster, the former having been founded on more of an opportunist ethic designed to take advantage of IT-related copyright vulnerabilities.
Since the invention of copyright in 16th-century England, the law has provoked controversy between publishers who own content rights and those who do not (Lessig 88). Artists and writers rarely participated in this conflict, since it was the publishers who had the largest financial stake. With the advent of Web technology, artists have had the opportunity to publish and electronically distribute their content without prohibitive expense. As most artists know, the free or low-cost distribution of content has been shown to be an excellent way to introduce one's work to the public and the marketplace. This fact has somehow escaped big media, even with the millions of dollars they have available for market research. Media corporations are doing their best to prevent the free "publicizing" of content on P2P networks and, through litigation, are attempting to alienate their own market. The victimizing of individuals by big media is, on a smaller scale, a classic example of misdirected bureaucratic efficiency working against the best interests of everyone. Such bureaucracy is reminiscent of the notorious Nazi Reichssicherheitshauptamt, which efficiently exterminated hundreds of thousands of innocent civilians for years and on a daily basis. Of course the Holocaust was horrific on a clearly incomparable scale, but the same principles are at work. The legal bureaucracy of the big media corporations, like the Reichssicherheitshauptamt, has become self-supporting financially and closed ideologically; it is functioning in direct opposition to the long-term health of its own industry and has become a threat to tens of thousands of citizens.
The Creative Commons – A Small Step Towards Restoration of The Public Domain and The Resolution of Conflict Between Big Media and The Public
Stanford law professor and author Lawrence Lessig has created a nonprofit, Web-based organization called Creative Commons (CC) (Conhaim). Lessig wrote Free Culture (2003), an extensive and defining work describing the nature of the copyright problem. He was also the counsel for a failed Supreme Court challenge to the "Sonny Bono Copyright Term Extension Act," a controversial law Congress enacted to extend copyright protection to 95 years for corporate copyrights. Opinions regarding the validity of this law fall along lines largely determined by the financial benefits the law protects. An excerpt from an article that appeared in the Washington Post sums up the politics behind its passage through both houses of Congress:

This degree of protection under which works from 1923 are still owned privately does little to promote science or art, but it does protect copyright holders who make big campaign contributions. Unfortunately, it also serves to keep material out of the public domain…(Love).

Creative Commons agreements are intended to provide alternatives to the standard government copyright. These agreements are hybrid contracts that contain a variety of "fair use" possibilities. Fair use is a legal term that has, in the past, allowed limited use or reproduction of copyrighted material. Examples of fair use would be quoting or reproducing content for the purpose of criticism or commentary. Typical CC agreements allow the reuse of content for personal creativity, even reuse that might be published. Reuse for profit is typically not allowed in CC agreements. Critics of the CC movement feel that these agreements are redundant and unnecessary, possibly even a threat to the existing fair-use tradition. There is a fear that Creative Commons may become a necessary public standard that will cost artists money to subscribe to and, in effect, cause unnecessary exclusion. The most vocal critics of CC have been from traditional print media.
Andrew Orlowski, a columnist, criticized CC licensing as
redundant to existing law and of little use to truly creative people except remix musicians (those who mix existing works to create new ones). Orlowski's argument ignores the derivative process behind the highly acclaimed works discussed earlier. Fair use has traditionally been determined by court precedent, or common law. Creative Commons attempts to give fair use a more precise definition; this would benefit artists whose works may be derivative by more effectively insulating them from expensive lawsuits. Creative Commons agreements are posted on the Creative Commons Web site, http://creativecommons.org/, where links to artists' works can be used to download, view, or post new work. The types of agreements are posted along with their content.
Wikipedia – One Structure for Democratic Media

Wikipedia is one of the more revolutionary forums to be introduced to the Web. Unlike blogs and podcasts, which mimic mass media, the Wikipedia has, by necessity, the most sophisticated, layered administrative structure of any new media forum. Through that structure, public exchange is flourishing on an unprecedented worldwide basis. The Wikipedia serves a variety of functions, but what sets it apart from more traditional media, both print and electronic, is its open forum content. Its openly editable structure makes it both an object of criticism for its inaccuracies and an active, changing discussion forum. Its name implies a function similar to an encyclopedia; in practice, however, its function is far broader, as it draws participation from tens of thousands of individuals at both creative and administrative levels. Considering what is being attempted by this forum, it is little wonder that it has had such a flawed beginning. That its execution is at all viable is short
of miraculous. It is the Wikipedia's trial-and-error development that will be the crucial determining factor in its success, death, or replacement by a more effective mediated forum. The Seigenthaler controversy was one of the defining events that not only shaped public opinion regarding the Wikipedia but was also responsible for substantive changes in its procedural development. John Seigenthaler Sr., a journalist and member of the Kennedy administration, had a biography on the Wikipedia. Shortly after it was posted, an anonymous hoaxer entered false information claiming that Seigenthaler was involved in the assassinations of John and Robert Kennedy. The false entries in the biography remained undiscovered for several months. When he discovered the hoax, Seigenthaler repeatedly corrected the entry, but the anonymous hoaxer restored the false information. Seigenthaler's complaints to the Wikipedia staff were met with a lack of action. Such incidents were not unusual; Seigenthaler, however, was in a position to publicize his discontent by writing about the problem in USA TODAY. The publicity generated by this and other unfavorable reports in the media pressured the Wikipedia staff to make substantive changes in the way the Wikipedia is managed. Unlike other new media controversies, this one did not end up in the courts: the pressure brought by the negative publicity in the mass media was enough to bring action to solve the problem. One cause of the lack of administrative control was that the Wikipedia was expanding at a rate that was unprecedented, unmanageable, and unexpected by its founder, Jimmy Wales. As a result of the Seigenthaler case, administrative changes were implemented by Wales, as shown in an excerpt from the Wikipedia article about the controversy:

A new guideline, Wikipedia: Biographies of living persons, was created on December 17, 2005, editorial restrictions were introduced on the
creation of new Wikipedia articles, and new tracking categories for the biographies of living people were implemented. The Foundation added a new level of "oversight" features to the MediaWiki software, accessible as of 2006 to around 20 experienced editors nominated by Wales (Seigenthaler).

The Wikipedia's broad base of information is both the source of its strength and its weakness as a medium. John Seigenthaler Sr., who was personally maligned by wiki vandalism, has become a vocal critic; he maintains that the medium is hopelessly flawed. However, the error rate of the Wikipedia is alleged, on average, to be only one percent higher than that of print encyclopedias (Giles). Obviously, either of these media should be used with caution and not relied on as a sole source of information. The Wikipedia is estimated to have "3.7 million articles in 200 languages…and added 1,500 new articles every day of October 2005" (Giles). That is a far greater number of articles than any existing encyclopedia contains. With that number of entries, accuracy might pose a larger problem, but accuracy is maintained by an incredibly diverse network of active contributors who shape the Wikipedia's content. Unlike traditional forms of print media, which are mediated by anonymous, sometimes less accountable individuals and influenced indirectly by hidden editorial or corporate interests, the Wikipedia's information and processes are more transparent. Content is not determined by the whims of sensationalist journalism. Facts that are distorted or misrepresented can be subject to immediate correction and public debate. The history of any controversy surrounding the correction of false information, unless it is libelous, is also visibly preserved in the structure of the Wikipedia. With 1.27 million articles generated in the five years of the Wikipedia's existence, 20 experienced editors could not, in any practical sense, monitor the content for accuracy. The monitoring also comes
from a multi-level volunteer staff. At the bottom, of course, are the millions of anonymous and registered users; next are 400 "administrators," then the "bureaucrats," then 57 "developers," then an arbitration committee; and finally there is Wales, who retains absolute power over the organization (Pink 129). A democratic ethic seems to be the organization's modus operandi, and most of the error correction is done by the registered users, a small percentage of whom become obsessed with making corrections. Volunteers and staff members monitor various levels of mischief; light misinformation is placed by so-called "trolls" or "pixelantes" (Xizer), depending on the perspective of the contributor. More serious problems are caused by "vandals" (Pink 128). Because of its egalitarian structure and its lack of serious peer review, it may take years for the Wikipedia to achieve credibility, and it faces an uphill battle before it is regarded as a standard reference. The idea will no doubt undergo additional development and may spawn new forms of itself, forms that could offer specialized, more authoritative information. For now the populist nature of this publication provides an appropriate medium in which to refine the concept.
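The mechanism by which the Wikipedia visibly preserves its history of corrections can be illustrated with a short sketch. The Python model below is a hypothetical simplification, not MediaWiki's actual implementation; it shows why vandalism and its correction both remain on the record: every edit, including a revert, is stored as a new revision rather than an overwrite.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Revision:
    author: str          # registered user name or anonymous identifier
    text: str            # full text of the article at this revision
    timestamp: datetime

@dataclass
class Page:
    title: str
    revisions: List[Revision] = field(default_factory=list)

    def edit(self, author: str, text: str) -> None:
        # Every edit appends a new revision; nothing is overwritten,
        # so the full history of corrections and vandalism stays visible.
        self.revisions.append(
            Revision(author, text, datetime.now(timezone.utc)))

    def current(self) -> str:
        return self.revisions[-1].text if self.revisions else ""

    def revert_to(self, index: int, author: str) -> None:
        # A revert is itself recorded as a new revision, preserving
        # the record of the dispute rather than erasing it.
        self.edit(author, self.revisions[index].text)

# A hoax edit and its correction both remain in the page history:
page = Page("John Seigenthaler Sr.")
page.edit("editor_a", "Seigenthaler was a journalist and Kennedy aide.")
page.edit("anonymous", "False claim inserted by a hoaxer.")
page.revert_to(0, "editor_a")
assert page.current() == "Seigenthaler was a journalist and Kennedy aide."
assert len(page.revisions) == 3  # the vandalism stays in the history
```

This append-only design is what makes the medium's disputes transparent, and it is also what lets the volunteer hierarchy described above audit one another's work.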
Conclusions

Corporate powers will no doubt seek to force an industry-weighted digital economy through a policy of excessive copyright legislation, DRM technology, and lawsuits that support artificially high content and use costs to the public. These forced solutions, like the unmitigated free distribution of content enabled by Napster and Grokster, are problem-escalating initiatives that will guarantee both court-dictated regulation and the use of the Web as an underground grey-market tool. The content/copyright polemic is not just about the power struggle between industry insiders and content creators; there are implications for a vast market of culture consumers. Statistics on the use of the Web in the U.S. show that media play an important part in the lives of consumers (Media), for example, parents who are balancing the demands of work and family responsibilities. Incredibly, a 2006 study by the Henry J. Kaiser Family Foundation found that in a typical day in the U.S., about 27 percent of children aged four to six use a computer (Media). That figure grows to 73 percent in children aged 12 to 17 (Burns). A Harris poll shows that 63 percent of adults in the U.S. use a computer (Taylor). Further, the percentage of total consumer spending that occurs online is steadily increasing, as the figures for the last four years show: 16 percent in 2002, 20 percent in 2003, 22 percent in 2004, and 27 percent in 2005. A total of $30.1 billion was spent online in the U.S. during the 2005 holiday season alone (Online). These statistics show an enormous potential market, and they show that the Web is becoming an indispensable necessity for the population as a whole. Internet purists who were involved in the Web from its start lament that corporate interests are pushing the Web to become just another "money machine" or "TV on steroids"
(Schuler). The economic use of the Web and its survival as a populist medium are not mutually exclusive. The needs of society at large drive the economic development of the Web as much as industry initiative does. Today the socially relevant question is not whether the Web should be an economic resource, but how the benefits of Web connectivity can be distributed equitably throughout world culture. The major barrier determining peoples' use of the Web is cost, but cultural barriers also limit its use. Studies consistently show that low-income people have less understanding of how to benefit from use of the Web. The ubiquitous economic development that industrialized society can afford to lament is the hope of the economically disenfranchised world that sits on the outside of the "digital divide." The digital divide, as this barrier to IT has been called, poses a serious economic limitation. Being part of the digital-Web economic order is not just a matter of possessing a blog or podcast, or being able to download a favorite MP3; it is a matter of economic inclusion or exclusion. The stratification of the workplace, as discussed earlier, is a serious international economic problem; individuals and entire countries are being left out of the emerging network of digital economics. Low-wage jobs pay hardly enough to survive the escalating costs of living that are endemic to the ever more sophisticated urban economies within the IT sphere of development. Low pay, coupled with the irregular employment brought about by the IT economy, results in an emerging international, debt-ridden class of chronically underemployed industrial poor. The industrial poor are only a part of the digital divide phenomenon; the fundamental technology needed to create an economic network infrastructure starts with telecommunication lines. Substantial numbers of rural populations live outside the IT
infrastructure and have never had regular access to as much as a telephone. There is an increasing alienation of rural low- and no-income people and communities from the rapidly changing, digitally dependent economy. On average, one half of the world's population lives below the internationally defined poverty line, with an income of two U.S. dollars a day or less (Persisting). That amounts to roughly three billion people. These income levels often stem from jobs that have been exported from industrialized nations with the help of IT. The commercialization of the Web may, in fact, hold the only hope for traversing the digital divide. As was seen with earlier IT development, commercialization was what expanded the technology beyond business to popular use. If profit can be realized from expanding infrastructure and training users, then the investment will be made. One vision that was popular in the early days of the Internet was the promise of utopian social and economic parity, a discourse reminiscent of the utopian promises of the Soviet avant-garde in the 1920s and the Italian Futurists of the early 20th century. Those utopian movements were crushed by Stalin and Mussolini, respectively. The techno-utopian consumerist ideology of the 1950s has likewise proven to be empty commercial hyperbole. If the Web is to become a benefit to society, it will be because of its unique ability to become a low-cost mode for the distribution of ideas. Historically, the economical dissemination of ideas on a worldwide scale has never been possible. It will be years before the implications of this drastic change in modes of communication can be properly assessed. Now, for the first time since the development of mass media, there is a prolific and relatively free world forum for a diversity of ideas. These ideas compete with the crafted propaganda and slanted public relations of the commercial media, both on and
off the Web. Public ideas concerning issues such as homelessness, underemployment, inadequate healthcare, international conflict, and environmental degradation are orphan issues that the traditional mass media industry has no serious vested interest in addressing. Web communication has no such indifference, and it has shown itself able to bring all forms of public debate, news, and discourse to a sophisticated and international level. New and unique forums for communication and mediated discourse have just begun to surface on the Web. These conceptually mediated forums are a convergence of software, written rules, and communication hardware; examples are the wikis, blogs, and podcasts discussed earlier. Through the structure of these forums, the Web has demonstrated that significant numbers of people can be engaged in an ongoing development of discourse in any area of interest. These forums are in their infancy and will be undergoing trial-and-error development for years to come.
Internet Usage Figures

Fig. 1. Nielsen//NetRatings tracked the brands with the highest number of unique visits and time spent on their sites.
Top 25 Parent Companies, April 2006

Parent                  Unique Audience (000)   Time Per Person (hr:min:sec)
Microsoft               112,596                 2:00:15
Yahoo                   105,907                 3:09:58
Time Warner              99,204                 4:51:35
Google                   93,715                 0:54:21
eBay                     60,050                 1:50:37
News Corp.               55,736                 1:38:43
InterActiveCorp          54,249                 0:26:10
Amazon.com               44,765                 0:21:37
Walt Disney Internet     39,554                 0:33:20
New York Times Co        38,447                 0:14:31
RealNetworks             38,246                 0:47:44
Landmark Com             34,797                 0:40:47
Apple Computer           33,548                 1:02:52
Verizon Com              29,637                 0:23:12
United Online            29,628                 0:54:58
E.W. Scripps Co          28,847                 0:09:09
AT&T Inc.                27,242                 0:27:40
CNET Networks            26,487                 0:11:09
Wikipedia                26,158                 0:13:32
Adobe                    25,297                 0:03:31
Gannett                  23,177                 0:16:19
Viacom                   21,767                 0:46:26
Expedia                  21,224                 0:17:21
Daum Com                 21,154                 0:05:25
CBS Corp.                20,833                 0:22:45

Source: Nielsen//NetRatings, 8
Home Web Use by Country, February 2005

Country        Jan. 2005      Feb. 2005      Change        % Growth
Australia        8,918,141      9,194,242       276,101       3.10
Brazil          11,444,102     11,032,316      -411,786      -3.60
France          15,582,952     15,837,988       255,036       1.64
Germany         29,500,717     29,864,949       364,232       1.23
Hong Kong        2,539,657      2,602,738        63,081       2.48
Spain            8,353,284      8,824,941       471,657       5.65
Sweden           4,627,124      4,492,010      -135,114      -2.92
Switzerland      3,339,753      3,349,881        10,128       0.30
U.K.            22,281,069     24,800,954     2,519,885      11.31
U.S.           137,996,052    135,827,206    -2,168,846      -1.57
Fig. 1 source: Burns, Enid, Top U.S. Parent Companies and Stickiest Brands on the Web, May 2006, ClickZ.com. 18 Aug 2006. http://www.clickz.com/stats/sectors/traffic_patterns/article.php/3617306
Works Cited

Castells, Manuel, The Rise of the Network Society, Vol. 1. Oxford: Blackwell Publishers, 1996.

Bauman, Zygmunt, Modernity and the Holocaust, New York: Cornell University Press, 1989.

Bessière, K., Kiesler, S., Kraut, R., and Boneva, B., Longitudinal Effects of Internet Uses on Depressive Affect: A Social Resources Approach, Dec. 2004. Unpublished manuscript, Carnegie Mellon University, Pittsburgh, PA. 18 Aug 2006. http://homenet.hcii.cs.cmu.edu/progress/research.html

Binary Digits, 2 Nov. 2005, BBC Co. UK. 18 Aug 2006. http://www.bbc.co.uk/dna/h2g2/A5771973

Bolter, Jay David, Gromala, Diane, Windows and Mirrors: Interactive Design, Digital Art, and the Myth of Transparency, Cambridge: The MIT Press, 2003.

---, Grusin, Richard, Remediation: Understanding New Media, Cambridge: The MIT Press, 1999.

Boyd, Danah, Identity Production in a Networked Culture: Why Youth Heart MySpace, 19 Feb. 2006, American Association for the Advancement of Science. 18 Aug 2006. http://www.pewinternet.org/PPF/r/184/report_display.asp

Burns, Enid, U.S. Internet Adoption to Slow, 24 Feb. 2006, ClickZ Network. 18 Aug 2006. http://www.clickz.com/stats/sectors/demographics/article.php/3587496

---, Top U.S. Parent Companies and Stickiest Brands on the Web, May 2006, ClickZ.com. 18 Aug 2006. http://www.clickz.com/stats/sectors/traffic_patterns/article.php/3617306

Byfield, Bruce, RIAA Conducting a Reign of Terror, 20 July 2006, NewsForge. 18 Aug 2006. http://trends.newsforge.com/trends/06/07/20/1651223.shtml?tid=147

Conhaim, Wallys W., Creative Commons Nurtures the Public Domain, 3 June 2002, NewsBreaks. 18 Aug 2006. http://www.infotoday.com/newsbreaks/nb020603-2.htm
Digital Rights Management, 13 March 2004, Wikipedia, see "History" for contributors. 18 Aug 2006. http://en.wikipedia.org/wiki/Digital_Rights_Management

Foucault, Michel, The Foucault Reader, Rabinow, Paul, ed., New York: Pantheon Books, 1994.

Giles, Jim, Internet Encyclopaedias Go Head to Head, 15 Dec. 2005, Nature. 18 Aug 2006. http://www.nature.com/nature/journal/v438/n7070/full/438900a.html

Gonsalves, Antone, Social Networks Attract Nearly Half of All Web Users, 12 May 2006, TechWeb. 18 Aug 2006. http://www.techweb.com/wire/ebiz/187202833

Horrigan, John B., Home Broadband Adoption, 28 May 2006, Pew Internet and American Life Project. 18 Aug 2006. http://www.danah.org/papers/AAAS2006.html

Inside Outsourcing, Tuck School of Business at Dartmouth, anon. 18 Aug 2006. http://www.tuck.dartmouth.edu/news/features/outsourcing.html

Internet Systems Consortium. 18 Aug 2006. http://www.isc.org/index.pl

Lessig, Lawrence, Free Culture, New York: Penguin Press, 2004.

Litigation, Information Site for the Compact Disc Minimum Advertised Price Antitrust Settlement, 19 Feb. 2004. 18 Aug 2006. http://www.musiccdsettlement.com/english/default.htm

Marx, Karl, and Engels, Friedrich, The Marx-Engels Reader, Tucker, Robert C., ed., New York: W.W. Norton and Company, 1972.

Nielsen//NetRatings, Three Out of Four Americans Have Access to the Internet, 18 March 2004. 18 Aug 2006. http://126.96.36.199/search?q=cache:2x3yOAo3wlkJ:www.nielsennetratings.com/pr/pr_040318.pdf+percent+of+americans+have+internet&hl=en&gl=us&ct=clnk&cd=4

Oberholzer-Gee, Felix, Music Downloads: Pirates—or Customers?, 21 June 2004, Working Knowledge, Harvard Business School. 18 Aug 2006. http://hbswk.hbs.edu/item/4206.html

Koman, Richard, Supreme Court Decides Unanimously Against Grokster, 27 June 2005, O'Reilly Digital Media. 18 Aug 2006.
http://www.oreillynet.com/digitalmedia/blog/2005/06/supreme_court_decides_unanimou.html

Lenhart, Amanda, Madden, Mary, Teen Content Creators and Consumers: More than Half of Online Teens Have Created Content for the Internet; and Most Teen Downloaders Think That Getting Free Music Files Is Easy to Do, 11 Feb. 2005, Pew Internet and American Life Project. 18 Aug 2006. http://www.pewinternet.org/PPF/r/166/report_display.asp

Lewis, Oliver, Teenage Life Online: The Rise of the Instant-Message Generation and the Internet's Impact on Friendships and Family Relationships, 21 June 2001, Pew Internet and American Life Project. 18 Aug 2006. http://www.pewinternet.org/PPF/r/36/report_display.asp

Love, James, [IPN] ABA Board of Governors Declines to Endorse 95 Year Corporate Copyright Term, 12 April 2002, American Bar Association. 18 Aug 2006. http://legalminds.lp.findlaw.com/list/info-policy-notes/msg00142.html

Lunenfeld, Peter, Demo or Die, 30 July 1998, Nettime.org. 18 Aug 2006. http://www.nettime.org/Lists-Archives/nettime-l-9807/msg00085.html

The Media Family: Electronic Media in the Lives of Infants, Toddlers, Preschoolers and Their Parents, 24 May 2006, The Kaiser Family Foundation. 18 Aug 2006. http://www.kff.org/entmedia/7500.cfm

Metro-Goldwyn-Mayer Studios Inc. v. Grokster, 04-480, 27 June 2005, Supreme Court of the United States. 18 Aug 2006. http://www.supremecourtus.gov/opinions/04slipopinion.html

More Bandwidth at Lower Cost, The Partnership for Higher Education in Africa. 18 Aug 2006. http://www.foundation-partnership.org/pubs/bandwidth/index.php?chap=chap0&sub=c0g

Nie, Norman, and Hillygus, Sunshine, Where Does Internet Time Come From?: A Reconnaissance, IT & Society 1(2):1-20, 2002.
http://www.stanford.edu/group/siqss/itandsociety/v01i02/v01i02a01.pdf

Online Holiday Shoppers Spent a Total of $30.1 Billion during 2005 Holiday Season, Up 30 Percent from 2004, According to the eSpending Report from Goldman Sachs, 29 Dec. 2005, Nielsen//NetRatings and Harris Interactive. 18 Aug 2006. http://www.harrisinteractive.com/news/allnewsbydate.asp?NewsID=1002

P2P Download Statistics, p2pnet.net. 18 Aug 2006. http://p2pnet.net/story/6541
Persisting Global Inequalities in Health and Well-Being, 2005, Population Reference Bureau. 18 Aug 2006. http://www.prb.org/Template.cfm?Section=PRB&template=/Content/ContentGroups/Datasheets/2005_World_Population_Data_Sheet.htm

Pink, Daniel, "The Book Stops Here," Wired, March 2005, p. 129.

Pinker, Steven, The Blank Slate: The Modern Denial of Human Nature, New York: Penguin Books Ltd., 2002.

Potier, Beth, File Sharing May Boost CD Sales, 15 April 2004, Harvard Gazette Archives. 18 Aug 2006. http://www.news.harvard.edu/gazette/2004/04.15/09-filesharing.html

Podcast, Wikipedia, see "History" for contributors. http://en.wikipedia.org/wiki/Podcast

Price Fixing Since 1996 Caused CD Sales Slowdown, 12 April 2002, Blogaritaville. 18 Aug 2006. http://scriban.com/movabletype/2002_04_12.html

Program on Science, Technology, America, and the Global Economy, Outsourcing America: What's Behind Our National Crisis and How We Can Reclaim American Jobs, 7 June 2005, Woodrow Wilson International Center for Scholars. 18 Aug 2006. http://www.wilsoncenter.org/index.cfm?topic_id=1408&fuseaction=topics.event_summary&event_id=128774

QuoteDB. 18 Aug 2006. http://www.quotedb.com/quotes/3500

Redshaw, Kerry, Gottfried Wilhelm Leibniz (1646-1716). 18 Aug 2006. http://www.kerryr.net/pioneers/leibniz.htm

Seigenthaler, John Sr.: Wikipedia Biography Controversy, posted 29 May 2005, Wikipedia, see "History" for contributors (approximately 500 unblocked contributors, listed by moniker only). 18 Aug 2006. http://en.wikipedia.org/wiki/John_Seigenthaler_Sr._Wikipedia_biography_controversy

Slevin, James, The Internet and Society, Cambridge: Polity Press, 2001.

Schuler, Doug, The Cyber-Knave Conspiracy, a Rant of Our Times, 14 June 1999, Telepolis. 18 Aug 2006. http://www.heise.de/tp/r4/artikel/2/2944/1.html
Shklovski, I., Kiesler, S., and Kraut, R. E., The Internet and Social Interaction: A Meta-analysis and Critique of Studies, 1995-2003, pp. 783, 785, in Kraut, R., Brynin, M., and Kiesler, S., eds., Domesticating Information Technology, Oxford: Oxford University Press. 18 Aug 2006. http://homenet.hcii.cs.cmu.edu/progress/research.html

Statistics Related to Outsourcing, Realtime Technology Solutions. 18 Aug 2006. http://www.rttsweb.com/outsourcing/statistics/

Taylor, Humphrey, On-Line Population Spends an Average of Six Hours on the Internet or Web per Week, 24 March 1999, The Harris Poll. 18 Aug 2006. http://www.harrisinteractive.com/harris_poll/index.asp?PID=58

Valovic, Thomas S., Digital Mythologies: Hidden Complexities of the Internet, p. 56, New Brunswick: Rutgers University Press, 2000.

Wardrip-Fruin, Noah, and Montfort, Nick, eds., The New Media Reader, Cambridge: The MIT Press, 2003.

Winter, Sarah, Research Dollars Recognized, 10 May 2006, Daily Bruin, UCLA. 18 Aug 2006. http://www.dailybruin.ucla.edu/news/articles.asp?ID=37119

Weizenbaum, Joseph, ELIZA--A Computer Program for the Study of Natural Language Communication Between Man and Machine, 1 Jan. 1966, Massachusetts Institute of Technology, Department of Electrical Engineering. 18 Aug 2006. http://i5.nyu.edu/~mm64/x52.9265/january1966.html

Xizer (moniker), Jack Thompson Threatening to Sue Wikipedia, March 2006, Digg.com. 18 Aug 2006. http://digg.com/tech_news/Jack_Thompson_threatening_to_sue_Wikipedia
Published on Nov 13, 2008