

Volume 2, Number 3 Technology, Economy, and Standards October 2009 Editor

Jeremiah Spence

Guest Editors

Yesha Sivan J.H.A. (Jean) Gelissen Robert Bloomfield

Reviewers

Aki Harma Esko Dijk Ger van den Broek Mark Bell Mauro Barbieri Mia Consalvo Ren Reynolds Roland LeGrand Vili Lehdonvirta

Technical Staff

Andrea Muñoz Kelly Jensen Roque Planas Amy Reed

Sponsored in part by:

The Journal of Virtual Worlds Research is owned and published by:

The JVWR is an academic journal. As such, it is dedicated to the open exchange of information. For this reason, JVWR is freely available to individuals and institutions. Copies of this journal or articles in this journal may be distributed for research or educational purposes only free of charge and without permission. However, the JVWR does not grant permission for use of any content in advertisements or advertising supplements or in any manner that would imply an endorsement of any product or service. All uses beyond research or educational purposes require the written permission of the JVWR. Authors who publish in the Journal of Virtual Worlds Research will release their articles under the Creative Commons Attribution No Derivative Works 3.0 United States (cc-by-nd) license. The Journal of Virtual Worlds Research is funded by its sponsors and contributions from readers. If this material is useful to you, please consider making a contribution. To make a contribution online, visit: http://jvwresearch.org/donate.html


Journal of Virtual Worlds Research Volume 2, Number 3 October 2009 “Technology, Economy, and Standards” ISSN: 1941-8477

Table of Contents

• Overview: State of Virtual Worlds Standards in 2009
o Yesha Sivan, Shenkar College of Engineering, Design & Metaverse Labs. Ltd.
• Immersive 3D Environments and Multilinguality: Some Non-Intrusive and Dynamic e-learning-oriented Scenarios based on Textual Information
o Samuel Cruz-Lara, Nadia Bellalem, Lotfi Bellalem and Tarik Osswald, Nancy-Université, LORIA / INRIA Nancy Grand-Est
• Supporting Soundscape Design in Virtual Environments with Content-based Audio Retrieval
o Jordi Janer, Nathaniel Finney, Gerard Roma, Stefan Kersten, Xavier Serra, Universitat Pompeu Fabra, Barcelona
• Payback of Mining Activities Within Entropia Universe
o Markus Falk, Inova Q Inc., Daniel M. Besemann, Hamline University and James M. Bosson, Active Capital Management Ltd.
• Order and Creativity in Virtual Worlds
o Evan W Osborne & Shu Z Schiller, Wright State University
• Standardization in Virtual Worlds: Prevention of False Hope and Undue Fear
o Marco Otte and Johan F. Hoorn, VU University Amsterdam
• Content Level Gateway for Online Virtual Worlds
o S. Van Broeck, M. Van den Broeck and Zhe Lou, Alcatel-Lucent, Belgium
• Machine Ethics for Gambling in the Metaverse: An “EthiCasino”


o Anna Vartapetiance Salmasi and Lee Gillam, University of Surrey, UK
• On the Creation of Standards for Interaction Between Robots and Virtual Worlds
o Alex Juarez, Christoph Bartneck and Lou Feijs, Eindhoven University of Technology
• Virtual Chironomia: Developing Non-verbal Communication Standards in Virtual Worlds
o Gustav Verhulsdonck, New Mexico State University
o Jacquelyn Ford Morie, University of Southern California
• An Experiment in Using Virtual Worlds for Scientific Visualization of Self-Gravitating Systems
o Will Meierjurgen Farr, Massachusetts Institute of Technology
o Piet Hut, Institute for Advanced Study
o Jeff Ames, Adam Johnson, Genkii
• Measuring Aggregate Production in a Virtual Economy Using Log Data
o Tuukka Lehtiniemi, Helsinki Institute for Information Technology
• Another Endless November: AOL, WoW, and the Corporatization of a Niche Market
o Ray op'tLand, University of Calgary
• Piracy vs. Control: Models of Virtual World Governance and Their Impact on Player and User Experience
o Melissa de Zwart, University of South Australia
• Synthetic Excellence: Standards, Play, and Unintended Outcomes
o D. Linda Garcia and Garrison LeMasters, Georgetown University
• Virtual Worlds, Collaboratively Built
o Philip Rosedale, Linden Lab
• Barriers to Efficient Virtual Business Transactions
o ArminasX Saiman, Virtual Business Owner/Operator
• The Role of Interoperability in Virtual Worlds, Analysis of the Specific Cases of Avatars
o Blagica Jovanova, Marius Preda, Françoise Preteux, Institut TELECOM / TELECOM SudParis, France


• Universal Design: Including Everyone in Virtual World Design
o Alice Krueger, Ann Ludwig and David Ludwig, Virtual Ability, Inc.
• Real Standards for Virtual Worlds: Why and How?
o Kai Jakobs, RWTH Aachen University
• Virtual World Interoperability: Let Use Cases Drive Design
o Jon Watte, Forterra Systems
• Lindman Design: Virtual World Experiences
o Ludvaig Lindman, Lindman Design
• Introduction to MPEG-V
o Jean H.A. Gelissen, Philips Research
• World of Bizcraft
o Robert Bloomfield, Cornell University


Volume 2, Number 3 Technology, Economy, and Standards October 2009

Editor’s Corner

Overview: State of Virtual Worlds Standards in 2009

By Yesha Sivan, Shenkar College of Engineering, Design & Metaverse Labs. Ltd.

Abstract

This paper serves as an introduction to the special issue of the Journal of Virtual Worlds Research on Technology, Economy, and Standards. It starts with a set of assumptions about the nature of virtual worlds, their potential, and the role standards play within them. The second section includes a more detailed discussion of the definition of virtual worlds (as an integration of four factors: 3D, Community, Creation, and Commerce, aka 3D3C). The third section covers a general framework for standards: the five dimensions of standards (Level, Purpose, Effect, Sponsor, Stage), as well as a reflective discussion about the origin and value of such a framework. The fourth section connects standards with virtual worlds, advocating a stacked approach to standards and components. The final section includes a review of, and an invitation to read, the papers in this issue.

Keywords: virtual worlds; 3D3C; dimensions of standards; MPEG-V.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research - Overview: State of VWs Standards in 2009

Editor’s Corner

Overview: State of Virtual Worlds Standards in 2009

By Yesha Sivan, Shenkar College of Engineering, Design & Metaverse Labs. Ltd.

As a fast introduction to the topic of “standards for virtual worlds,” imagine you are a shirt designer, and then consider these four questions:

1. Can I really make money by selling virtual shirts to my friends?
2. When can I make money by selling virtual shirts to my friends?
3. What needs to occur before I can make money from selling shirts?
4. What specific standards do we need to allow me to make money from selling shirts?

The first question is about the nature of virtual worlds. It is a general question asked by many when they first hear about virtual worlds. The idea of making real money from selling virtual items is daunting—almost scary. Yet this idea of virtual goods is usually a low hurdle. Examples like virtual music (you buy a song on iTunes), virtual movies (you rent a movie from Netflix), and subscriptions to games (like World of Warcraft) make the idea of virtual shirts very concrete: goods that are digital in nature and not tangible. In fact, we are surrounded by virtual goods in the form of services or digital products. What’s more, many of the real products that we make, or get, include virtual components (consider the mobile phone, where the cost of the physical handset is relatively low compared to the cost of the services we get). And this is only the beginning; the potential for virtual worlds in health, entertainment, learning, and many other fields is immense.

The second question is about timing. Once you appreciate the potential of virtual worlds and their potential benefits in a variety of fields, it is natural to ponder when this potential will become reality. This is an important question for people who want to use virtual worlds for real results (that is, beyond tests, experiments, pilots, proofs of concept, etc.).
The adoption of full virtual worlds such as Second Life is a good example. I estimate that by mid-2009 about one million users were visiting Second Life at least once a month. Many of them (I estimate 200,000) probably spend a few hours a week. However, compared with web-based experiences like Twitter, Facebook, or LinkedIn, these are small numbers (Facebook reached 300 million users just recently). Currently, churn is a major challenge: many people try virtual worlds; very few stay. People are excited to start, but simply do not stay or continue to gain value from virtual worlds. We are clearly just starting.

The third question is about the missing components. If the potential of virtual worlds is big, yet current use is limited, what components do we need to move forward? What barriers prevent users from using virtual worlds effectively, and what barriers prevent organizations from using them? The answer is probably a combination of factors—mostly technological, some social. To expand, we need server power, client power, bandwidth, and easier interfaces. We need users who can overcome the interfaces and engage in virtual worlds, as well as organizations and creators that know how to create virtual world engagements for these users.

The fourth question focuses on one key answer: standards. The claim of the fourth question is that standards are a (if not “the”) key to the long-term prosperity of virtual worlds. Successful media of the past like the telephone, the television, mobile phones, and of course the



internet, are all based on standards. Virtual worlds, as a new medium, will also need to be based on common standards. The search for the right standards, along with the work of establishing them, evolving them, and driving their adoption, is a long-term effort. Behind common media we always have standards doing their regulatory work. The internet, the web, and mobile phones are all supported by behind-the-scenes teams that worry, care, argue, fight, bribe, beg, and smile to arrive at common standards. The work of the IETF, W3C, and GSM teams (the main organizations that manage the internet, web, and mobile industries, respectively) is full of exotic meeting sites, endless document writing, balloting, and lots of acronyms like TCP/IP, XML, and 3G.

This special issue of the Journal of Virtual Worlds Research is part of the effort to explore the field of standards for virtual worlds. Working on such standards is a process that is both technical and conceptual. At the risk of being too technical, let me share one helpful example. Last year (2008), as part of a call for technical inputs, I received numerous inputs, specifically from Sun Wonderland (https://lg3d-wonderland.dev.java.net/), Web3d (http://web3d.org/), Openmetaverse (http://www.openmetaverse.org/), and various business people in virtual worlds (“merchants”). Following further inputs, I also looked more closely at various building blocks such as OpenID (http://openid.net/) and Collada (http://www.collada.org). Such players, terms, technologies, and agendas are part of the background. As we delved further, we were able to identify many good practical and theoretical endeavors. Such endeavors enhance, explicate, and analyze various aspects of standards and virtual worlds. We were looking to establish a stage for these efforts, and as a result, this special issue was born.

The Journal of Virtual Worlds Research, founded by Jeremiah Spence, has established itself as a leading authority in the field.
By focusing solely on virtual worlds, it has been able to assemble the latest research, theory, and practice pertaining to this relatively unexplored area of enterprise and thought. Past issues dealt with Virtual Worlds Research: Past, Present & Future; Consumer Behavior in Virtual Worlds; Cultures of Virtual Worlds; Pedagogy, Education and Innovation in Virtual Worlds; and 3D Virtual Worlds for Health and Healthcare. The potential of virtual worlds is engagingly described on the virtual pages of the journal.

This issue was designed to give voice to leading theoretical and practical players working in the realm of standards. We specifically chose to emphasize the disciplines of economy and technology as critical harbingers of the endeavor of standards. Together with my co-editors Robert Bloomfield (from the Johnson Graduate School of Management at Cornell University) and Jean H.A. Gelissen (from Philips Research), we looked to explicate some of the deeper corners of the field. In our call (http://www.metaverse-labs.com/tes), we invited, and received, scholarship pertaining to topics such as:

• Specific standards or families of standards that can impact virtual worlds
• Economic analysis of specific standards for specific firms
• Discussion of privacy, authentication, and related issues (for example OpenID)
• Legal aspects of virtual worlds that can be set in the technical specifications
• Review of relevant technology platforms—along with their pros and their cons
• Case studies of large-scale standardization efforts (Windows, Linux, GSM) and the lessons learned from them that can be applied to the development of virtual worlds
• Visions of the virtual world’s universal access system (network and station)



• Comparing related terms such as working code, for- and not-for-profit efforts, open sources, and formal systems
• Key places where standards matter (looking for the mouse and windows of virtual worlds), in other words the interfaces with the real (physical) world
• Economic analysis of various externalities in the field
• Winning stories of standards in the field (be they private, public, open, etc.)
• Examples of wrong standards, failed standards, and other things to learn from
• Short-term winnings (VRML) vs. long-term value
• Evaluation of whether we need to add to current standards so that they will be used in virtual worlds (ISBN 3D, OpenID3D, etc.)
• The impact of open standards on closed systems (Android); the impact of proprietary technology (iPhone)
• The connection between various legal formats (GPL, LGPL) and new technologies (i.e., grid/cloud for virtual worlds)

As editors we specifically encouraged short papers on specific examples (past, present, or future). We asked the authors to assume the readers would be well-versed in various aspects of virtual worlds, but not necessarily in economy, technology, or standards. Thinking about standards for virtual worlds is a daunting task. The goal of the issue is to present possible methods of thinking, rather than construct an exhaustive review of standards. There are many forces at work in this area—many competing technologies, business models, and personal, corporate, and public interests. However, our goal for this issue is modest: to enhance and deepen the discussion about standards for virtual worlds. Let’s establish a few assumptions for this issue as we have shared them with the authors:

1. Virtual worlds are destined to become “big.” That is, big in the sense of meaningful, influential, and lucrative for various current and new players. Every aspect of our lives will be affected by virtual worlds. Beyond being simply another medium, virtual worlds will be part of our regular lives, and they are going to enhance, improve, and better our quality of life. Much like the internet, virtual worlds will allow us to do “traditional” things more effectively, and try out entirely new things as well.

2. Real virtual worlds are defined as an integration of four factors: a 3D view of the world, Community, Creation, and Commerce (aka 3D3C). The more we have of these factors, the closer we get to real virtual worlds. In that sense IMVU, Second Life, and Entropia are more real virtual worlds than Club Penguin, World of Warcraft, and The Sims Online. This preliminary perspective (“3D3C”) on virtual worlds will be covered in the second section of this introduction paper.

3. “Standards” as a concept and mechanism are often misunderstood. People often link standards with competing concepts: open and free on the one hand, and proprietary patents and limits on creativity on the other.
Like many other human constructs, standards are not inherently good or bad – what you do with a standard gives it its positive or negative value. One overarching framework for standards, called “the five dimensions of standards,” will be covered in the third section of this paper.




4. A stacked approach is better than a monolithic approach. Currently the virtual worlds industry operates more like the computer gaming industry than like the internet industry. Each developer, be it private (e.g., Linden, Forterra) or open source (e.g., Sun Darkstar, OpenSim), develops its own server, client, and rules of engagement. The inherent rationale of these efforts is a combination of “we know best” and “we will conquer the world.” While this may be the case (see Microsoft Windows, Apple iPod, or Google search), I believe the common public good calls for a connected system like the internet, where different forces can innovate in particular spots of the value chain. I will be happy if one firm or organization succeeds in capturing the virtual worlds market and allows it to blossom (MS-Windows, for example, facilitated the entire PC industry, Google is now doing the same for grid computing, and Apple is reshaping our relationship with mobile devices). Yet, I think, the virtual world market is too complex and long-term in nature for one firm. Thus a more inclusive model (like the internet) may be more appropriate. The call for a stacked approach is developed further in the fourth section of the paper.

5. The market today (2009): many players, one leader. There are many players in the field – all with various goals and takes on the field. Some of these players may have a direct and meaningful contribution to make. Currently the open-Second Life ecosystem has the potential to turn into the standard. The cooperation between Linden and open-source work seems to advance the state of the art. Yet some voices see this endeavor as Linden’s attempt (planned or not) to stall the larger goal of standards. Standards are not always about technical value; they are more often about business models. I have asked Second Life’s founder and de-facto leader (also known as the god of gods) to share his perspective on standards and the world of Second Life.

6. My personal take.
This work is part of an effort to build a community around standards for virtual worlds. I started this work with the EU-based Metaverse1 consortium, which includes about 30 organizations, mostly based in Europe, working to set “global standards between real and virtual worlds.” This work will feed into MPEG-V (the Moving Picture Experts Group Virtual Worlds standard). The MPEG group is part of the International Organization for Standardization (ISO). This starting point will be covered by Jean Gelissen in his paper about the MPEG-V effort.

7. We are just starting. The efforts to develop standards for virtual worlds are just starting. It will take time. At this point, we are defining the path. We have a long way to go.

We now move to the expanded definition of virtual worlds.

Introduction to Virtual Worlds: Integrating 3D3C

Virtual worlds are an emerging medium that is constantly creeping into our lives. Following the success of such gaming worlds as World of Warcraft, The Sims, and others, terms like 3D, avatars, chat, and real money are rising. For individuals, new forms of interactive entertainment, mostly social, are also pushing virtual worlds. For the enterprise, the drive to save travel costs and the need to gain new customers and retain current ones push this trend even further (Murugesan, 2008).




I maintain that real virtual worlds will, eventually, offer a paradigm shift. What we see now with Second Life, World of Warcraft, Club Penguin, and more than 100 other worlds is just the beginning. In comparison to the internet age, we are at the “Gopher” stage (Gopher was a pre-browser method to view hyperlinked data). This budding arena of real virtual worlds has its roots in two fields: virtual reality (Burdea & Coiffet, 2003), including augmented reality (Bimber & Raskar, 2005), and gaming worlds (Bartle, 2004; Alexander, 2003; Alexander, 2005; Taylor, 2006). Other related fields also affecting virtual worlds include, but are not limited to, economy (for example, of virtual goods), sociology (nature of communities), law (copyrights and ownership), biology (new brain-based human-computer interfaces), computer science (performance, reliability, and scalability), and mathematics (algorithms for 3D rendering and animation).

I use the adjective “real” to distinguish between virtual worlds and gaming worlds. “Real” implies a potential reaching further than imagined today. While today’s virtual worlds are clearly used mostly for fun and games, real virtual worlds have the capacity to alter our lives. (Note: for the sake of brevity, from here on I will use virtual worlds or simply worlds.)

I define real virtual worlds as an aggregate of four factors:

1. A 3D World – A three-dimensional representation that is viewable from various perspectives, active, and reactive. In a virtual world viewers can see objects like avatars, houses, and cars. The world has land, a sky, a sun (maybe more than one), wind, gravity, water, and fire. Avatars move around freely, and the user can examine the world from different points of view. Further, the world is active (including moving objects) and reactive (objects can act in a similar way as they do in the physical world).

2. Community – A set of tools that allow communities to operate (including groups, sub-groups, permissions, leadership, friends, etc.). Virtual worlds allow users, via their avatars, to meet, chat, shop, watch performances, hang out with friends, team up to fight bad guys, go clubbing ... in other words, to interact in countless ways. Within “community,” I include related concepts such as groups, permissions, rights, and roles.

3. Creation – A set of tools that allow users to create in-world, or import content from the real world. Creation includes actions such as arranging, creating, repurposing, and performing. Creation refers to both objects and services. Second Life’s (SL) greatest technological achievement was giving users the capability to develop their own objects in-world, interactively. Users can simply move preconstructed objects from one place to another (say, to furnish a home or set up a nightclub), or they may assemble an object (e.g., a house) from basic components, such as walls and ceilings, and then “paint” them with various textures. SL’s programming language, Linden Script Language, even allows users to program behavioral attributes for their objects, so that fish can swim in schools, golf balls can arc through the air, guns can shoot, and people can dance (as the script activates the animation).

4. Commerce – The ability to connect with real money, including payment, transfer of funds from one object/player to another, and a facility to transfer money between the virtual world and the real world. As an example, SL’s maker, Linden Lab, has created the Linden Dollar (L$), which has a defined exchange rate with the US dollar



(one US$ fluctuates around L$260). This L$ currency is the base for the economy of SL. You can exchange L$ for US$ immediately and at any time at the Linden Exchange. For instance, if you earn L$2,600 from tips, you could exchange them for about US$10, which would be immediately transferred to your real PayPal or bank account. Going the other way, if you need L$5,200 for a new car, you could immediately buy them for about US$20.

Ultimately, real virtual worlds arise from the integration of 3D, Community, Creation, and Commerce. SL reveals the emergence of this integration (and thus I, like others, use this specific world as the primary example of virtual worlds). In SL you will find prices for objects, permissions (i.e., an object may be restricted from being sold), and ownerships. Commerce is embedded into the world. For example, let us assume that we enjoy Beth’s singing (Beth is a real-world singer who performs from time to time in SL) and wish to tip her. We point to her and transfer money by clicking a button. If Beth wants to buy a new blouse, she goes to a shop, points to the blouse of her choice, and buys it for L$2,000. The blouse is a unique object in this world, and Beth will not be able to copy it. The shopkeeper will receive L$500 for the blouse, and the blouse manufacturer will receive L$1,500 (in accordance with a previously defined business agreement between them). At the end of the month, the shopkeeper will pay rent to the landowners, also based on a predetermined agreement.

Second Life is not the only virtual world with a thriving “real” economy. The Entropia Universe also has a cash-based economy (with a fixed rate of 10 “PED” to one US$), and its maker, MindArk PE AB, has even received preliminary approval for an actual banking license from the Swedish Financial Supervisory Authority. This would allow its users to conduct real-world banking transactions from within the Entropia Universe (Thompson, 2009). IMVU is another example.
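The exchange and revenue-split arithmetic above can be sketched in a few lines. The L$260-per-US$ rate and the L$500/L$1,500 split are the approximate, illustrative figures from the text, and the function names are my own invention, not Linden Lab's API:

```python
# Hedged sketch: converting between Linden Dollars (L$) and US$ at the
# approximate rate quoted in the text (~L$260 per US$1). All figures are
# illustrative assumptions.

LD_PER_USD = 260  # approximate exchange rate from the text

def linden_to_usd(linden):
    """Convert L$ to US$ at the assumed rate."""
    return linden / LD_PER_USD

def usd_to_linden(usd):
    """Convert US$ to L$ at the assumed rate."""
    return usd * LD_PER_USD

# The examples from the text:
print(round(linden_to_usd(2600)))  # L$2,600 in tips -> about US$10
print(round(usd_to_linden(20)))    # US$20 buys about L$5,200

# The blouse sale: L$2,000 split between shopkeeper and manufacturer
blouse_price = 2000
shopkeeper_share = 500
manufacturer_share = blouse_price - shopkeeper_share
print(manufacturer_share)          # L$1,500 to the manufacturer
```

In practice the rate fluctuates, so a real conversion would query the current exchange rate rather than hard-code it.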
This integration of a 3D world, organized and managed communities, immediate creation capabilities for objects and services, and a virtual commerce which actually becomes real, is the basic allure of SL in particular and of real virtual worlds in general.

Introduction to Standards: The Five Dimensions

From Industrial Age to Knowledge Age Standards

Almost every aspect of our life is supported and often shaped by standards. Consider, for example, the work you are now reading. It has a table of contents (a common standard for quick access); it has page numbers (another quick-access device); it uses a standard language, a standard font, and a standard paper size. In the making of this work, both directly and indirectly, I have used dozens of other standards, among them the PostScript page description language, the internet, the Harvard online library system, the QWERTY keyboard, the Microsoft Word program, and many more.

Assume that you are sitting in a typical kitchen of a typical home anywhere in the industrialized world. Look around you. All electrical appliances share the same electric current. You need a fan? Move it from another room, plug it in, and enjoy the cool breeze. Want some music? No problem; grab any CD and a CD player, put the CD in the player, press “play,” and enjoy the sounds. Notice that you can take any CD from any vendor and replay it in any other CD



player, anywhere in the world (legal standards, burned into technical standards, will not allow you to do so with DVDs, given that they have regions).

Assume that you are in a car. Look around you. First and foremost you are faced with the issue of fuel, which you can get at any fuel station down the street. Look at your tires. You can choose any kind, as long as they match the standard specifications of your car. Consider the plate number, registration, and mandated insurance, or the traffic signs, directional lights, emission standards, and the radio. All involve some standards.

What were the roles standards played in the industrial age? Well, as the above examples suggest, standards played diverse roles. One researcher suggested the following laundry list:

A standard is a formulation established verbally, in writing or by any other graphical method, or by means of a model, sample, or other physical means of representation, to serve during a certain period of time for defining, designing or specifying certain features of a unit or basis of measurement, a physical object, an action, a process, a method, a practice, a capacity, a function, a duty, a right, a responsibility, a behavior, an attitude, a concept or a conception, or a combination of these, with the object of promoting economy and efficiency in production, disposal, regulation and/or utilization of goods and services, by providing a common ground of understanding among producers, dealers, consumers, users, technologists and other groups concerned (Verman, 1973, and the original Gaillard, 1934).

While comprehensive, this list has no zest, charm, or appeal. Such a definition often deters people because it does not provoke them in any meaningful way. What I needed personally, and what I felt others need in order to embrace the concept of standards, is a strong evocative image that captures the critical facets of the phenomenon of standards.
To grasp fully the scale of the change in the roles of standards, one must grasp the changing nature of standards themselves over the last two to three decades—a change that will intensify in the years to come. The standardization community, which includes the private, national, and international bodies that produced the standards of the industrial age, has to adapt itself to the new roles of standards in the knowledge age. For example, the International Organization for Standardization (ISO), in its report A Vision for the Future: Standards Needs for Emerging Technologies (1990), claims that traditional industrial-age innovation followed a linear sequence from scientific discovery to applied research and development, followed by production and marketing. This linear sequence, according to the ISO, “must now be seen as a series of concurrent interactive processes.” As a result, the report calls for structural changes in the setting of international standards. This means that while in the industrial age one first created a product and then standardized it, in the knowledge age one often needs the standards before the products. Also, in many cases, especially in information technology industries, compatibility with previous standards is a necessary condition even to enter the market.

In another example, the U.S. Congressional Office of Technology Assessment (OTA), in its report Global Standards: Building Blocks for the Future (1992), claims that the “emergence of a global economy in which the United States no longer plays the predominant role” will call for more and different global standardization. The report also discusses other aspects of standards




in the knowledge age, such as the growth of international standardization efforts and the effect of multinational organizations.

IT-related standardization is shifting. The recent (2008) spat between the IBM-backed OpenDocument format and Microsoft’s ECMA OpenXML (Blind, 2008) has exposed – again – the tension that this process generates. I assume that tension means value. In that regard, IBM’s Standards on Standards provides a good overview of the latest industry unease with the process. The rise of open-source processes, including strong competing players, calls for more transparency and web tools to build standards. Therefore, the sheer complexity of knowledge-based standards is a mounting challenge.

Origin and Nature of the Dimensions Framework

Before I could actually start developing the framework (for the full discussion see Sivan, 2000), I had to figure out a good format for it. Luckily, early in my journey, I found what seemed to be a good candidate. This format was published in Lal Verman’s 1973 seminal work Standardization: A New Discipline. In his book, Verman, who was Director General of the Indian Standards Institution from 1947 to 1955, proposed a three-dimensional standardization space as a “logical means of presenting standardization.”

Verman’s approach to mapping the concept of standards can best be demonstrated by using a simplified example. Suppose we want to understand the concept of “shirts,” as discussed earlier. According to Verman, we first have to find the three major dimensions, or attributes, of shirts. For the sake of the example, let’s say that these are the dimensions of color (categories include: black, white, red, yellow, and blue), kind (categories include: fun shirt, work shirt, evening shirt), and size (categories include: small, medium, and large). Then, following Verman, we arrange these dimensions in a three-dimensional space. Each point in the space represents a potential question that one can ask about shirts.
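Verman's dimensional space of shirts can be sketched as a Cartesian product of category lists. The dimension names and categories come from the text; the code structure itself is my own illustration of the idea, not anything from Verman's book:

```python
# A minimal sketch of Verman's three-dimensional standardization space,
# using the shirt example from the text. Each point in the space is one
# combination of categories -- i.e., one potential question about shirts.
from itertools import product

dimensions = {
    "color": ["black", "white", "red", "yellow", "blue"],
    "kind": ["fun shirt", "work shirt", "evening shirt"],
    "size": ["small", "medium", "large"],
}

points = list(product(*dimensions.values()))
print(len(points))  # 5 colors * 3 kinds * 3 sizes = 45 points
print(points[0])    # first point: ('black', 'fun shirt', 'small')
```

Adding a fourth dimension (e.g., shape) is just one more list in the dictionary, which is why the space generalizes beyond its spatial three-dimensional picture.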
For example, who uses a black, long-sleeved fun shirt? Or, what can we say about work shirts in terms of color or kind? (Note that the dimensions generate questions, not answers.) Verman himself explained that the three-dimensional space should not be taken in its strict mathematical sense, but rather as a way to look systematically at the phenomenon of standards. He also suggested adding further dimensions, which go beyond the spatial representation of the first three. To continue with our shirts example, we can add, as a fourth dimension, the shape of the shirt (categories: long sleeves, short sleeves, has buttons, has pockets).

In general, frameworks like Verman's, which attempt to classify a concept systematically, are often used to create a shared map of that concept. Like other maps, they model a complex concept by capturing some of its important dimensions. Their main purpose is to "serve as instruments of understanding," which they achieve by highlighting the critical dimensions of the land. Like other frameworks, models, and maps that assist in describing and analyzing their respective domains, a framework of standards should create a common vocabulary, and thus assist in describing and analyzing the domain of standards.

The preliminary research also raised, again, the inherent pitfalls of such a framework. Like all maps, a dimensional framework has limitations. Not only can it highlight only parts of the terrain, it may also distort some of the terrain's features. Like the blue line on a map that marks a



river that may be dry, certain dimensions that the framework describes in a particular way may look quite different in the real world. In the same way that it is not possible to capture the true color of every river, it is impossible to capture the actual meaning of each dimension in the real world. After all, a map is just a map, and it is not the actual land (Kent, 1978).

The Five Dimensions

The principal result of this work is a framework for standards, which has five dimensions. Each dimension has five categories, which together explicate the dimension.

Table 1: Summary of the Five Dimensions of Standards

Dimension 1:      Dimension 2:     Dimension 3:   Dimension 4:     Dimension 5:
Level             Purpose          Effect         Sponsor          Stage

Individual        Simplification   Constructive   Devoid           Missing
Organizational    Communication    Positive       Nonsponsored     Emerging
Associational     Harmonization    Unknown        Unisponsored     Existing
National          Protection       Negative       Multisponsored   Declining
Multinational     Valuation        Destructive    Mandated         Dying

Source: Sivan, 2000, Box 5.2, "Summary of the Five Dimensions"
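Verman's point that each point in a dimensional space is a question, not an answer, carries over directly to the five dimensions above. A small Python sketch (the code and names are my own; only the category labels come from Table 1) enumerates the resulting question space:

```python
from itertools import product

# Categories of the five dimensions, taken from Table 1 (Sivan, 2000).
dimensions = {
    "Level": ["individual", "organizational", "associational",
              "national", "multinational"],
    "Purpose": ["simplification", "communication", "harmonization",
                "protection", "valuation"],
    "Effect": ["constructive", "positive", "unknown",
               "negative", "destructive"],
    "Sponsor": ["devoid", "nonsponsored", "unisponsored",
                "multisponsored", "mandated"],
    "Stage": ["missing", "emerging", "existing", "declining", "dying"],
}

# Each point in the space is a potential question about some standard.
points = list(product(*dimensions.values()))
print(len(points))  # 5**5 = 3125 potential questions

level, purpose, effect, sponsor, stage = points[0]
print(f"Which {level}-level standard aims at {purpose}, has a "
      f"{effect} effect, a {sponsor} sponsor, and is {stage}?")
```

As with Verman's space, most of these 3,125 points will never be asked; the value of the enumeration is that it makes the coverage of the framework explicit.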

The framework is best illustrated by showing how the five dimensions work in a real context. So, for the purpose of this overview, I would like to give you a taste of the framework. I am well aware that at this point some of the categories probably look cryptic (e.g., Harmonization) or even totally unclear (e.g., Unisponsored). Still, even at this early stage, I believe it is possible, and important, to give you a taste of the generality, utility, and potential value of the framework. Our goal in this overview is to taste the nature and value of the framework while acknowledging these yet-to-be-explained categories.

I say "our" and "we" because you, the reader, will also play an active part in this overview. Together, by me asking questions and you giving answers, we will examine the five dimensions of the framework by applying it to a concrete example. First, I ask you to spend a few seconds selecting a standard that particularly interests you. You can use any of the standards presented in the introduction, or ones that you see, or would like to see, around you. You can choose the cable standard (short name: "Cable"), the standard for computer-based characters ("ASCII"), the structure and size of credit cards ("Credit card"), tests like the Scholastic Aptitude Test ("SAT"), or the fact that you need a tie in some restaurants ("Tie-in-a-restaurant"). Better yet, you may want to select a standard from your own setting. (You don't have to spend too much time; in talks I have given about standards, I have found that the first thing that comes to mind will usually suffice.) In any case, make sure that you have a name for the standard, preferably a short one (up to four words is best). Then, in the following paragraphs, we will use the framework together to ponder the Level, Purpose, Effect, Sponsor, and Stage of your standard.
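As you walk through the five dimensions, it may help to keep score in a small record, one field per dimension. A minimal Python sketch (the class and its fields are my own construction; the category names come from Table 1, and the SAT assignments are illustrative guesses):

```python
from dataclasses import dataclass, field

@dataclass
class StandardProfile:
    """One standard described along the five dimensions of Table 1.

    Each dimension may hold several categories at once, and a dimension
    that does not apply to the chosen standard simply stays empty.
    """
    name: str
    level: set = field(default_factory=set)
    purpose: set = field(default_factory=set)
    effect: set = field(default_factory=set)
    sponsor: set = field(default_factory=set)
    stage: set = field(default_factory=set)

    def applicable(self):
        # Count how many dimensions we could actually fill in.
        dims = (self.level, self.purpose, self.effect,
                self.sponsor, self.stage)
        return sum(1 for d in dims if d)

# Illustrative scoring of the SAT standard: students and universities
# as users (Level), a single producing organization (Sponsor), and a
# well-established process (Stage). The exact assignments are guesses.
sat = StandardProfile(
    name="SAT",
    level={"individual", "organizational"},
    sponsor={"unisponsored"},
    stage={"existing"},
)
print(sat.name, sat.applicable())  # SAT 3
```

Leaving a field empty is itself informative: as discussed below, some dimensions simply will not apply to some standards.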



The Level dimension will prompt us to think about the users and producers of the standard. For example, if you chose the SAT standard, then the users are students (Level-individual) and universities (Level-organizational), and the producer, a single one in this case, is the Educational Testing Service (Level-organizational). Who uses your standard? Is it used by individuals, organizations, or even nations or the entire world? Was it developed by one of the international bodies, or perhaps by an association of companies? Or was it developed by a particular person?

The Purpose dimension will prompt us to think about the aims, both intended and actual, of standards. For example, the "Tie-in-a-restaurant" standard is aimed at maintaining a respectful clientele and protecting those who want to get their money's worth in terms of ambiance (Purpose-protection). What about your standard? Perhaps it was originally intended just to create vocabulary, or perhaps it was intended to protect consumers from potential harm. Some standards, and yours may be among them, were originally designed to support simplification, but were later used to support protection.

The Effect dimension will prompt us to consider the pros and cons, the benefits and problems, and the payoffs and tradeoffs that standards have. If you chose the Cable standard, then a payoff would be the diverse channels that we can now enjoy (Effect-positive) and a tradeoff would be the monopolistic system in which the cable industry operates (Effect-negative). What about your standard? For example, it may currently have positive Effects on one organization, but long-term negative, and perhaps even destructive, Effects on another. Or just the opposite: it may have negative Effects now, but constructive Effects in the future. We may also find that we know basically nothing about the Effects of your standard.

The Sponsor dimension will prompt us to consider the origin of the standard.
In the case of the credit card size, the sponsor is the International Organization for Standardization (Sponsor-multisponsored). Who developed your standard? Can you identify it? Was it a single entity that is making lots of money off it? Or perhaps a not-for-profit coalition of many organizations? Is it a standard with a punishment attached to it, or just a recommendation?

The Stage dimension will prompt us to think about the process of making the standard. For example, the ASCII standard is well established (Stage-existing), although there is some discussion about extending ASCII to cover non-Latin scripts (such as Arabic and Hebrew). What about your standard? Does it already exist? Is it widely used by many people? Perhaps its use is already declining, as its negative Effects overcome its positive Effects?

The above brief mental experiment should give you a taste of the framework's workings. In essence, the five dimensions act as mental prisms. Like real prisms, which break down light into its basic colors, the dimensions can be used to break down and analyze an object into its basic components. The object in question can be a particular standard, a setting, a view, or some other target of analysis that involves standards. In some cases, with certain objects, several categories or even whole dimensions will not be applicable. Yet, by having all five dimensions in our mental arsenal, we equip ourselves with a general tool. The price of this generality is the lack of applicability of some dimensions to some cases. This may explain why, in the above mental experiment, you might have found that particular dimensions did not relate to your selected standard. Armed with the framework, we can now turn to examine the potential of standards for virtual worlds, and specifically their role in enabling innovation:



The Promise of Standards for 3D3C Virtual Worlds: In Praise of the Stacked Approach

I just got new 3D goggles (Vuzix iWear VR920, for $400). This relatively inexpensive device allows you to view a virtual world by simply turning your head: when you look up, you see the sky; when you look down, you see your legs (your avatar's legs). When the item arrived, I had to install a special driver for Second Life. Even then, it did not work with the latest version of Second Life, which meant an older version had to be installed (not a simple task if Second Life has mandated the latest version). Furthermore, it did not work with IMVU, nor with Sun's Darkstar/Wonderland. In contrast, almost any computer screen that you connect to a computer works, and any mouse works by simply plugging it in. Standards mean better connectivity and ease of use (no need to install drivers, track versions, and so on). More so, standards mean that more users will buy the 3D goggles, and prices could go down to perhaps $200 or $100. Once standards are common, other firms may find it lucrative to enter, raising competition, lowering cost, and gaining features and quality (which may not be such good news for Vuzix). This is the most important value of standards: standards allow innovation at specific points of the value chain, innovation that we need if we want to reach the full potential of virtual worlds.

Often, the first example that comes to mind when talking about virtual world standards is the "Travatar," an avatar that allows you to travel from one world to another. The discussion about Travatars that travel from Second Life to World of Warcraft and back hides a much deeper issue. What I want is one avatar (maybe two or three avatars), all mine, all walking in worlds that share the same basic interface, basic creation tools, basic friends list, and basic commercial system. I want to use the money I make from selling songs in Second Life to buy space to hold meetings in Qwaq.
I want to build a sword in Second Life and use it in World of Warcraft. I want the same sword to be used in a rehabilitation treatment for Parkinson's patients. Standards do not mean uniformity. In the same manner that we have specialized web sites (Amazon, eBay, and YouTube), we will have specialized firms that deal with specific aspects of virtual worlds. These firms will compete on speed, cost, quality, service, and features. They could decide what to focus on. At this point, every firm has to develop all the components: they all develop avatar technology, access, servers, clients, and so on. The market is not efficient. Could you imagine having to use a different browser each time you wanted to go to eBay, or Amazon, or CNN? People would never have started using the internet. This is the current state of virtual worlds. It is no wonder that Second Life, at one time, had 1 million new users a month, only to keep fewer than 5,000 of them six months later (and I am being generous here).

Today, virtual worlds use the monolithic approach. This model works for the gaming worlds (World of Warcraft, etc.): each gaming firm develops its own stack, and by controlling the client, the server, and the rules of the world, the gaming firm gains value in terms of game play. In contrast, the Internet has a stacked approach built on shared protocols and formats (e.g., HTML, TCP/IP, DNS, Flash).




One key benefit of a stacked approach is that it multiplies "innovation points": each actor in the field can focus on specific points of the chain and innovate there. One challenge: virtual worlds are far more complex than the internet (perhaps a hundredfold) and more intertwined (avatars need to wear clothing on different islands and still communicate with their friends).

Delving Into the Details

The papers in this issue look at the state of standards for virtual worlds from four points of view: Technology, Economy, Standards, and Use Cases. After reading this introduction, which sets the scene for all papers (authors were specifically asked to minimize introductory definitions and reviews), the reader can start exploring the issue from any point of view. My preferred perspective starts with specific technologies that facilitate healthy economies that lead to "good" standards, which in turn lead to valuable use cases.

The Technology point of view offers specific examples of places where standards are needed:

• Philip Rosedale reflects, in his short paper "Virtual Worlds, Collaboratively Built," on the process and intention of past, current, and future Second Life.

• Alex Juarez, Christoph Bartneck, and Lou Feijs discuss "Standards for Interaction Between Robots and Virtual Worlds." They propose creating a standard platform that enables seamless interaction between these heterogeneous, distributed devices and systems.

• Sigurd Van Broeck, Mark Van den Broeck, and Zhe Lou discuss "Content Level Gateway for Virtual Worlds." They propose a solution to guard virtual worlds from counterproductive content in the form of 3D models, avatars, textures, animations, or any other type of content commonly used by virtual worlds.

• Samuel Cruz-Lara, Nadia Bellalem, Lotfi Bellalem, and Tarik Osswald discuss, in their paper "Immersive 3D Environments and Multilinguality," some non-intrusive and dynamic e-learning-oriented scenarios based on textual information. Their paper includes a review of some of the leading standards for localization.

• Jordi Janer, Nathaniel Finney, Gerard Roma, Stefan Kersten, and Xavier Serra discuss supporting soundscape design in virtual environments, aiming at a framework for the automatic sonification of virtual worlds.

• Jon Watte presents his perspective on the development of virtual worlds: "Let Use Cases Drive Design." His main claim: "serious" virtual worlds will be the initial market that drives true virtual world interoperability, because of their particular needs.

The Economic point of view demonstrates diverse angles on virtual worlds:

• Tuukka Lehtiniemi discusses "Measuring Aggregate Production" in virtual worlds. He proposes the concept of GUP (Gross User Product), with concrete figures for EVE Online based on extensive log data collected by the operator.

• Evan W. Osborne and Shu Z. Schiller discuss "Order and Creativity" in virtual worlds. Guided by the economic modeling of order and creativity, they discuss two types of behavior, constructive and destructive, to provide some guidelines for establishing limitations on the freedom of action of virtual-economy participants.



• Markus Falk, Daniel M. Besemann, and James M. Bosson discuss "Payback of Mining Activities," focusing on the payback of mining activities within the virtual world Entropia Universe.

• Ray op'tLand discusses "World of Warcraft, AOL, and the Disneyization of a Niche Market." The main thrust is to look at virtual worlds (such as WoW) through the process of Disneyization which, as proposed by Bryman (2004), occurs along four dimensions: theming, hybrid consumption, merchandising, and performativity.

The Standards point of view focuses on specific aspects of standards, in general and in virtual worlds:

• Kai Jakobs covers in depth, in his paper "Real Standards for Virtual Worlds - Why and How?", the necessary background for those who would like to participate pro-actively in the setting of standards for Information and Communication Technologies.

• Marco Otte and Johan Hoorn, in their paper "Prevention of False Hope and Undue Fear," propose standards to measure people's hopes and fears during online transactions, and to connect these to a decision support system that estimates the probability that the user's expectations are right. Their theory development reconciles the literatures on technology acceptance, hope formation, risk perception, and problem solving.

• Melissa de Zwart, in her paper "Piracy vs. Control: Models of Virtual World Governance and Their Impact on Player and User Experience," claims that current models of governance of virtual worlds evolved from the Terms of Service developed by virtual world content creators, based upon intellectual property license models. Increasingly, however, virtual world providers seek to accommodate both the needs and interests of owners and users in order to respond to the evolving needs of the virtual world.

• Blagica Jovanova, Marius Preda, and Françoise Preteux discuss "The Role of Interoperability in Virtual Worlds, via the Analysis of the Specific Cases of Avatars." They provide a detailed survey of research on avatar appearance modeling, deformation control, and animation.

• In a related work, Gustav Verhulsdonck and Jacquelyn Morie discuss, in their short paper "Virtual Chironomia," the need for non-verbal communication standards in virtual worlds.

• Jean H. A. Gelissen, a co-editor of this issue, presents a short review of MPEG-V (Media Context and Control, ISO/IEC 23005), a new effort of the MPEG Working Group within ISO (formally, ISO/IEC JTC 1/SC 29/WG 11). MPEG is a deadline-driven process; the final MPEG-V deadline, for publication of the ISO International Standard (IS), is October 2010.




• Lastly, in a sharp and valuable critique, D. Linda Garcia and Garrison LeMasters, in their paper "Synthetic Excellence: Standards, Play, and Unintended Outcomes," provide a critical view of the standardization of virtual worlds, with a special focus on MPEG-V. Their main point: "This [MPEG-V] is an alarming trend, which could give rise to a number of unfortunate and unforeseen consequences."

The Use Case point of view demonstrates specific cases where standards are needed:

• Will Farr, Piet Hut, Jeff Ames, and Adam Johnson describe their "Experiment in Scientific Visualization of Self-Gravitating Systems." They push to identify what should be defined as parameters of virtual worlds (e.g., gravity), as well as what it means to "store" an experiment in a virtual world.

• Alice Krueger, Ann Ludwig, and David Ludwig, in their short paper "Universal Design: Including Everyone in Virtual World Design," challenge us to think about accessibility design within virtual worlds. Clearly, this is a place where standards could make a real difference.

• Ludvaig Lindman (Real-Avatar®), one of the most creative content makers in Second Life, describes his "Virtual World Experiences" as a business person in virtual worlds.

• Anna Salmasi and Lee Gillam, in their paper "EthiCasino: Machine Ethics for Gambling in the Metaverse," discuss the combined legal and ethical issues of gambling online and in virtual worlds, and the construction and evaluation of a system with computational oversight: an ethical advisor.

• ArminasX Saiman (Real-Avatar®), a leading business owner in Second Life, shares his reflections on "Barriers to Efficient Virtual Business Transactions." The author has owned and operated such a virtual business for over two years, beginning with the sale of a single virtual product on a web-based sales service in 2006 and growing to a large in-world operation selling over 200 unique products today.

• Robert Bloomfield, a co-editor of this issue, sketches the features required of a platform, "World of Bizcraft," that supports virtual worlds dedicated to research and education on business-related topics. His discussion leads to some advanced features that could benefit real virtual businesses, and not just research and education.

I want to thank the authors (real and real-avatar®) for sharing their ideas about virtual world technology, economy, and standards. They demonstrate the global interest in, and the diverse contributions needed for, this endeavor. Special thanks to my co-editors of this issue, Robert Bloomfield and Jean Gelissen, as well as to the Journal team, Jeremiah Spence and Andrea Muñoz. I look forward to advancing standards for virtual worlds with them, and with you, so that we can all enjoy richer, safer, and more powerful virtual worlds sooner.




Bibliography

Alexander, R. (2003). Massively multiplayer game development. Hingham, MA: Charles River Media.

Alexander, R. (2005). Massively multiplayer game development 2. Hingham, MA: Charles River Media.

Bartle, R. (2004). Designing virtual worlds. Berkeley, CA: New Riders Publishing.

Bimber, O., & Raskar, R. (2005). Spatial augmented reality: Merging real and virtual worlds. Wellesley, MA: A K Peters.

Blind, K. (2008). A welfare analysis of standards competition: The example of the ECMA OpenXML standard and the ISO ODF standard. Paper submitted to the 6th ZEW Conference on the Economics of Information and Communication Technologies.

Burdea, G., & Coiffet, P. (2003). Virtual reality technology. Hoboken, NJ: John Wiley & Sons.

Gaillard, J. (1934). Industrial standardization: Its principles and application. New York, NY: H. W. Wilson Company.

http://www.dryesha.com/2008/10/virtual-worlds-sos-q3-2008-state-of.html (Retrieved 11 October, 2008).

http://www.research.ibm.com//files/standards_wikis.shtml (Retrieved 11 October, 2008).

International Organization for Standardization [ISO] (1990). A vision for the future: Standards needs for emerging technologies. Geneva, Switzerland: ISO.

Kent, W. (1978). Data and reality: Basic assumptions in data processing reconsidered. New York, NY: North-Holland.

Murugesan, S. (Ed.) (2008). Finding the real world value in virtual. Cutter IT Journal for Information Technology Management, 21(9).

Office Open XML. (2008). In Wikipedia, The Free Encyclopedia. Retrieved 8 November, 2008, from http://en.wikipedia.org/w/index.php?title=Office_Open_XML&oldid=250256848

OpenDocument. (2008). In Wikipedia, The Free Encyclopedia. Retrieved 8 November, 2008, from http://en.wikipedia.org/w/index.php?title=OpenDocument&oldid=250404349

Office of Technology Assessment [OTA] (1992). Global standards: Building blocks for the future. Washington, DC: OTA.

Perkins, D. N. (1986). Knowledge as design. Hillsdale, NJ: L. Erlbaum Associates. (p. 126)

Sivan, Y. (2008). 3D3C real virtual worlds defined: The immense potential of merging 3D, community, creation, and commerce. Journal of Virtual Worlds Research, 1(1).

Sivan, Y. (2000). Knowledge age standards: A brief introduction to their dimensions. In K. Jakobs (Ed.), IT standards and standardization: A global perspective. Hershey, PA: Idea Group Publishing.

Taylor, T. L. (2006). Play between worlds: Exploring online game culture. Cambridge, MA: MIT Press.

Thompson, M. (2009). Real banking coming to virtual worlds. Ars Technica, 20 March 2009. http://arstechnica.com/gaming/news/2009/03/real-banking-coming-to-virtual-worlds.ars

Verman, L. C. (1973). Standardization: A new discipline. Hamden, CT: Archon Books.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Measuring Aggregate Production in a Virtual Economy Using Log Data By Tuukka Lehtiniemi, Helsinki Institute for Information Technology

Abstract

Virtual worlds contain systems of resource allocation, production, and consumption that are often called virtual economies. A virtual world operator has an incentive to monitor the economy, and users and outside observers benefit from temporal and cross-economy comparisons. Standard methodology of computing macroeconomic aggregates would allow this analysis, but such methodology is currently unavailable. I fill this gap by employing the concepts of national accounting. I focus on virtual economies where the production of new virtual goods takes place as the users expend inputs to produce predetermined outputs along predetermined production paths. Previous attempts at measuring the aggregate production of a virtual economy have been based on non-standard methods and externally collected data. In virtual economies the operator can collect extensive data automatically—a characteristic feature that should be reflected in any standard accounting scheme. Macroeconomic aggregates for a national economy are computed using the System of National Accounts, which is intended for measuring a national economy vis-à-vis the rest of the world. In a virtual economy context, by contrast, I make the distinction between production by the users and creation of goods by the virtual world code. These principles result in an aggregate measure called the Gross User Product, which measures the aggregate output of production activities by the users. I measure GUP for the virtual economy of EVE Online, based on extensive log data collected by the operator. The demonstrated method is generalizable for quantifying virtual economies on the macro level.

Keywords: virtual economy; economics; macroeconomic indicators; aggregate production; inflation.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Measuring Aggregate Production 4

Measuring Aggregate Production in a Virtual Economy Using Log Data By Tuukka Lehtiniemi, Helsinki Institute for Information Technology

From an economic point of view, a virtual world becomes interesting whenever economic interactions exist between its participants. Most popular contemporary virtual worlds have a designed economic system of some sort. These systems mimic real-world economies: the users control virtual property; it may be possible to employ inputs to produce output; there are often markets in which outputs can be traded; and usually there is a virtual currency that is used as a means of exchange. Sometimes there are undesired phenomena like inflation (e.g., Castronova, 2001, p. 33) or hyperinflation (Simpson, 1999) that are, on the surface at least, similar to real-world economic phenomena. These observations have led to naming the designed economies of virtual worlds 'virtual economies' (e.g., Bartle, 2003; Burke, 2002; Castronova, 2001).

Virtual property, and the spaces in which it can be found, are digital, and exist as entries in a service operator's database. Many of the popular virtual worlds are called 'games', a label that has connotations of trivial or negative effects on 'real' life (Yee, 2006, p. 38). When assessing economic value, these facts should be irrelevant: willingness to pay and to sacrifice time should be seen as the ultimate arbiter of significance (Castronova, 2002, p. 15). Virtual objects carry actual, real value. This value is realized via the phenomenon called real-money trading of virtual property. One estimate placed the volume of real-money trading of virtual items at USD 2.1 billion globally in 2006 (Lehtiniemi, 2007). The phenomenon has opened new possibilities for economic analysis: for example, economic experiments have been conducted in virtual worlds (Chesney et al., 2007; Nicklisch & Salz, 2008), and the determinants of prices on secondary markets for virtual property have been evaluated (Castronova, 2004).
The users participate in a virtual world voluntarily, and usually pay for the privilege in some form. A working and sufficiently stable economy is arguably instrumental to a satisfactory user experience. Currently, there are no standard measures of the state of a virtual economy. Using quantitative measures, the operator could monitor the outcomes of design changes. Such measures would also enable economic modeling, a prerequisite for predicting the implications of intended changes. They would benefit the users by offering detailed economic data for decision-making purposes, and outside observers would be interested in comparative analysis between virtual economies.

In this article, I develop a standard way in which operators can quantify virtual economies. The focus is particularly on measuring the level of economic activity. Unlike the operator of a virtual world, who can duplicate any virtual good essentially free of cost on the margin, the users act under strict budget constraints, and it is their actions that deserve attention. As a first approximation, the operator can follow the number of users and the time spent in the virtual world. Better measures, such as an aggregate production measure, allow the operator to do more detailed comparative analysis within the virtual economy. An aggregate production measure was in fact an important part of one of the first studies of virtual economies (Castronova, 2001).




I take a more rigorous approach to measuring the aggregate production of the users, one that exploits the possibility of gathering comprehensive production data from a virtual world. My requirements for the aggregate production measure are as follows. Firstly, it has to represent the activities of the users. Secondly, it has to allow for comparative (for example, temporal) analysis within the virtual economy. Thirdly, it has to enable quantification of the effects of design changes and the use of economic models. And finally, it has to allow for some sort of comparative analysis between virtual economies.

My approach is based on the principles of the United Nations System of National Accounts (SNA), according to which internationally consistent macroeconomic accounts (for example, the GDPs of national economies) are formed. As will be shown, the SNA cannot be used directly for measuring virtual economies, but it makes sense to employ its well-tried principles. In this article, I first outline what I mean by a virtual economy and what basic economic concepts, such as production, mean in its context. Next, I develop an accounting system for measuring the value of goods and services produced by the users during a period of time within the boundaries of a virtual economy, employing standard macroeconomic flow chart analysis. The developed accounting system draws from the SNA, but stresses production by the users, as opposed to production within a national economy. Finally, I show how the aggregate production measure can be computed in practice, employing comprehensive production and market data logged by CCP Games, the operator of the virtual world EVE Online. To my knowledge, my approach and the employed data are unique in the field.

Virtual Economies

The act of designing economy-resembling activities and then giving them the label 'economy' does not invoke an economy, at least not in the sense that the term is used in economics.
For example, the possibility of purchasing virtual goods from the service at set prices does not justify calling the service a virtual economy. Analogously, using the term 'physics' to describe the outcome of a computer program that automates the rules governing the movement of virtual objects in a virtual world does not imply that there are physics inside the virtual world.

Definition

According to a textbook definition, an 'economy' is a system that determines what is produced, who produces it, and who consumes the products (e.g., Stiglitz & Driffill, 2000, pp. 9-10). The decisions and choices are made subject to scarcity. The products can be goods or services. Economies are often, but not necessarily, thought to exist inside certain geographic boundaries. A national economy and the Robinson Crusoe economy make intuitive sense with respect to this definition.

A virtual economy, then, is a system that determines what is produced, as well as by whom and for whom. The products are virtual goods or services, and production happens when a user expends inputs via an online service. Instead of geographical boundaries there is an architectural (cf. Lessig, 1999, p. 25) boundary: the scope of one virtual economy is the extent of the context in which the products can be consumed. For a virtual economy to exist in an online service, some sort of production must be possible: the users have to be able to employ inputs to


Journal of Virtual Worlds Research- Measuring Aggregate Production 6

create outputs. For the allocation part to be possible, the users have to be able to exchange the products. A virtual economy emerges naturally in a typical virtual world in which the following holds: there are massively many users, who spend their ultimately scarce resource of time to gain possession of virtual items; there are many more virtual items than any one user can earn by simple time investment; and there exist possibilities of exchanging these items with other users. Some mechanics may act as a catalyst for economic phenomena: for example, the mechanics of a virtual world are often designed so that specialization is possible, encouraged, or compulsory. Despite this, a virtual world is not what justifies the existence of a virtual economy, and neither are designed mechanics such as the sale of virtual goods by the operator. The users' employment of inputs to create outputs, the resulting forms of virtual property and services, and their exchange are what invoke the economy.

Economic Agents

The economic agents of a virtual economy can be divided into two classes. In all virtual economies there are characters (avatars) that are the users' representations in the economy. Often there are also NPCs (non-player characters, e.g. Bartle, 2003, p. 287), characters operated by computer code, some of which take part in the economic transactions in the world (see e.g. Simpson, 1999). Typical examples are NPCs that partake in market transactions with the users. There is a fundamental economic difference between the user characters and the NPCs. The former operate according to a budget constraint, whereas the latter do not (at least not necessarily) – for example, an NPC shopkeeper does not necessarily make profits (e.g. Simpson, 1999). Instead, NPCs supply and demand goods at fixed prices, creating and erasing goods and currency – or the relevant database entries – based upon need.
The presence of the NPCs gives rise to what have been called (Simpson, 1999) two overlapping economies: the player economy and the NPC economy. The importance of the NPCs varies: they are almost nonexistent in some virtual economies, while in others they are a major purchaser or supplier of some goods.

Production of New Virtual Goods

In terms of production, virtual worlds can be roughly divided into three types. The first type is characterized by a lack of production of new virtual goods. Habbo, a social world targeted at teenagers, is one example: virtual goods enter circulation as the users purchase dedicated virtual currency from the operator and then use this currency to buy virtual goods, again from the operator. Instead of expending inputs to produce output, the users spend income from other sources on virtual goods. In the second type, predetermined virtual goods can be produced via predetermined production paths. The operator designs the goods and the production paths. This is probably the largest virtual world type; most massively multiplayer online games, for example World of Warcraft, fall into this category. In the third type, the users can create genuinely new kinds of virtual goods and make technical innovations regarding production paths. The non-game virtual world Second Life, in which the users can use their graphic design




and coding skills to design and implement new virtual goods, is one example (e.g. Ondrejka, 2004, pp. 4-5). The focus of this article is on virtual worlds of the second type. Two methods of production of new virtual goods can be identified for such virtual worlds (Simpson, 1999). The first method mimics production processes of physical goods: the users produce raw materials and refine them into final goods, possibly through multiple stages of intermediate production and via the inputs of multiple, specialized users. The raw material production typically mimics some real-world raw material production process, such as the mining of ore. The second method has no direct analogy in physical good production. As an example, a user locates an NPC monster and attacks it. If the user is successful, there is a possibility that a new good appears, as if 'dropped' by the NPC. The process can be thought of as somewhat resembling hunting, except that the proceeds of the hunt can be final goods (see Simpson, 1999).

An Existing Measure of Aggregate Production

The only previous transparent aggregate production measure for a virtual economy has been computed for an online game called EverQuest (Castronova, 2001). This measure has been quoted widely, and it is therefore worthwhile to investigate it thoroughly. Due to the unavailability of direct production and expenditure data, Castronova measured aggregate production based on data from a survey and from the secondary real-money trade market. It is useful to break down the underlying method as follows: Each character in EverQuest has an indication of advancement, a level, associated with it. Castronova used USD prices of characters, gathered from an Internet auction site, to form a price for one character level (p) by regressing observed price against level.
Based on survey responses, he determined the number of hours (h_l) a user spends, on average, to gain a level by regressing gained levels on the amount of time used. Finally, the average number of concurrent users in EverQuest (N) and the number of hours in a year (h_a) yield an estimate of the total USD value created in a year (here called V) in EverQuest:

$$ V = N \frac{p h_a}{h_l} \qquad (1) $$
Dividing V by the average number of concurrent users (N), Castronova ends up with an aggregate per capita production measure, or what he calls the GNP per capita of EverQuest. He compares the per capita figure to the per capita GNPs of real-world economies and argues, perhaps half-seriously, that the virtual world of EverQuest is the 77th richest country in the world (Castronova, 2001, pp. 41-42). If the last step, division by the concurrent users N, is carried out on the above equation, the per capita measure can be written as

$$ \frac{V}{N} = \frac{p h_a}{h_l} \qquad (2) $$
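As a sketch, the estimator in Equations 1 and 2 can be written directly in code. All numeric values below are hypothetical placeholders chosen for illustration, not the original EverQuest figures.

```python
# Castronova-style aggregate value estimate (Equations 1 and 2).
# All parameter values below are hypothetical, not the EverQuest data.

def annual_value(p, h_a, h_l, N):
    """Total USD value created per year: V = N * p * h_a / h_l."""
    return N * p * h_a / h_l

def per_capita_value(p, h_a, h_l):
    """Per capita value: V / N = p * h_a / h_l."""
    return p * h_a / h_l

p = 13.0        # hypothetical USD price of one character level
h_l = 50.0      # hypothetical average hours needed to gain one level
h_a = 365 * 24  # hours in a year
N = 60_000      # hypothetical average number of concurrent users

V = annual_value(p, h_a, h_l, N)
```

Note how the per capita figure depends only on p, h_a and h_l – the property criticized below.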




Hence, Castronova's aggregate production per capita is a function of only two variables: the price of one character level and the number of hours used per level. Each user is implicitly assumed to stay online around the clock, producing h_a/h_l levels, each of value p, per year. In contrast, the per capita measure of GNP is GNP divided by the total population (e.g. Begg et al., 2003, p. 285). Reports on 'virtual world GNP' numbers tend to, unsurprisingly given the comparison to the GNP per capita of national economies, first present the above per capita value and then discuss the total number of users (e.g. BBC, 2002). This implicitly inflates the total monetary value. Even if the per capita methods were comparable, are national economies the best standard of comparison for virtual economies? Such a comparison is a statement of similar value as the statement that the output of a firm is equivalent to some share of the GNP of some national economy – that is, it has value as a provider of context. A measure based on external prices is not reliable for any comparative analysis, be it within the economy or between economies. These prices can fluctuate irrespective of what happens inside the virtual economy: for example, the operator may revise its policy towards real-money trading, making selling riskier. Ceteris paribus, the supply curve shifts upward and prices rise accordingly. If aggregate production inside the economy, in real terms, remains constant, its external value rises. With the available public data, Castronova's measure may be a good proxy for the USD value creation inside EverQuest. It should not be called GNP, however, and it should not be compared to the GNP values of national economies as if they measured something directly comparable. Due to its method of computation, the measure also cannot effectively be used by the operator for monitoring purposes. A better approach, and one that can readily be taken by the operator, is to rely on the principles of national accounting.
As will be shown below, production and expenditure data collected inside the virtual economy can be employed towards this end.

A New Measure of Aggregate Production

A Simplified Flow Diagram

Let us first consider only the manufacturing activities of the users. In this simplified situation, the production activities can be presented as the circular flow (cf. Begg et al., 2003, p. 275) in Figure 1. Instead of the standard sector division (households, firms, and government), the activities of the agents in the economy are separated into two roles: an agent can act either as a producer or as a consumer. A producer is not necessarily associated with any firm that pays wages to its employees: instead, each agent in the economy has a dual role. Each of them may act at one point in time as a producer, and at another point as a consumer. An agent may produce items to be sold on the market, or produce items for her own use. In national accounting, the latter is called own-account production (United Nations [UN], 2003, p. 24). Like market production, own-account production is valued at market prices or, if market prices are not available, using production costs (United Nations [UN], 2001). In a virtual economy, the share of own-account production is potentially large – a large share of total income is paid in kind by the producer role to the consumer role. The agents may be thought of as entrepreneurs who produce items both for the market and for their own consumption.




Figure 1 Simplified flows of expenditure and income. The users’ activities have been divided into two roles.

In Figure 1, the upper arc of the flow diagram represents total expenditure, and the lower arc represents total income. The producers produce the final goods and services according to the total expenditure, consisting of final consumption C and investments I. The interpretations of C, I and savings S are standard (see e.g. UN, 2003, p. 25). The aggregate production in this economy would equal the value of C + I in some period of time. Since there are no flows out of the system, aggregate production can be measured either as aggregate factor income or as aggregate expenditure. There is also the alternative form of production, introduced above as the drop method. Introducing this new means of production does not require adding sectors or flows to the flow diagram, but it may make differentiating between consumption C and investments I difficult. The outwardly same use of a good can often be categorized as either consumption or production. For example, when users purchase weaponry, they may use their purchases for consumption by attacking other users. Alternatively, they may attack NPCs and produce new goods; in this case their purchase was an investment.

The Environment

The flow in Figure 1 is, naturally, an oversimplification. The users are usually not the only agents that participate in the flows of expenditure and income. There is also a sector that represents the operator of the virtual economy. I shall refer to this sector as the Environment. The Environment is a metaphorical entity that collectively represents everything that is not operated by the users. In practice, the Environment may include, for example, code-operated sellers of intermediate goods.
When users produce something by gathering raw materials, refining them into intermediate products, and finally producing a virtual final good, the value of the final good represents all value added through the production process and all incomes received by the participants of the production process (Figure 1). If an intermediate good is purchased from the Environment, its value is still reflected in the value of the final good, but there is no corresponding income received by any consumer. The Environment sector, then, affects the manufacturing flows as presented in Figure 2. The purchases of intermediate and investment goods (E_m) from the Environment leak out of circulation. The most closely fitting analogue for



the Environment in SNA is the foreign sector. In national accounting, intermediate goods bought from the foreign sector are subtracted from the total expenditure, as they are not associated with corresponding factor incomes inside the national economy. The treatment of E_m should be similar to that of imports: subtract the value of all intermediate goods purchased from the Environment.

Figure 2 Expenditure and income with part of intermediate and investment goods (Em) flowing out of circulation.

In a national economy, a relevant borderline can be drawn between domestic and foreign production (or receiving of income). The former is included in GDP, whereas the latter is not, explaining the 'domestic' part of GDP. This distinction is clearly not relevant for a virtual economy. When measuring the users' production activity in a virtual economy, a relevant borderline exists between production by the users and the creation of goods by the service in which the virtual economy takes place. I will, from now on, call the measure of aggregate production in a virtual economy the Gross User Product, or GUP, of the economy.

Further Roles of the Environment

Purchaser and seller of goods. The role of intermediate and investment goods purchased from the Environment was discussed above. The reciprocal role, sales of these goods to the Environment, is clear: they represent income to the users, but do not show up in the prices of final goods. Their value should, then, be added to the GUP, giving rise to a new item E_x. When the Environment sells final consumption goods, their value represents consumption expenditure flowing out of the system, similarly to consumption good imports in SNA. When the Environment purchases user-produced final goods, they should be treated similarly to goods exports in SNA – they give rise to factor incomes to users but do not show up in either consumption or investment expenditure. All final good purchases from the Environment are, then, included in the term E_m and all final good sales to the Environment in the term E_x. Using these principles, the treatment of various kinds of goods purchased from and sold to the Environment can be decided on. The foreign trade analogy suits production of goods for the purpose of selling to the Environment.
It also suits goods that may be neither consumed nor produced inside the economy, but can be purchased from the Environment, transported, and sold back to the Environment at a premium – much like goods that are transited through one



country to the next. The transit gives rise to factor incomes, and the added value of such transit should be reflected in GUP.

Purchaser and seller of services. One of the economic roles of the Environment is to purchase services from the users. The Environment may, for example, provide tasks for the users to complete. The payouts of these tasks should be included in GUP: the users use time and other inputs to produce a service, which they sell to the Environment. The payouts flow to the users as incomes, but the flows do not originate inside the economy. The effect is similar to exports in national accounting, and their value should be included in E_x. The Environment may also sell services. For example, the users may be able to hire an NPC to sell their virtual goods (Simpson, 1999). The services may be either final services, in which case a part of final consumption expenditure flows out of circulation, or intermediate services, in which case they drive a wedge between expenditure and income. Both types of services have an effect similar to imports in SNA, and should be included in E_m. Sometimes these services may be presented in the guise of taxes.

Collector of taxes. In addition to its many foreign-sector-like roles, the Environment also performs actions that bear resemblance to the actions of a public sector. The users often pay compulsory fees that resemble taxes. For example, each transaction on the market or each manufacturing event may be subject to a fixed or proportional tax. There are other similar compulsory payments, though they may not always be called taxes. In the national accounting context there is, however, an important difference between these tax-like payments and taxes collected by the public sector in a real-world economy. The Environment sector does not usually operate subject to a budget constraint (Bartle, 2003, pp. 265-266), and therefore it does not actually redistribute income by means of taxes and subsidies.
The tax-like payments made to the Environment sector trickle out of the macroeconomic flow of expenditures and incomes. Their role is actually just this: they remove money from circulation, enabling control over the money supply. A government sector is therefore absent from the macroeconomic flows depicted in this study. Instead, in the accounting sense, taxes and similar payments resemble imports of services more than anything else. In one important sense, though, taxes in virtual worlds do conceptually resemble real-world taxes: a tax payable by the producer on sold goods drives a wedge between the prices paid by the consumers and those received by the producers.

The Gross User Product

The roles of the Environment are summarized in the macroeconomic flows in Figure 3. The new flows include the value of goods and services purchased by the Environment (E_x); net non-investment payments, such as market taxes, that the producers make to the Environment (E_p); and net tax-like payments the consumers make to the Environment (E_c). The flow E_m is augmented with the components described above.




In the flow chart of Figure 3, GUP can be viewed as the sum of the user expenditures depicted on the upper arc. Summing the expenditure components yields

$$ GUP = C + I + E_x - E_m \qquad (3) $$

Figure 3 Complete flows of expenditure and income in a virtual economy.

This, on an abstract level, is the basis for GUP measurement. GUP consists of expenditures on consumption and investment, and net sales of goods and services to the Environment sector. The equation resembles the standard expenditure equation describing the GDP components (e.g. Stiglitz & Driffill, 2000, p. 397). The conditions of a specific virtual economy obviously affect what, exactly, the terms in the GUP equation should include. These conditions also affect the valuation principles: market taxes and other fees can induce discrepancies between producer and consumer prices, and the exact mechanisms of trading may affect the determination of market prices. I present an example of practical GUP measurement in the following section.

Measuring Gross User Product

I use EVE Online (EVE) as an example virtual economy for which GUP is computed. EVE, an online game with a science fiction setting, is run by CCP Games, based in Iceland. It had about 220,000 paying users at the end of the year I consider, 2007 (Guðmundsson & Halldórsson, 2008, p. 4). It is targeted at a mature audience, the average player age being 27 years (Lehdonvirta, 2006). The Western market is covered by a single instance of the virtual world of EVE. All users are, then, agents in the same economy. This makes the economy large in comparison with most virtual worlds: though there are games with vastly more users, their user bases are typically distributed among several instances of the game world, so that only a few thousand users can potentially interact with each other in any one instance. The users of EVE pay a subscription fee of around 15 € per month to gain access to the game. The operator does not directly sell virtual property, but an unsanctioned secondary market exists, for example, on several dedicated Internet marketplaces.




I employ market and production data from a database collected by CCP Games and made available to researchers through a collaboration agreement (see HIIT, 2007). Relevant to the topic at hand, the database consists of, in principle, a complete set of production and user-to-user and user-to-Environment transaction data from the EVE economy for the period from January to June 2007.

Production and Valuation

According to its story, the virtual world of EVE is set in the distant future. The forms of virtual property include spaceships, high-tech equipment, and exotic minerals and metals. Production of new virtual goods in EVE happens through both the manufacturing and the drop production processes. Many, though not all, final goods can be produced from intermediate goods by the users. Production happens via predetermined production paths. The drop-type production takes place as the users receive virtual goods that appear upon destroying various kinds of NPCs. The users also produce services, for example by completing scripted tasks called missions, or by locating and destroying NPCs for a reward. A large part of trading in EVE happens using a built-in market feature. The 'market' is an exchange where users list buy and sell offers. The goods are priced in the virtual currency of EVE, called ISK. Perfect information on the goods is available, and the platform offers a trusted way of completing transactions. In SNA, market prices are used for the valuation of goods (UN, 2001). I use the periodically averaged prices agreed upon using the market feature as market prices. There are also other ways of completing transactions: users can barter and set up auctions. Prices agreed upon using these features may be ambiguous, as many goods can be bundled together. They are also more likely to be economically non-significant, that is, production and purchase decisions are based on arrangements other than the price.
Transactions using the market feature are subject to a varying proportional market tax payable by the party listing the buy or sell order. The market prices, then, have properties of both producers' and purchasers' prices. Despite the discrepancy between what the purchaser pays and what the seller receives, the market prices shall be defined as the prices observed on the market, inclusive of taxes. To correct some of the error introduced by this definition, the taxes actually paid are excluded from GUP. These taxes most closely correspond to the term E_p in Figure 3. This yields Equation 4 as the GUP measurement basis:

$$ GUP = C + I + E_x - E_m - E_p \qquad (4) $$
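As a minimal sketch, the accounting identity in Equation 4 translates directly into code. The function name and example figures are mine; C and I are passed as a single sum because, as discussed later, they cannot be separated in EVE.

```python
# GUP accounting identity (Equation 4). Component names follow Figure 3;
# C and I are passed as one sum because they are not separable in practice.

def gross_user_product(c_plus_i, e_x, e_m, e_p):
    """GUP = (C + I) + E_x - E_m - E_p, all in the same currency unit."""
    return c_plus_i + e_x - e_m - e_p

# Illustrative figures (trillion ISK): 29.5 of final expenditure, 11.0 of
# sales to the Environment, 5.4 of purchases from it, 0.4 of market taxes.
gup = gross_user_product(29.5, 11.0, 5.4, 0.4)  # ≈ 34.7
```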

A source of error is introduced here: the market tax is collected regardless of whether the exchanged good is a freshly produced good or a second-hand good. The market tax is small, however, and the errors introduced are insignificant in practice (see Table 1). Market prices are not always available for seldom-exchanged goods. The SNA approach to valuing such production is to use production costs as the second-best alternative (UN, 2001). Production costs are defined as the sum of intermediate consumption, compensation of employees, consumption of fixed capital, and net taxes on production (UN, 2003, pp. 21-22). In this analysis, the production costs include intermediate goods and partially used fixed capital goods, valued at market prices. Compensation to other factors of production is not considered.




Measurement Practices

As discussed above, investment and consumption expenditures may be impossible to separate reliably. This is the case in EVE Online, and a breakdown of final expenditures into investment and consumption shall not be attempted in this analysis. The approach to measuring GUP that I employ is based on a combination of the production approach and the expenditure and income approaches (UN, 2003, p. 5). The value of produced final goods, including consumption and investment goods, is measured by observing production events. The products are valued using market prices. This also includes the values of input intermediate goods. Then, referring to Equation 4, the remaining items are identified and their contributions to GUP are measured using expenditure and income data. The value of manufactured goods is measured by observing the manufacturing events and using the valuation convention outlined above. The goods produced by the drop method are measured by a probabilistic approach, based on the number of destroyed NPCs and the probabilities of the appearance of different goods. Measuring the value of the newly produced final goods ends up including user consumption of user-produced final goods, user-produced fixed investments, and user-produced final consumption and investment goods sold to the Environment. These items should be included in GUP, with two reservations. First, some final goods must be produced from other final goods. The final goods used up in the production process are subtracted from the total value of produced final goods. Second, the value of investment goods purchased from the Environment is included in the prices of final goods. Intermediate goods are easy to identify but, as discussed previously, it is difficult to separate investment from consumption. I considered a subset of purchases from the Environment as likely to be made to add production capacity and regarded them as fixed investments.
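The probabilistic measurement of drop-type production described above can be sketched as follows: the expected value sums, over NPC types, the number destroyed times the drop probability and market price of each good. The drop tables, prices, and counts below are invented for illustration.

```python
# Expected value of drop-type production in a period: for each NPC type,
# (number destroyed) * (drop probability of a good) * (market price of the
# good), summed over all goods and NPC types. All data below is invented.

def drop_production_value(destroyed, drop_tables, prices):
    """destroyed:   {npc_type: count destroyed during the period}
    drop_tables: {npc_type: {good: drop probability}}
    prices:      {good: market price in ISK}"""
    total = 0.0
    for npc, count in destroyed.items():
        for good, prob in drop_tables.get(npc, {}).items():
            total += count * prob * prices[good]
    return total

destroyed = {"pirate_frigate": 1000}
drop_tables = {"pirate_frigate": {"salvage": 0.5, "rare_module": 0.01}}
prices = {"salvage": 100.0, "rare_module": 50_000.0}
value = drop_production_value(destroyed, drop_tables, prices)  # ≈ 550,000 ISK
```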
There is a subjective element in this classification, and some error is likely introduced. When the user-produced final goods have been included and the above corrections made, the next step is to compare the outcome of this production approach to Equation 4. The value of the produced items makes up a part of each of the items C, I, E_x, and E_m, but does not complete GUP. To complete it, some items need to be added: the value of intermediate goods sold to the Environment, the total value of services sold to the Environment, and the net value of transit goods sold to the Environment. All three are measured using expenditure data. Finally, the paid market taxes (the term E_p) are deducted from the resulting figure to arrive at GUP as defined in Equation 4. Two categories of production should be, but are not, included: the services users sell to other users, and increases in the stocks of intermediate goods. Both are left out due to practical difficulties – that is, the unavailability of data. This obviously introduces some error into the GUP measurement.

GUP at Current Prices

The available data allow for GUP computations for the period between the beginning of January 2007 and the end of June 2007. The GUP of June 2007, and its main components, is presented in Table 1. The total GUP of that month is 3.47·10^13 ISK, or 34.7 trillion ISK.



Table 1 Breakdown of GUP in June 2007.

Component                      Value^a   Contribution^b
Net manufactured final goods     18.6        +
Final goods dropped by NPCs      10.9        +
Services sold to Environment     11.0        +
Net goods from Environment        5.4        −
Market taxes to Environment       0.4        −
GUP total                        34.7

^a value in trillion ISK
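As a consistency check, the signed Table 1 components can be summed back to the GUP total (values in trillion ISK):

```python
# Recomputing the GUP total from the signed Table 1 components (trillion ISK).
components = {
    "net manufactured final goods": +18.6,
    "final goods dropped by NPCs":  +10.9,
    "services sold to Environment": +11.0,
    "net goods from Environment":    -5.4,
    "market taxes to Environment":   -0.4,
}
gup_total = round(sum(components.values()), 1)  # 34.7
```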

It is evident that the main contribution to GUP comes from produced final goods (via both production methods). The net good 'imports' from the Environment mainly comprise investment goods purchased from the Environment, intermediate goods both purchased from and sold to the Environment, and the added value of transit goods. The selected month is representative: the changes in the component shares have remained within three percentage points over the six-month period. The error induced by the subjectivity regarding the inclusion of items in the fixed investment category is relatively small: assuming the value of fixed investments from the Environment increased or decreased by 20 %, the GUP value of June 2007 would decrease or increase (ceteris paribus) by around 4 %, respectively.

The main problem with the GUP presented in Table 1 is that it lacks context. Is 34.7 trillion ISK a large or a small number? How has GUP evolved over the past months? There are two ways to partially answer the first of these questions. The first is to measure the total value of all goods in the users' inventories, excluding currency. At the end of June 2007 this value was, using current prices and the valuation principles employed in the GUP measurement, roughly 1.74·10^15 ISK. The GUP of period 6 is about 2 % of this total figure. The accumulated wealth in the economy is significant, especially considering that GUP represents production in gross terms. The other way is to convert the GUP of June 2007 to US dollars. Unfortunately, reliable statistics on exchange rates from ISK to USD are not available. An idea of the order of magnitude of the exchange rate can be gained by looking at the lowest sell offers listed on one publicly available real-money trading platform1 at the time. These varied between USD 50.00 and 53.18 for one billion ISK during June 2007.
The USD value of GUP in that month, using the lowest exchange rate, would be around 1.74 million. The number of users (total, not concurrent) in EVE at the time was 172,000 and, therefore, the monthly GUP per user was around USD 10. The potential error introduced by the exchange rate uncertainty cannot be overstressed.
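The order-of-magnitude conversion above can be verified with simple arithmetic, using the lowest observed offer of USD 50.00 per billion ISK:

```python
# Converting June 2007 GUP to USD at the lowest observed secondary-market rate.
gup_isk = 34.7e12            # GUP in ISK (34.7 trillion)
usd_per_billion_isk = 50.00  # lowest observed sell offer, USD per 1e9 ISK
total_users = 172_000        # paying users in June 2007

gup_usd = gup_isk / 1e9 * usd_per_billion_isk  # ≈ 1.74 million USD
gup_usd_per_user = gup_usd / total_users       # ≈ 10 USD per user
```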

1 Sparter.com, a site that is not operational anymore. The price data was collected from the website during June 2007. These prices represent listed sell offers, not necessarily realized transactions. Additionally, all completed transactions were subject to a 20 % brokerage fee.




The second problem calls for further analysis. Monthly GUP numbers as such are not comparable: if the overall price level has changed, nominal GUP may change differently from real GUP, and the changes in aggregate production are veiled by the price changes.

Deflated GUP and Economic Growth

To allow for growth considerations, GUP has to be purged of price changes. One of the most closely watched price statistics (Wynne & Sigalla, 1994, p. 1), the consumer price index (CPI), is an immediate suspect. A CPI is not readily available for the EVE economy. Nor is CPI an optimal measure: the patterns of user expenditure can change very fast in the EVE economy, much more so than in a national economy. For example, when the operator introduces an update including new goods or new production paths, it can have a significant effect on the users' expenditures, essentially overnight. CPI is based on samples – for practical reasons – and is unable to react quickly to shifts in expenditures (e.g. Wynne & Sigalla, 1994, pp. 4, 13). The index chosen here has to be able to accommodate substitution effects in consumption due to the introduction of new goods, yet be easily calculable and interpretable. One index that fulfills both requirements is a chained Fisher index F_C (Forsyth & Fowler, 1981, p. 228). The index at period t, F_C^t, is based on the prices p and quantities q of exchanged goods in periods t and t-1. Technically, the Fisher index is the geometric mean of two indices, the Laspeyres and Paasche indices (ibid.), at the period under consideration:

$$ F_C^t = \sqrt{ \frac{\sum_i p_i^t q_i^{t-1}}{\sum_i p_i^{t-1} q_i^{t-1}} \cdot \frac{\sum_j p_j^t q_j^t}{\sum_j p_j^{t-1} q_j^t} } \qquad (5) $$
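A sketch of Equation 5 in code, assuming price and quantity dictionaries per period; goods missing from either period are dropped from the basket, which is one simple way of handling goods that enter or exit the market:

```python
import math

# Fisher price index between periods t-1 and t (Equation 5): the geometric
# mean of the Laspeyres and Paasche indices. The p_* and q_* arguments map
# goods to prices and exchanged quantities in the two periods.

def fisher_index(p_prev, q_prev, p_cur, q_cur):
    goods = set(p_prev) & set(q_prev) & set(p_cur) & set(q_cur)
    laspeyres = (sum(p_cur[g] * q_prev[g] for g in goods) /
                 sum(p_prev[g] * q_prev[g] for g in goods))
    paasche = (sum(p_cur[g] * q_cur[g] for g in goods) /
               sum(p_prev[g] * q_cur[g] for g in goods))
    return math.sqrt(laspeyres * paasche)

# A uniform 6 % price fall with unchanged quantities gives an index of 0.94.
p1 = {"ore": 100.0, "ship": 2000.0}
p2 = {g: 0.94 * v for g, v in p1.items()}
q = {"ore": 50.0, "ship": 3.0}
```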

The chained Fisher index recognizes that quantities may change during the interval between two successive periods. The prices and quantities considered here are those of exchanged final goods. The underlying logic is that, as the utility arising from consumer goods and services defines the prosperity of an economy, the prices of these products should be the basis of measuring inflation (Bryan & Pike, 1991). As discussed above, the users are not only consumers of final goods – they use the very same goods for production. Therefore, some purchases affecting the bundle in the index will actually be investments instead of consumption. The monthly inflation in the EVE economy, F_C^t for t ∈ 1...6, is presented in Figure 4. During the investigated six-month period, inflation has actually been negative – the economy has experienced deflation, and the deflation has become increasingly fast. A constant monthly deflation of 6 %, for example, translates into a yearly deflation of more than 50 %. The reasons underlying the deflation can be numerous, including the usual suspect of real output increasing faster than the stock of money (e.g. Schwartz, 1973, p. 264).
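The compounding claim can be checked directly: with the price level falling 6 % each month for 12 months, less than half of the original price level remains.

```python
# 6 % monthly deflation compounded over a year: the remaining price level
# is 0.94**12 of the starting level, i.e. over 50 % yearly deflation.
yearly_price_level = 0.94 ** 12  # ≈ 0.476
```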



Journal of Virtual Worlds Research- Measuring Aggregate Production 17

Figure 4 Monthly inflation rates in the EVE economy. The observations span the period between January 2007 (period 1) and June 2007 (period 6).

It is now possible to return to the analysis of the GUP values computed in the previous section. The periodical GUP at current prices, GUP_cur^t, can be deflated to the first period, GUP_def^t, using the principle of chained multiplication of the chained Fischer indices (cf. Forsyth & Fowler, 1981, p. 228):

GUP_def^t = GUP_cur^t / \prod_{j=2}^{t} FC_j    (6)

The deflated GUP from January 2007 to June 2007 is presented in Figure 5. In that figure, the actual values of GUP are replaced by an index comparing the GUP values to the value in the first period. Before indexing, all values of GUP are deflated to period 1 – that is, to January 2007 – using Equation 6. The effect of varying month length has also been purged.

The index of GUP does not, however, take into account the increase in the number of users. The user base, measured by the number of paying users, has grown about 12 % during the investigated six-month period: from about 154,000 in January 2007 to about 172,000 in June 2007. As the user base of EVE Online has increased during the investigated period, some part of the GUP growth comes from the increased number of users and the rest from increased production efficiency. The deflated monthly GUP purged of the effect of the increased user base is also presented as an index in Figure 5. The method assumes a temporally constant input time per user.

During the half-year period the monthly GUP has increased around 70 % and the monthly GUP per user over 50 %. The monthly growth rate of GUP per user has varied through the investigated period, being about 1 % from January to February and reaching over 13 % from March to April. The latter figure is remarkably high: it corresponds to a yearly growth of over 300 %, assuming a constant growth rate. The increase in production efficiency, measured in terms of average GUP produced per user, has been significant.
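The deflation and indexing steps described above can be sketched as follows. The nominal GUP series, Fischer indices, and user counts below are hypothetical placeholders, chosen only so that the resulting totals roughly resemble the growth figures reported in the text; they are not the actual EVE Online data:

```python
# Deflate nominal monthly GUP to period-1 prices (Equation 6), then index
# the series, in total and per user, with period 1 = 100. All numbers are
# hypothetical placeholders, not the actual EVE Online figures.

nominal_gup = [100.0, 99.0, 105.0, 112.0, 120.0, 129.0]  # periods 1..6
fc = [None, 0.97, 0.95, 0.94, 0.93, 0.94]                # FC_t for t = 2..6
users = [154_000, 157_000, 160_000, 164_000, 168_000, 172_000]

deflated = [nominal_gup[0]]
chain = 1.0
for t in range(1, len(nominal_gup)):
    chain *= fc[t]                      # cumulative product of FC_j, j = 2..t
    deflated.append(nominal_gup[t] / chain)

index_total = [100.0 * g / deflated[0] for g in deflated]
index_per_user = [100.0 * (g / u) / (deflated[0] / users[0])
                  for g, u in zip(deflated, users)]
print([round(x, 1) for x in index_total])     # ends near 170: about 70 % growth
print([round(x, 1) for x in index_per_user])  # ends above 150: about 50 % growth
```

Because the Fischer indices are below one (deflation), dividing by their cumulative product raises the real values relative to the nominal series; dividing again by the user count isolates the efficiency component of the growth.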


Figure 5 Deflated monthly GUP and deflated monthly GUP per user between January 2007 (period 1, index value = 100) and June 2007 (period 6).

Conclusions and Discussion

I have shown that the principles of SNA can be used to form an accounting scheme for measuring the aggregate production activities of the users in a virtual economy. The main difference between national accounting and virtual-economy accounting lies in the definition of what counts as relevant production. Gross domestic product, as defined in SNA, is intended for measuring the production that takes place in some geographic location: the borderline is drawn between domestic and foreign production. In a virtual economy, the physical 'where' of a production event is meaningless, and the 'domestic' metaphor is not usable. The main finding of this study is that the borderline has to be drawn between the users and the service they use, the Environment. The name of the SNA-based aggregate production measure, the Gross User Product, emphasizes the relevance of the users as producers, as opposed to the creation of new goods by the Environment. Once this distinction is made, it is straightforward to compute the GUP: SNA is followed at all stages, but instead of the value added in the domestic sector, the value added by the users is included.

The applicability of this method was demonstrated for the case of EVE Online using transaction and production data provided by the operator firm. A chained Fischer index was employed to purge the GUP measure of the effects of deflation. The ability to centrally log the necessary data for these computations is a distinguishing feature of virtual economies. The real GUP growth rates, purged of the effect of the increased number of subscriptions, show that the users' production efficiency has increased. From the operator's point of view, this increase can mean that they have to introduce new virtual goods and production paths to compensate for the decreasing required effort level.
Production efficiency also offers one explanation for the observed decrease in the overall price level, or deflation: as productivity increases, prices can be expected to fall in the long run. GUP performs well against the requirements for an aggregate production measure laid out in the first section of this article: it measures the production activities of the users; it can be used



for virtual-economy-specific comparative analysis by the operator; it can be used for monitoring and predicting design changes; and it can be used for comparative analysis between virtual economies. The measured GUP shows that these requirements can be fulfilled in practice as well. The six observations of monthly GUP, and the five observations of GUP growth that can be derived from them, form a very short time series that does not allow for, say, time-series modeling of economic growth. This is a practical limitation, however, and nothing that an operator could not overcome.

The previous publicly available aggregate production measure, Castronova's "GNP", was oriented at approximating the US dollar value of production in a virtual economy. As was shown, it does not perform well against the requirements outlined for an aggregate production measure in this article. This is particularly true for the requirements connected with economic monitoring, prediction and comparisons. In addition, some drawbacks in Castronova's method of producing per capita numbers were indicated. As tempting as it may be, comparing numbers produced by Castronova's method and the GUP method is practically meaningless. This would be true even if the per capita methods were comparable. Comparisons using the USD value of production in virtual economies end up, for a large part, comparing the state of the real-money trading markets, and not only the state of the virtual economies.

The focus of this study was on virtual economies where production of predetermined new goods happens along predetermined production paths. The principles used for GUP computation in this study are directly applicable to other virtual economies with production methods comparable to the ones considered here. Such virtual economies include the majority of large game-like virtual worlds in the Western market.
Virtual economies with different production methods were not considered, but there is no reason why the same principles could not be used. Users will be the relevant producers, and the Environment will have a role similar to the foreign sector in national accounting. The extent of the role of the Environment is obviously different in different economies. Reliable comparative macroeconomic analysis between virtual worlds is not currently possible. If GUP was used as a standard measure of aggregate production, not only would the possibilities of internal economic modeling be enhanced, but cross-economy analysis of production performance would be enabled. GUP, then, opens possibilities for standardized analysis that have not been available before.


Bibliography

Bartle, R. A. (2003). Designing Virtual Worlds. Boston: New Riders.

BBC (2002). Virtual Kingdom Richer Than Bulgaria, BBC News, March 29. Online: http://news.bbc.co.uk/1/hi/sci/tech/1899420.stm

Begg, D., Fischer, S., and Dornbusch, R. (2003). Economics, 7th edition. London: McGraw-Hill.

Burke, T. (2002). Rubicite Breastplate Priced to Move, Cheap: How Virtual Economies Become Real Simulations. Unpublished manuscript. Online: http://www.swarthmore.edu/SocSci/tburke1/Rubicite%20Breastplate.pdf

Bryan, M. F., and Pike, C. J. (1991). Median Price Changes: An Alternative Approach to Measuring Current Monetary Inflation, Economic Commentary. Federal Reserve Bank of Cleveland, December.

Castronova, E. (2001). Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier, CESifo Working Paper Series, No. 618.

Castronova, E. (2002). On Virtual Economies, CESifo Working Paper Series, No. 752.

Castronova, E. (2004). The Price of Bodies: A Hedonic Pricing Model of Avatar Attributes in a Synthetic World, Kyklos, Vol. 57, No. 2, 173–196.

Chesney, T., Chuah, S.-H., and Hoffman, R. (2007). Virtual World Experimentation: An Exploratory Study, Occasional Paper Series, No. 2007-21, Nottingham University Business School. Online: http://ideas.repec.org/p/nub/occpap/21.html

Forsyth, F. G., and Fowler, R. F. (1981). The Theory and Practice of Chain Price Index Numbers, Journal of the Royal Statistical Society, Series A, Vol. 144, No. 2, 224–246.

Guðmundsson, E., and Halldórsson, K. Þ. (2008). Quarterly Economic Newsletter, 4th Quarter 2007, CCP Games. Online: http://ccp.vo.llnwd.net/o2/pdf/QEN_Q4-2007.pdf

HIIT (2007). Virtual economy research project starting at Helsinki Institute for Information Technology, Helsinki Institute for Information Technology press release, December 12. Online: http://www.virtual-economy.org/blog/hiit_starts_new_research_proje

Lehdonvirta, V. (2006). Interview With CCP: EVE Currency Traders 'Going to Lose Big'?, Virtual Economy Research Network, October 2. Online: http://virtualeconomy.org/blog/interview_with_ccp_eve_currenc

Lehtiniemi, T. (2007). How Big Is the RMT Market Anyway?, Virtual Economy Research Network, March 2. Online: http://virtualeconomy.org/blog/how_big_is_the_rmt_market_anyw

Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.

Nicklisch, A., and Salz, T. (2008). Reciprocity and Status in a Virtual Field Experiment, Preprints of the Max Planck Institute for Research on Collective Goods, 2008/37, Bonn.

Ondrejka, C. (2004). Aviators, Moguls, Fashionistas and Barons: Economics and Ownership in Second Life. Linden Research. Online: http://ssrn.com/abstract=614663



Schwartz, A. J. (1973). Secular Price Change in Historical Perspective, Journal of Money, Credit and Banking, Vol. 5, No. 1, 243–269. Ohio State University Press.

Simpson, Z. B. (1999). The In-Game Economics of Ultima Online, Computer Game Developers' Conference, San Jose, CA, March 2000. Online: http://www.minecontrol.com/zack/uoecon/uoecon.html

Stiglitz, J. E., and Driffill, J. (2000). Economics. New York: W. W. Norton & Company.

United Nations (2001). United Nations System of National Accounts 1993 Internet Access System. Online: http://unstats.un.org/unsd/sna1993/introduction.asp

United Nations (2003). National Accounts: A Practical Introduction, Studies in Methods, Series F, No. 85. New York: UN Department of Economic and Social Affairs, Statistics Division.

Wynne, M. A., and Sigalla, F. D. (1994). The Consumer Price Index, Economic Review, Second Quarter 1994, Federal Reserve Bank of Dallas.

Yee, N. (2006). The Demographics, Motivations and Derived Experiences of the Users of Massively Multi-User Online Graphical Environments, PRESENCE: Teleoperators and Virtual Environments, Vol. 15, No. 3, 309–329. MIT Press.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Supporting Soundscape Design in Virtual Environments with Content-based Audio Retrieval By Jordi Janer, Nathaniel Finney, Gerard Roma, Stefan Kersten, Xavier Serra Universitat Pompeu Fabra, Barcelona

Abstract

The computer-assisted design of soundscapes for virtual environments has received far less attention than the creation of graphical content. In this "think piece" we briefly introduce the principal characteristics of a framework under development that aims at the automatic sonification of virtual worlds. As a starting point, the proposed system is based on an on-line collaborative sound repository that, together with content-based audio retrieval tools, assists in the search for sounds to be associated with 3D models or scenes.

Keywords: content-based; audio retrieval; freesound; virtual worlds; soundscape.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Supporting Soundscape Design in VEs 4

Supporting Soundscape Design in Virtual Environments with Content-based Audio Retrieval By Jordi Janer, Nathaniel Finney, Gerard Roma, Stefan Kersten, Xavier Serra Universitat Pompeu Fabra, Barcelona

Virtual worlds are primarily populated with 3D models of real-world objects and spaces. While the graphical representation of virtual objects has been extensively addressed, the representation of the sounds they produce is less well supported in currently popular virtual worlds. One example of the imbalance between graphical and sonic content is Google's 3D Warehouse initiative, which serves as a repository of 3D models that can be integrated in virtual worlds. This imbalance can lead to visually appealing but sonically poor virtual worlds.

Generating the soundscape of a virtual environment is still a tedious manual process. To add sound to a virtual object, the designer needs either to find an appropriate sample in a sound effects database or, in the case of physical modelling, to adjust a large number of synthesis parameters. Instead, we propose to use a large on-line collaborative sound repository that, together with content-based audio retrieval tools, can automate the sonification of virtual worlds. Our framework, currently under development, assists in the search for sounds associated with 3D models and scenes, partly by relating text queries to social tags in the sound database, and partly by ranking search results using concepts borrowed from ecological acoustics.

Characterization of soundscapes

The design of sound in virtual environments (VEs) relies on the techniques and traditions of sound design for film and video games (Chion, 1991). Sound effects are typically created by foley artists or obtained from commercial sound effects databases. With the popularization of internet-based and socially oriented virtual environments, sound design faces new challenges and opportunities: users generate their own objects, and sounds are produced in their interaction with the virtual environment and other users.
For this process to be automatic, we need to characterize a given soundscape automatically and search for sounds that best fit that characterization. Soundscape classification can be addressed from different perspectives. A classification scheme based on the physical characteristics of the produced sound was proposed by Pierre Schaeffer (1966), which categorizes sounds using several pairs of criteria: (1) Masse, a 'fuzzier' generalization of pitch; (2) Facture, an energy envelope; (3) Durée/Variation, duration and variability; and finally the more subjective Équilibre/Originalité, which is related to the complexity of the signal.

In a work originally published in 1977, R. Murray Schafer (1994) distinguished three types of sounds within a soundscape: keynote sounds, signals and soundmarks. Schafer also proposed a classification of sounds based on the reference to the source: Natural, Human, Sounds and Society, Mechanical, Quiet and Silence, and Sounds as Indicators. More recently, Gaver (1993a, 1993b) has helped to create a solid framework for ecological acoustics. He proposed a taxonomy of environmental sound, providing specific categories for sounds according to whether they are generated by solids, liquids or aerodynamic phenomena.



Some systems have already addressed the automatic generation of soundscapes by using existing sound classifications. Using a lexical database, a system presented by Cano et al. (2004) generated a complete ambiance by combining sound snippets related to a high-level concept (e.g. "beach"). This system used a structured commercial sound FX database, but it could not be applied to virtual worlds directly, since there is no correspondence between the generated soundscape and the actual objects in the world. A recent approach to sound retrieval by Chechik et al. (2008) proposes ranking the results of text queries using content-based audio retrieval techniques. While useful for general audio search in structured and unstructured databases, this method does not take into account the specifics of sound design, and it therefore remains limited for the purpose of creating virtual world soundscapes.

Use of collaborative sound repositories

The principal contribution of the proposed system is the use of content-based audio retrieval from online collaborative sound repositories, employing concepts from ecological acoustics. Given appropriate interfaces, users, or the system itself, could rapidly find appropriate sounds for 3D models through web-based search, which would facilitate the creation of soundscapes for virtual worlds.

In terms of technology, the proposed system benefits from content-based audio retrieval algorithms. Repository sounds are labelled with user-generated tags, forming a folksonomy (Martínez et al., 2009); the result is an unstructured database. For every sound in the database, a number of acoustic descriptors are automatically extracted. Searching for a sound associated with a virtual object starts with a text query. The system uses the WordNet lexical database (Fellbaum, 1998) to semantically relate the query with the tags of the sound repository. Search results are then ranked according to an ecological acoustics taxonomy (e.g. solid, liquid, gas).
Ranks for each sound in each of the concepts in the taxonomy are obtained using automatic audio analysis and machine learning classification. In initial experiments, we used the state-of-the-art Support Vector Machine (SVM) library LIBSVM (Chang & Lin, 2001). These experiments show that, given a sufficient number of examples, a few descriptors suffice to produce reasonable results with this approach.

User-generated media might represent an important factor in the expansion of virtual environments. Most popular virtual environments allow users to create and furnish their own spaces. We argue that closed commercial sound FX databases do not fit into this model – on the one hand because of the prices and licenses associated with their use, and on the other because they cannot be augmented by users. Therefore, our system uses Freesound.org (2005) as a collaborative sound repository, which currently offers over 70,000 sound snippets under a Creative Commons license.
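The retrieval pipeline described above – query expansion, tag matching, and classifier-based ranking – can be sketched in miniature. Everything in the sketch is illustrative: the sound IDs, tags and confidence scores are invented, a small synonym table stands in for WordNet, and pre-computed confidences stand in for the output of the SVM classifier:

```python
# Toy sketch of the described pipeline: expand a text query, match it
# against user tags, then rank the hits by a classifier's confidence for
# an ecological-acoustics class. All sound IDs, tags and scores are
# invented; a small synonym table stands in for WordNet, and stored
# confidences stand in for the SVM's output.

SYNONYMS = {"water": {"water", "rain", "stream", "liquid"}}

SOUNDS = [  # (sound id, user tags, {ecological class: confidence})
    ("s1", {"rain", "roof"},   {"liquid": 0.91, "solid": 0.05}),
    ("s2", {"stream", "park"}, {"liquid": 0.78, "solid": 0.10}),
    ("s3", {"door", "wood"},   {"solid": 0.88,  "liquid": 0.03}),
]

def search(query, eco_class):
    """Return sounds whose tags overlap the expanded query, ranked by the
    classifier confidence for the requested ecological class."""
    terms = SYNONYMS.get(query, {query})
    hits = [s for s in SOUNDS if s[1] & terms]          # tag overlap
    return sorted(hits, key=lambda s: s[2].get(eco_class, 0.0), reverse=True)

print([s[0] for s in search("water", "liquid")])  # liquid-like sounds first
```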


Bibliography

Cano, P., et al. (2004). Semi-automatic ambiance generation, in Proceedings of the Conference on Digital Audio Effects, Naples, pp. 319–323.

Chang, C., and Lin, C. (2001). LIBSVM: A library for support vector machines. Retrieved June 2009, from http://www.csie.ntu.edu.tw/~cjlin/libsvm

Chechik, G., et al. (2008). Large-Scale Content-Based Audio Retrieval from Text Queries, in Proceedings of MIR'08, Vancouver.

Chion, M. (1991). L'audio-vision (son et image au cinéma). Paris: Armand Colin. English translation: Audio-Vision: Sound on Screen.

Fellbaum, C. (Ed.) (1998). WordNet: An Electronic Lexical Database. Cambridge, MA: The MIT Press (Language, Speech, and Communication series). http://wordnet.princeton.edu/

Freesound.org (2005). Retrieved June 2009, from Universitat Pompeu Fabra Web Site: http://www.freesound.org

Gaver, W. W. (1993a). What in the world do we hear? An ecological approach to auditory event perception, Ecological Psychology, Vol. 5, No. 1, pp. 1–29.

Gaver, W. W. (1993b). How do we hear in the world? Explorations of ecological acoustics, Ecological Psychology, Vol. 5, No. 4, pp. 285–313.

Martínez, E., et al. (2009). Extending the folksonomies of freesound.org using content-based audio analysis, in Proceedings of the Sound and Music Computing Conference, Porto.

Schaeffer, P. (1966). Traité des objets musicaux. Paris: Éditions du Seuil.

Schafer, R. M. (1994). Our Sonic Environment and the Soundscape: The Tuning of the World. Rochester, VT: Destiny Books.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Order and Creativity in Virtual Worlds By Evan W Osborne & Shu Z Schiller, Wright State University

Abstract

Economies are driven by dynamic creativity, but some sorts of creativity, especially if predatory, can destroy an economy. This tradeoff has been known for centuries to political philosophers analyzing physical space, but it has not been addressed in virtual space. Like physical economies, virtual economies face the tradeoff of encouraging freedom to experiment while discouraging experiments that damage society. Physical societies solve this problem both by encouraging competition and by giving government the unique power to punish destructive activities. In virtual societies, this tradeoff has yet to be adequately assessed. Guided by the economic modeling of order and creativity, in this paper we discuss two types of behavior, constructive and destructive, to provide some guidelines for establishing limitations on the freedom of action of virtual-economy participants.

Keywords: order; creativity; virtual world; economy; governance.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research - Order and Creativity in VWs 4

Order and Creativity in Virtual Worlds By Evan W Osborne & Shu Z Schiller, Wright State University

In his masterwork Leviathan, Thomas Hobbes writes:

The only way to erect such a common power, as may be able to defend them from the invasion of foreigners, and the injuries of one another, and thereby to secure them in such sort as that by their own industry and by the fruits of the earth they may nourish themselves and live contentedly, is to confer all their power and strength upon one man, or upon one assembly of men, that may reduce all their wills, by plurality of voices, unto one will: which is as much as to say, to appoint one man, or assembly of men, to bear their person; and every one to own and acknowledge himself to be author of whatsoever he that so beareth their person shall act, or cause to be acted, in those things which concern the common peace and safety; and therein to submit their wills, every one to his will, and their judgements to his judgement. This is more than consent, or concord; it is a real unity of them all in one and the same person, made by covenant of every man with every man, in such manner as if every man should say to every man: I authorise and give up my right of governing myself to this man, or to this assembly of men, on this condition; that thou give up, thy right to him, and authorise all his actions in like manner. This done, the multitude so united in one person is called a COMMONWEALTH. (Hobbes, 1972, p. 227)

The problem of how to organize social authority has preoccupied many of the greatest social thinkers in cultures around the world for thousands of years. Throughout most of this history, the creation of a new society was a matter either for abstract models of societies founded in a state of nature or a question for historians investigating societies from the past. We could look into the results of a society's founding, or we could create an abstract conception of what its founding might have been like, but we could seldom observe the creation of a new society. But thanks to modern information technology, we now can.
People around the world now routinely create self-contained societies, importing all the features of the human condition from outside – conflict, commerce, loyalty, betrayal, and more. Such societies are created on the platform of the internet, which can bring many people together regardless of their physical locations. New technologies, including the much increased bandwidth and speed of the internet and powerful computer systems, have enabled the creation of much more sophisticated online societies, including virtual worlds.

Virtual worlds can be defined as technology-created 3-D, graphically detailed, and highly interactive environments that incorporate representations of real-world elements such as human beings, landscapes, and other objects (Kock, 2008). People participate or "live" in virtual worlds in the form of their avatars, digital representations of individuals in either human or nonhuman form. As of 2007, there were more than 100 virtual worlds on the internet, taking various forms and serving different purposes (Barnes, 2007). Our study focuses on "real virtual worlds"


such as Second Life that feature the 3D3C factors (3-D, community, creation, commerce) defined by Sivan (2008),1 as opposed to gaming societies such as World of Warcraft (WoW). Commerce is a substantial component of, and catalyst for, human activities in these virtual worlds. Buying and selling in virtual currency is very common and often encouraged. For instance, in the first quarter of 2009 in Second Life, resident-to-resident transactions reached $120 million (figure from Second Life's official blog at https://blogs.secondlife.com/community/features/blog/2009/04/16/the-second-life-economy-first-quarter-2009-in-detail). In addition, many virtual worlds are peer-created communities where people can build, give away, sell, or trade items with any other resident, just as with property (intellectual or otherwise) in physical space.

But despite their technological trappings, these societies are made up of humans who bring their virtues and flaws with them. The question of how to order a virtual society is in many respects similar to its physical-world equivalent. This topic has never been explored in depth in the information-systems literature. Given the popularity of virtual worlds and their promising role in practice, it is critical to understand the mechanisms of these self-sustaining societies. We believe that the study of governance in physical space can benefit from thinking about how it occurs in virtual space, and vice versa. Taking an economic perspective and confining our attention to theory rather than empirical analysis, in this paper we focus on one particular question – that of the proper tradeoff between order and creativity. The insights provide a useful complement to Duranske (2008), who focuses on the implementation in virtual space of physical-world law, itself shaped by centuries of political theory. Here, in contrast, we are interested in whether that theory may suggest the development of different principles for governance in virtual worlds. We are making a positive argument rather than a normative one, so that when we speak of what virtual-world owners "should" do, the argument is made with respect to the needs of profit maximization. We begin by setting out the key issues in Sections 2-4 before investigating in Sections 5-8 the ways in which governance in virtual space, because of its differences from physical space, is likely to be correspondingly different.

Order and Disorder

The question of the proper balance of order and liberty is an ancient one. Hobbes depicted the state of nature absent government as a war of all against all and took the side of order by arguing that the state must be given absolute power to maintain it. For others, such as Locke (1986) or Bastiat (1996), the state itself is not to be trusted with excessive power, because that power will be used in destructive ways. It is possible, if not inevitable, that even well-intentioned rules will create unintended consequences that the rule-drafters did not predict, inducing the rulemakers to react with ever-more complicated rules in a futile attempt to achieve the desired outcome, at tremendous cost to both individual autonomy and social viability. The source of this problem, as the economist Friedrich von Hayek (1994) noted, is that planners know so little about the details of the world they govern that their clumsy rules inevitably cause people to react in unexpected ways, frustrating the planners' goals. The need to

1. Teen Second Life is restricted to teens aged 13-17. Its highly restrictive and protective policies and the unique profiles of its users make it a special case not generally relevant here.


conform to the planners' rules, or the cost of evading them, means that creative activity by individuals in possession of knowledge about particular opportunities – knowledge that is invisible to the planner – is stifled. In the limit, this ever-increasing control culminates in the catastrophe of totalitarianism. Thus, while a Hobbes might assert the need for a powerful state to prevent predatory behavior, a Hayek would emphasize the destructive effects of state control on individual freedom and creativity. There is therefore a compelling tradeoff between the order paradoxically necessary to enable creativity and the power that destroys it.

This tradeoff exists within the specific realm of economic creativity as well. On the one hand, an agent needs the freedom to experiment – to create a new business (or other social experiment) without restraint. A controlling authority, even a well-intentioned one, may impose so many rules on starting entrepreneurial ventures, and on how they are run once they are established, that business costs become cripplingly high. Fewer activities, even potentially promising ones, are undertaken, and society is poorer and less dynamic. On the other hand, the entrepreneur requires enforced order to a degree: property rights must be protected, there must be a court system so that contracts can be enforced, and so forth. Entrepreneurs may even benefit if the government enforces various kinds of protections against unintentional harm, so that their customers have the confidence to do business with them.

Production and Destruction

These are the problems that governments in physical space face all the time. And in virtual worlds they are fundamentally the same, though different in some of the particulars. Some virtual worlds, such as World of Warcraft and The Sims, are purely gaming environments, while others, such as Second Life and Active Worlds, are developed for entertainment and commercial purposes (virtual commerce or virtual business).
We focus here on these latter types of societies. Such virtual worlds, which are as full of commercial activity as any physical society, allow users considerably more creative freedom than games. There are no pre-plotted scenarios, avatars do not normally die, and, as noted above, these virtual worlds allow creation of content by their residents who, subject to modest limitations, own the intellectual property rights to it. They are worlds in which individuals choose their pattern of interaction, with (in contrast to physical space) few institutional and geographic constraints written into the code by the worlds' creators. Like human society in physical space, such worlds are unpredictable and constantly evolving – they become whatever the users collectively build. For instance, in Second Life, all content is created by its users except for some standard objects provided in the default library repository of "structures."

The ability to create in this way in virtual worlds, and the value such creativity has to users, is the fundamental reason why governance, which can restrict individual creativity excessively or insufficiently, is a balancing act. In virtual worlds, too little creative freedom limits peer-creation activities and thus makes a world uninteresting and therefore unprofitable, while too much makes it unpleasant or dangerous to the avatars who use it, because they become victims of other avatars, whether by design or by accident.

It is useful to distinguish between two kinds of activity: productive and destructive. Productive activity, through voluntary cooperation with other actors, leaves all who choose to work together better off – in physical space, such profit or not-for-profit activities as opening and operating businesses, or creating new cooperative social institutions such as a Boy Scout troop or



Journal of Virtual Worlds Research - Order and Creativity in VWs 7

a bowling league. Destructive activity leaves at least one participant in the activity worse off.[2] There are two varieties of destructive behavior – intentional and unintentional. Intentionally destructive (ID) behavior has the goal of forcibly limiting others' options, often by trying to seize their wealth – robbery, war, lobbying the government for special benefits unwillingly or unwittingly funded by other taxpayers, and so on. Unintentionally destructive (UD) activities make someone worse off only if certain contingencies occur, even though the seller may have (perhaps unreasonably) expected they would not. For example, in the physical world, selling medicine the seller knows to be ineffective while claiming otherwise is ID; selling food with ingredients purchased from the lowest-cost supplier, despite their being subject to poor quality control, may be UD – although from the buyer's point of view the effects are the same. Similarly, ID and UD activities are seen in virtual worlds. Examples of ID behavior there include a "griefer" assaulting another avatar (a concept clearly analogous to physical-world assault) or the coding of malicious scripts into seemingly benign objects such as a bouquet of roses. On the other hand, virtual banks may fail, taking the savings of participants with them, a form of UD behavior – the bank was not founded with the intent of destroying savers' deposits. This exact phenomenon led to a decision by Linden Labs in January 2008 to prohibit any business from offering "interest or any direct return on an investment," a decision we discuss further below. The trick for the designer of a virtual world is to maximize the welfare of its residents knowing that some residents will engage in either variety of destructive behavior.

Modeling the Order-Creativity Tradeoff

A way of thinking about the problem is to imagine first that a (physical or virtual) world's governing authority has the choice of two regulatory regimes, High or Low.
In a Low regulatory regime, there are no limits on individual freedom, while in a High regulatory regime, many activities are prohibited or regulated in the name of order. Assume that the world has two agents (agents 1 and 2), who have the choice of devoting their resources to constructive, ID, or UD activity. Figures 1 and 2 show potential distributions of income between the two agents in an economy. The curves AA' and BB' in Figures 1 and 2 represent two levels of potential income distribution between the two agents ("income" is used in its broadest economic sense – not just the proceeds of salaried labor, but the returns to providing any good or service that is valuable to someone else). The curves represent the Pareto frontiers of each economy – the set of all combinations of incomes x1 and x2 such that neither agent can be made better off without making the other worse off, the standard economic definition of efficient operation of an economy. Note that this definition of efficiency makes no statement about the desirability of a particular distribution of income. At point A, for example, agent 2 has all the income while

[2] Note that effective competition is not intentionally destructive activity. While it may make other competitors worse off, they had the opportunity to compete on terms that their customers prefer but chose not to. The gains from competition to participants in exchange (including successful competitors) exceed, by the first theorem of welfare economics (Varian, 1992), the losses to the non-participants who fail to win customers. This is why, in the Anglo-American legal system, competition is not a tort (Posner, 2007). Note also that the notion of destructive behavior here is static, and is unrelated to Schumpeter's (2008) concept of "creative destruction," which refers to the continuous dynamic remaking of an economy through entrepreneurial activity. Such activity will exist in virtual worlds as surely as in physical space, and is as beneficial there.




agent 1 has none, and there is no way to make 1 better off without making 2 worse off. But the usefulness of this concept of efficiency is that it allows us to say clearly that starting from any point in the interior of the curve, at point a, for instance, there are points to the northeast, moving toward the frontier, that are superior because they make both agents better off. The closer the particular combination of incomes is to the frontier, the more efficient the economy is. This concept of efficiency also allows illustration of the costs of destructive activity. Destructive activity moves us further away from the frontier. Suppose agent 2 grows food, and agent 1 makes clothes. If it is relatively easy for each agent to steal the produce of the other, then some of the resources otherwise devoted to producing food and clothing are instead devoted to protecting their crops and clothes from theft by the other agent – by buying weapons and building walls, for example. If theft is sufficiently lucrative, 1 and 2 devote so many resources to stealing (instead of concentrating on production) and defending their property against theft by the other agent that much less food and clothing are produced. But if the ruling authority can effectively enforce punishment against theft, making it more costly, agents 1 and 2 have an incentive to produce more and steal less, moving them from point a in the northeast direction, toward the frontier, within the dotted lines in the figure. In the limit, if enforcement against theft is perfect, 1 and 2 will end up on the frontier, somewhere (depending on the relative productivity of each agent) in the bolded section of the curve. What differentiates AA’ and BB’ is that in the economy subject to productive possibilities BB’ there are more restrictions on the ability of participants to engage in different kinds of activities. 
In the economy with potential production AA', agents are free, for example, to start virtual banks without obtaining a license from the world's authorities (or without being required to participate in a mandatory deposit-insurance program, the analogue of the Federal Deposit Insurance Corporation in physical space), or, in physical space, to use ingredients that have not passed government safety inspections, and so forth. If in a virtual world the banks are solvent or a partner's code is non-malicious, or if in the physical world the food ingredients are safe, then actual income will be somewhere along AA', which usually exceeds the outcomes along BB', in which various regulations of individual creative freedom do exist. AA' is a world with more potential income, because the government does not burden the farmer and the clothier with costly regulations. But potential income is not the same as actual income. Potential income is eroded by both types of destructive behavior. The theft of clothing, or the selling of tainted food, means that the actual outcome will be below the frontiers, at point a (with incomes x1A and x2A) in the Low world and point b (with incomes x1B and x2B) in the High one. The frontier BB' represents a society that tries to control this loss through regulation. The function of such regulation is to limit the movement below the curve while keeping the curve itself as high as possible; but such regulation means that potential income, because of compliance and enforcement costs, is lower, so that BB' lies inside AA'. A less regulated economy, in other words, raises potential income but may or may not raise actual income for each agent.
While AA’ and BB’ thus represent the set of potential incomes for each agent – i.e., the set of possible outcomes if there is no destructive activity – the distances between a and AA’ in the Low world and between BB’ and b in the High world each represent the loss of income due to destructive activity in that world.




Figure 1. A world where more freedom is preferable.

Figure 2. A world where less freedom is preferable.

In Figure 1, actual income in the Low world (low regulation) at a is higher for both agents than at b in the High world (high regulation). But in Figure 2, the losses from destructive activity are so great without high regulation that the actual income for each party in the Low world is much closer to the origin and considerably worse than the outcome under a High regime. In this case, substantial limits on social experimentation are justified despite their negative effects on potential income. The simple model illustrates the classic argument between those who favor strict enforcement of law and of traditional cultural patterns and those who favor a more liberal approach in the pursuit of progress (for more analysis, see Raeder, 1997).

Intentionally Destructive (ID) Behavior

The question of interest becomes whether virtual worlds, compared to the physical one, are better served by a Low or a High regime. There is no way to answer for certain, but the model suggests some guiding principles. First, ID activity should generally be policed to the extent possible. The model indicates that all ID activity moves the participants in any virtual world away from the Pareto frontier. It is true that entrepreneurs may respond to extensive griefing by developing anti-griefer tools, which generates wealth for them and (critically, taking the existing amount of griefing as given) for the purchasers of their products. But it is still true that




griefers are on balance wealth-destroying, for the same reason that burglary is wealth-destroying despite the fact that it generates a demand for burglar bars, or that breaking windows is wealth-destroying despite the fact that it generates demand for window-repair services. It is wise for virtual worlds to police purely predatory activity to the extent the technology allows. So-called "griefers" mimic physical-world vandalism, assault, and homicide, and (also in imitation of physical-world behavior) frequently do so through organized gangs, with command structures, division of labor, and meticulous planning. Their efforts are often profoundly resented by other virtual-world users. This is why firms such as Linden Labs take them so seriously. Dibbell (2008) offers an account of the constant war between griefers, their victims, and the owners of virtual worlds. In fact, virtual worlds often use tiered freedom to limit potential ID behavior. For instance, an island owner can make his island open to anyone, or private, limited to those with permission to enter. If an avatar behaves badly on a private island, the owner can ban the avatar, temporarily or permanently. Another type of restriction can be imposed on avatars through group affiliations and titles. A group owner is given the right by an island owner to recruit new group members and to give them different classifications, such as member or guest. A guest group member might not be able to view certain content or obtain certain items created by the group members. Such a tiered structure is a way to control ID behavior in virtual worlds, and is similar to the management of property in physical space – the bar manager, for example, who is empowered by the owner to expel a disruptive customer. Constant vigilance against such actions will be a requirement for the success of virtual worlds, all the more so because of the ease with which people can exit virtual, as opposed to physical-world, societies.
Migration among virtual worlds is a little-studied phenomenon, although Castronova (2008) has discussed migration of human activities from physical to virtual space. If a resident of a country in physical space is threatened by widespread violence, her options are sometimes limited to self-defense rather than migration. She may hire security guards or place defense mechanisms in her home, but the high degree of society-specific investment she has made in herself (mastering the local language rather than a foreign one, understanding local business culture but not that of a foreign land), combined with the cost of uprooting her household and moving to a foreign land, makes migration comparatively difficult. Movement from one virtual world plagued by ID behavior to another where it is much better controlled is, in contrast, a relatively simple act. The control of ID behavior is therefore likely to be a key requirement of successful virtual worlds. The user who in one society must constantly defend his avatar or island is likely to strongly prefer the world whose creator effectively does this job for him, just as individuals in physical space prefer societies with law and order to those where they must rely primarily on themselves for defense.

Unintentionally Destructive (UD) Behavior and the Value of Experiential Variety

The challenge arises with UD behavior. Should the attitude of the owners of a virtual world, absent the intention to defraud (fraud being ID behavior), be one of caveat emptor? Or should they maximize their participants' freedom to experiment, even at the cost of more UD activity? An answer to this question is suggested by the role of variety in virtual worlds. We believe that the primary attraction of virtual worlds for the consumer is their astonishing variety




and creativity. Variety in physical space is valuable to consumers, though only up to a point. Consumers like to have more kinds of cars to choose from, but too many choices can become paralyzing, as recent research suggests (Botti & Iyengar, 2006). But in virtual worlds, diversity of experience is often the goal itself. The proper comparison for the value of variety in virtual-world design is not to a consumer having difficulty choosing from among several dozen different kinds of toothpaste, but to a person who enjoys traveling and wants to visit as many countries as possible. A facilitator of variety in virtual worlds is their low-cost materials and resources. Players or avatars are able to obtain many items for free or build them with very little investment. For instance, a collector may be able to obtain a copy of a virtual racing car for free in Second Life, compared to paying large amounts of money for an actual or even a replica car in the physical world. Such low costs foster the exchange of goods and increase the value of variety in virtual worlds. The role of diversity in virtual worlds has been explored before. Castronova (2006) invokes the economic model of club goods to describe virtual worlds. A club good is public, in that benefits can be provided simultaneously to many members, but it is also subject to crowding costs when too many people use it at once. More participants can be better for the user because more variety makes the product more enjoyable, but too many participants make the club undesirable, for search-cost reasons (it is too difficult to find a good trading partner) or for infrastructure-cost reasons. A country club with too few members is one where opportunities to socialize are limited, but a club with too many members is one where it is difficult to reserve time on the golf course, because building enough courses to accommodate so many users with reasonable waiting times would be prohibitively expensive.
The former effects are known as network effects, in which the bigger the network of participants, the greater the opportunity for valuable exchange and interaction. The latter effects are crowding costs, the difficulties that arise, either from search costs or overuse of the club’s resources, from too many members. Castronova argues that as the number of participants increases from zero, virtual worlds benefit from having more players for a time, but crowding costs eventually mean that adding players makes the world less desirable. Note that the crowding costs are not simply the claim on computer time from more users, which can be addressed by the purchase of more processing power and memory, but the actual occupation of virtual space by avatars – the problem, to borrow from the baseball player Yogi Berra’s famous remark, of the island that is so crowded that no one goes there anymore. But we believe that in the worlds under study here the network effect will dominate. While games built around specific achievements and experiences – combat games, for example – may quickly be subject to crowding, games built around social interaction are much less so. For such worlds the variety of potential experiences cannot help but make the experience more attractive, subject to two qualifications. First, the interactions must be primarily productive rather than destructive. Few if any residents participate in virtual worlds in search of more variety in assault by griefers. Second, there must be a technology making it easy to seek out new experiences and to store and retrieve enjoyable ones. If these conditions are met, interaction in a virtual world is not like consumption of such physical objects as cars or food, where decisions are often driven either by a desire for durability or by habit. 
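Castronova's club-good tradeoff – value rising with membership through network effects until crowding costs dominate – can be sketched with a toy functional form. The linear-benefit, quadratic-cost specification and its coefficients below are our own illustrative assumptions, not taken from his model:

```python
# Toy club-good model: network benefits grow linearly with membership n,
# while crowding costs grow quadratically, so net value peaks and then falls.
# Functional form and coefficients are illustrative assumptions only.

def world_value_per_user(n, network_gain=1.0, crowding_cost=0.01):
    """Net value to a typical user of a world with n participants."""
    return network_gain * n - crowding_cost * n ** 2

# Value peaks at n* = network_gain / (2 * crowding_cost): below n*, adding
# players helps (the network effect dominates); above it, crowding dominates.
best_n = max(range(1, 201), key=world_value_per_user)
print(best_n, world_value_per_user(best_n))  # peak membership and its value
```

Our claim in the text amounts to saying that, for socially oriented worlds, the network-gain term is large and the crowding term small, pushing the peak far to the right.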
While an observer just arriving from another planet might marvel at the dozens of breakfast cereals that the consumer in a typical supermarket has to choose from, the average consumer chooses relatively few of them over time. In part this is a function of the quality provided by known brands – a consumer may not wish to risk low quality from a producer he does not know and thus continues to consume the same




brand rather than be adventurous and try another. In addition, many physical products are not bought often enough for variety to be a compelling trait compared to reliability. But in a virtual world like Second Life, visiting many different islands is key to the attractiveness of the experience. Variety is costly to manufacture, but this effect is much more dramatic in physical than in virtual space. Producing new varieties of physical-world products often requires a multitude of resources unnecessary in the virtual world, such as electric power, manufacturing equipment, advertising slots, and so on. These resources are much more meaningfully scarce than creativity, which, because of the low cost of computer processing and storage, is the key ingredient in virtual worlds. Recall that ID behavior, no matter how creative, should always be controlled; what makes UD behavior problematic in physical space is that competition is relatively ineffective there as a remedy. But because of the ease of producing variety, competition is more powerful in virtual space than in physical, and thus it is more likely that the losses to UD behavior in a Low (low-regulation) world will be outweighed by the much greater potential income. Part of the reason that a bank failure is more problematic in physical than in virtual space is that there are relatively few banks in the former, because banks are difficult to start there. Banks in virtual space are less limited by this constraint, and the fact that it is easier to start one suggests erring on the side of creativity rather than regulation. It is true that Linden Labs recently took the extreme step of banning such financial institutions, but we wonder whether such a response is excessive. It is undoubtedly true that some, perhaps significant, portion of banking activity in Second Life, as with any kind of economic activity anywhere, was ID.
But while ID behavior can and probably should be prohibited (e.g., by banning banking fraud in a virtual world), this does not imply that an entire economically useful activity should be banned. Such recognition of the power of constructive activity in virtual worlds is all the more true if (as seems likely, since it is so frequent in physical space, where the costs are higher) people in virtual space develop systems for rating the quality of various services (e.g., banks) offered there. This effect is further enhanced by the non-arbitrary dictatorship that is likely to prevail in most virtual worlds. In physical space, governance occurs through both more dictatorial and more consensual systems. It is not obvious that a non-consensual ruler, such as a hereditary monarch rather than an elected president, is intrinsically hostile to human happiness. The key issue is not the fact that a dictator is a dictator, but what it is he dictates. If rule is by ironclad custom or otherwise made predictable and non-arbitrary, citizens may still be free to pursue their interests. Dictatorial rule that nonetheless leaves substantial room for individual autonomy within expansive limits, such as took place in nineteenth-century Austria-Hungary or in British-ruled East Asia (Sowell, 1994), might be preferable to democratic societies where the rules – who is permitted to do what, what government services are provided, and who pays for them – oscillate wildly from one government to the next. And virtual worlds are dictatorships, but profit-maximizing ones. The owners set the rules for interaction and social experimentation, but everyone knows what the rules are and knows they are likely to be stable, because ownership of the rulemaking power will not change much, and because the ruler's goal – profit maximization – is transparent. Political instability – that is, instability in the rules for social experimentation and interaction – is a major deterrent to creative activity.
Worlds run strictly for profit may have rules that differ substantially from those in physical space, but they will nonetheless be stable, and hence will lend themselves to more creative experimentation.




In short, in virtual space both demand and supply favor the creation of variety. Less regulation of activity that might be UD allows for more activity that will in the end be constructive, while the losses to UD activities are also minimized relative to physical space by the features of virtual space. Note finally, however, that these arguments are less compelling in the case of virtual worlds designed for the young, where variety that is constructive for adults may be destructive, sometimes even intentionally so, for children.

An Example of Facilitating Constructive Activity in Virtual Space – Intellectual-Property Rules

To summarize, controlling ID activity enhances wealth, but regulating UD activities increases the cost of constructive activity. We believe that virtual worlds will (and should) ultimately be characterized by the promotion of such constructive behavior by taking advantage of opportunities to improve upon arrangements that are inevitably problematic in physical space. Some confirmation of the tilt toward creativity and against restricting it can be found in the intellectual-property rules of Second Life. Note first that intellectual-property protection, particularly copyrights and patents, is in physical space a tradeoff. The granting of a copyright or patent is legal recognition of a monopoly right. Monopolies charge higher prices and produce less compared to a competitive market, and so this monopoly grant is costly. However, if innovations are costly to create but cheap to copy once someone else has incurred this cost, the incentive to create without intellectual-property protection is severely diminished. To see how these issues are handled in virtual space, consider the excerpts below from the user agreement of Second Life:

Users of the Service can create Content on Linden Lab's servers in various forms.
Linden Lab acknowledges and agrees that, subject to the terms and conditions of this Agreement, you will retain any and all applicable copyright and other intellectual property rights with respect to any Content you create using the Service, to the extent you have such rights under applicable law. Notwithstanding the foregoing, you understand and agree that by submitting your Content to any area of the service, you automatically grant (and you represent and warrant that you have the right to grant) to Linden Lab: (a) a royalty-free, worldwide, fully paid-up, perpetual, irrevocable, non-exclusive right and license to (i) use, reproduce and distribute your Content within the Service as permitted by you through your interactions on the Service, and (ii) use and reproduce (and to authorize third parties to use and reproduce) any of your Content in any or all media for marketing and/or promotional purposes in connection with the Service. You also understand and agree that by submitting your Content to any area of the Service, you automatically grant (or you warrant that the owner of such Content has expressly granted) to Linden Lab and to all other users of the Service a nonexclusive, worldwide, fully paid-up, transferable, irrevocable, royalty-free and perpetual License, under any and all patent rights you may have or obtain with respect to your Content, to use your Content for all purposes within the Service. You further agree that you will not make any claims against Linden Lab or against other users of the Service based on any allegations that any activities by either of the foregoing within the Service infringe your (or anyone else's) patent rights.



The first feature of the agreement worth noting is that physical-world copyright law does not cease to hold in a virtual world. Anyone who writes a song and incorporates it onto her island in a virtual world still holds all the legal rights to the song that she possesses in physical space in her country. Whether copyright can be meaningfully enforced in virtual space, particularly given that companies may incorporate anywhere and the identities of those who appropriate copyrighted material may be harder to trace, is a separate question. These caveats aside, Second Life allows any resident to click on an object and learn the rules on distribution and modification that the creator has attached to it. That the creator can define such rights so easily is the key point. Avatars in Second Life can create almost any digital content – a table, a tree, a store, or even a whole city. Such content is owned by its creators, who can copy it, sell it, or give it away. For instance, the popularity of fashions for avatars has led many people to open fashion stores in Second Life. Clothes, accessories, and even body shapes and skins are created and put on sale by their owners. The incentive to create such things is diminished if the owner cannot control re-use or modification. Such control can be motivated by emotional satisfaction as much as by a desire to make money. But Second Life uses technology to vest the creator with a near-absolute intellectual-property right that the physical world can only crudely duplicate through such tactics as copyrights and patents. Physical-world enforcement of intellectual-property rights generally involves uncertainty over such questions as whether an invention is truly novel, or whether fair use should govern the reproduction of a book excerpt.
Such questions often create expensive litigation, and new technology generates new issues that may take years to resolve in the courts, creating delay that may retard innovation further. But Linden Labs has used technology to create a near-perfect property right for objects, songs, and other creations, the only limitation being the ability of other residents to circumvent the Second Life code that allows creators to set the rules for use of their creations. This means that Linden Labs itself generates the property right, which is defined, without the need for courts or cease-and-desist letters, in near-absolute terms. If a creator wishes to use someone else's creation as the raw material for his own, he simply negotiates an agreement with the owner, pays the agreed-upon price, and then has complete access to it. Intellectual property in Second Life (with the exception of the prohibition on taking creations out of Second Life into another virtual world) duplicates the theoretical ideal of economic models of intellectual property, and thus maximizes the creativity that physical-world laws can only imperfectly promote. This is unsurprising, given that the monopoly costs of intellectual-property rights are lower in an environment such as Second Life, assuming that consumers desire variety and that the creation of variety is easy.

Transparency

UD activities are still costly, although they are a negative side effect of an activity that may be on balance beneficial. How are they to be policed? Transparency is the key requirement. Transparency here refers to the ease with which users can obtain information, financial and otherwise, about the partners they contemplate doing business with. In physical space this is accomplished through both public and private means. The former include such reporting and monitoring agencies as the Securities and Exchange Commission, as well as the policing of fraud.
The latter includes such devices as standards set by the accounting industry and groups, such as Consumers Union, that test products for reliability. Virtual worlds should




make it easy for any user to access the history and reputation of any commercial enterprise, perhaps through such tactics as establishing (or importing from physical space) accounting standards that its enterprises must adhere to and make readily accessible, or by allowing (and making easy) the creation of ratings by other enterprise customers. Since users will be able to create on their own a wide variety of assessment or vetting methods for virtual businesses, the world's owners need only refrain from prohibiting the creation and use of such methods. We predict, because of the ease of search in virtual space, that the development of such ratings systems will become a common feature of virtual worlds, and perhaps even a substantial money-making opportunity in its own right.
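The sort of user-built vetting mechanism we have in mind can be sketched in a few lines. The class, its method names, and the sample ratings below are all hypothetical illustrations of ours – nothing here corresponds to an actual virtual-world API:

```python
# Hypothetical sketch of a merchant-reputation registry of the kind users
# might build: customers file ratings, and anyone can query an enterprise's
# history before doing business with it. All names are invented.
from collections import defaultdict
from statistics import mean

class ReputationRegistry:
    def __init__(self):
        # enterprise name -> list of (score, comment) pairs
        self._ratings = defaultdict(list)

    def rate(self, enterprise, score, comment=""):
        """Record one customer's rating of an enterprise (1 = worst, 5 = best)."""
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self._ratings[enterprise].append((score, comment))

    def summary(self, enterprise):
        """Return the rating history, or None if the enterprise has none."""
        entries = self._ratings[enterprise]
        if not entries:
            return None  # no track record: caveat emptor
        return {"count": len(entries),
                "average": mean(score for score, _ in entries)}

registry = ReputationRegistry()
registry.rate("Example Virtual Bank", 1, "deposits frozen")
registry.rate("Example Virtual Bank", 2)
print(registry.summary("Example Virtual Bank"))  # {'count': 2, 'average': 1.5}
```

The point of the sketch is that no action by the world's owner is required beyond permitting such a registry to exist and be queried.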




Bibliography

Barnes, S. (2007). Virtual worlds as a medium for advertising. The DATA BASE for Advances in Information Systems, 38(4): 45-55.

Bastiat, F. (1996). The Law. Foundation for Economic Education. Originally published in 1850.

Botti, S., & Iyengar, S. S. (2006). The dark side of choice: When choice impairs social welfare. Journal of Public Policy and Marketing, 25(1): 24-38.

Castronova, E. (2007). Exodus to the Virtual World: How Online Fun is Changing Reality. New York: Palgrave Macmillan.

Castronova, E. (2006). Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.

Dibbell, J. (2008). Mutilated furries, flying phalluses: Put the blame on griefers, the sociopaths of the virtual world. Wired Magazine, 16.02, January 18, 2008. Retrieved July 27, 2009, from http://www.wired.com/gaming/virtualworlds/magazine/16-02/mf_goons.

Duranske, B. T. (2008). Virtual Law: Navigating the Legal Landscape of Virtual Worlds. American Bar Association.

Hayek, F. A. (1994). The Road to Serfdom. Chicago: University of Chicago Press.

Hobbes, T. (1972). Leviathan. Hazell, Watson & Viney Ltd. Originally published in 1651.

Kock, N. (2008). E-collaboration and e-commerce in virtual worlds: The potential of Second Life and World of Warcraft. International Journal of e-Collaboration, 4(3): 1-13.

Locke, J. (1986). The Second Treatise of Government. Macmillan. Originally published in 1690.

Posner, R. A. (2007). Economic Analysis of Law. 7th edition. New York: Wolters Kluwer.

Raeder, L. C. (1997). The liberalism/conservatism of Edmund Burke and F. A. Hayek: A critical comparison. Humanitas, 10(4): 70-88.

Schumpeter, J. A. (2008). Capitalism, Socialism, and Democracy. New York: HarperPerennial. Originally published in 1942.

Sivan, Y. (2008). 3D + 3C = "Real" virtual worlds. Cutter IT Journal, 21(9): 6-13.

Sowell, T. (1994). Race and Culture: A World View. New York: Basic Books.

Varian, H. R. (1992). Microeconomic Analysis. 3rd edition. New York: Norton.

16


Volume 2, Number 3 Technology, Economy, and Standards October 2009

Content Level Gateway for Online Virtual Worlds By S. Van Broeck, M. Van den Broeck and Zhe Lou Alcatel-Lucent, Belgium

Abstract

This paper focuses on protecting future virtual worlds from the familiar but often irritating spam, pop-ups, adware, and other web-based nuisances. Over the last decade, the internet has had a profound effect on how we work and how we arrange our personal and professional lives. Along with the many advantages, people are also burdened with inefficient parental control, spam, pop-ups, viruses, adware, spyware, and more. Virtual worlds available to the general public will obviously be accessible via the internet, just like the web pages people visit today. Virtual worlds will therefore have to deal with dangers and negative influences similar to those users and administrators experience today, but now manifesting themselves as 3-D models, avatars, textures, animations, or any other type of content commonly used by virtual worlds. We propose a solution to guard our future internet from such counterproductive content.

Keywords: policy control; virtual worlds; MPEG-V; spam; OSG.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Content Level Gateway for Online VWs 4

Content Level Gateway for Online Virtual Worlds By S. Van Broeck, M. Van den Broeck and Zhe Lou Alcatel-Lucent, Belgium

Along with the many advantages of the internet, people find themselves overwhelmed with inefficient parental control, spam, pop-ups, viruses, adware, spyware, and other web "junk." All these elements work against productivity and against enjoyment of the internet, and need to be counteracted by specialized software such as spam filters, firewalls, and adware filters. Market revenues in this area reach into the billions of dollars.

Virtual Worlds (VWs) are positioning themselves as the future of the internet. Today, hundreds of VWs already exist, each addressing a target group. There are VWs created specifically for educational and training purposes, others that focus on travel, social networking, or gaming, and still others that target communities such as corporate environments or young children. It is this mix of VWs, and the seamless interoperability between them, that represents the three-dimensional (3-D) internet of tomorrow. People in the future will be able to walk into virtual showrooms, view the car of their dreams in their favorite color, step inside, activate the controls, talk to other visitors, and make a deal with a salesperson. They will be able to take a virtual trip, follow personalized courses with hands-on exercises on virtual models, and hold meetings and perform tasks in virtual settings that are not possible in the real world.

These VWs will have to deal with the same negative influences as the internet of today, only now these distractions will come in a different package: 3-D models, avatars, textures, animations, scripts, sound, or any other type of content commonly used by virtual worlds. People will be subjected to aggressive, violent, annoying, sexual, intimidating, or otherwise offensive content. This paper proposes a solution to guard our future internet from such counterproductive content from the very start.
Related Works

Current filtering systems operate on specific formats and protocols, including the Hypertext Markup Language (HTML), the Simple Mail Transfer Protocol (SMTP), and the Internet Message Access Protocol (IMAP). Several standardization initiatives address this topic by defining vocabularies, categories, or semantics that describe types of content. The World Wide Web Consortium (W3C) maintains the Platform for Internet Content Selection (PICS) recommendation for HTML content, and the Family Online Safety Institute (FOSI) maintains the Internet Content Rating Association (ICRA) standard. Other organizations use these standards to filter content based on policies set by a central or supervising authority; companies can, for example, implement a company-wide policy for access to HTML content. One such policy language is the eXtensible Access Control Markup Language (XACML) from the Organization for the Advancement of Structured Information Standards (OASIS).
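As a purely illustrative sketch, the following Python fragment shows how a PICS/ICRA-style content label might be matched against a centrally managed policy in the permit/deny spirit of XACML. The label vocabulary, the numeric severity levels, and the policy structure are assumptions for the example, not part of any of these standards.

```python
# Minimal sketch (not part of PICS, ICRA, or XACML): checking a
# content label against a supervising authority's policy.
from dataclasses import dataclass

@dataclass
class ContentLabel:
    category: str   # e.g. "violence", "nudity", "advertising" (assumed vocabulary)
    level: int      # severity rating assigned by the labeling authority

class Policy:
    """A supervising authority's per-category tolerance thresholds."""
    def __init__(self, max_levels):
        self.max_levels = max_levels  # {category: highest permitted level}

    def decide(self, label):
        """Return 'permit' or 'deny', XACML-style."""
        limit = self.max_levels.get(label.category)
        if limit is None:             # unknown category: deny by default
            return "deny"
        return "permit" if label.level <= limit else "deny"

company_policy = Policy({"violence": 1, "nudity": 0, "advertising": 2})
print(company_policy.decide(ContentLabel("nudity", 2)))       # deny
print(company_policy.decide(ContentLabel("advertising", 1)))  # permit
```

A deny-by-default rule for unlabeled or unknown categories mirrors the observation below that content may be untagged or tagged incorrectly.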

Certain content, however, may not be tagged or may be tagged incorrectly. Therefore, a number of complementary actions can be taken, including black and white lists, content analysis (optionally complemented with data mining or machine learning techniques), statistical data compression models, or user feedback statistics. A great deal of effort is spent identifying every single piece of content on the web and preventing inappropriate content from reaching certain target groups.

To some extent, the techniques mentioned can be reused in VWs. They can, for example, bar certain users from accessing a particular VW, much as access to certain internet sites is barred today. This solution does not, however, provide fine-tuned control in which an authority can set a policy on specific content within a VW rather than on the VW as a whole. Additionally, the structure of a VW is quite different from the structure of a web site and therefore also requires adapted techniques.

Today, most VWs have specific solutions built into their platform. Linden Lab's Second Life, for instance, uses a combination of (1) access rights at the object level and (2) categories at the land or parcel level to prevent certain individuals from accessing certain content. In IMVU (Instant Messaging Virtual Universe), persons with an adult pass (AP) can see naked avatars, while persons without such a pass see them in standard clothing. Such solutions are platform specific, do not address interoperability between different platforms, and therefore do not support third-party policy control over content spanning several VWs.

Research has been undertaken to define methods tailored to screening certain content for a specific environment. Such methods can be used to modify or replace certain objects for certain users; a religious person may, for example, be presented with a decently dressed avatar instead of a scantily dressed one. These, however, are also specialized solutions, often incorporated in the VW platform itself. System administrators will either have to deal with the specifics of each of these platforms or ban access to a platform entirely.
In the internet of tomorrow, where numerous VWs will need to interwork, a more fine-grained solution is needed in which an authority can create a single content-handling policy applicable to all VW platforms.

Solution

VWs consist of content that is inherently different from the content filtered by current filtering applications. Billboards may carry textures that show violent behavior, models may have sexually offensive animations attached, and automated avatars or bots may annoy people by offering all kinds of merchandise. To safeguard people from these disturbances, a policy mechanism should be put in place that can be configured by a central authority on behalf of individuals or organizations and that is applicable to all VWs. Similar to current internet policy management, we propose to locate this functionality in access equipment such as routers and gateways.


Figure 1: An example of an offending texture within a virtual world.

In order for such a policy to work, every type of content must be labeled. Labels can be part of the content itself, as in the PICS format, or they can be stored separately from the content as semantic descriptions, as in the ICRA format. As an example of the first approach, the extensible COLLAborative Design Activity (COLLADA) format could be extended to hold PICS information on a per-element basis. The second approach implements a separate set of semantics, using technologies such as the Resource Description Framework (RDF), that link uniquely to the content and can reveal information about it. In either approach, models, textures, animations, and any other elements can be labeled with a type identifier for use by the policy system.

Most, if not all, VWs are organized hierarchically. One library providing such a hierarchical structure is OpenSceneGraph (OSG). Access to an element's hierarchy level, as programmed in OSG, can also be used by the policy control: once an element is encountered whose content type is rejected by the current policy, all dependent elements can be refused as well. The COLLADA format could be extended, or the streaming protocol updated, to include the current hierarchy level of each element.

Once the type of content can be discovered by a central authority, policies can be set up in a Policy Management Point (PMP) to intercept certain content according to the user's preferences. In case of interception, a specialized application or rule engine can decide what actions to take next. Figure 2 gives a functional overview of the solution, showing five different countermeasure options for the case in which a policy rule is violated and a non-compliant condition is therefore encountered.
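The hierarchy rule just described, where rejecting an element also refuses all of its dependents, can be sketched as a simple scene-graph traversal. This is an illustrative Python sketch, not OSG or COLLADA code; the node layout and the is_permitted callback are assumptions for the example.

```python
# Illustrative sketch of hierarchical policy filtering: once a node's
# content type is rejected, its entire subtree (all dependents) is
# refused as well. Not an OSG or COLLADA API.

class SceneNode:
    def __init__(self, name, content_type, children=None):
        self.name = name
        self.content_type = content_type
        self.children = children or []

def filter_scene(node, is_permitted):
    """Return a copy of the subtree with rejected branches pruned."""
    if not is_permitted(node.content_type):
        return None                        # drop node and every dependent
    kept = [c for c in (filter_scene(ch, is_permitted)
                        for ch in node.children) if c is not None]
    return SceneNode(node.name, node.content_type, kept)

scene = SceneNode("room", "geometry", [
    SceneNode("billboard", "advertising", [
        SceneNode("poster", "texture")]),  # dependent of the billboard
    SceneNode("chair", "geometry"),
])
clean = filter_scene(scene, lambda t: t != "advertising")
print([c.name for c in clean.children])    # ['chair']
```

Note that the "poster" texture is refused without ever being inspected, purely because its parent was rejected: this is the saving that access to the hierarchy level buys the policy control.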


Figure 2: A functional overview of content control. (The original diagram shows asset flows between the Virtual Environment Platform, the Content Gateway, and the Virtual Environment Client, with numbered arrows 1-5 marking the countermeasure options discussed below.)

One set of actions the gateway can take (arrow 1 above) is to simply abort the application, discontinue all further content, remove all intercepted content, or remove all intercepted content together with its dependents. Simply removing objects from a VW, however, may leave the scenery incomplete, may prevent the application logic from functioning correctly, or may create inconsistencies between clients that have different sets of policies for the same VW: the religious person mentioned previously would see the scantily dressed avatar differently from other people. Overcoming such barriers through adequate autonomous removal or replacement of elements requires considerable understanding of the VW platform and application logic.

To remedy the shortcomings of this simple approach, autonomous backward communication with the VW platform presents another viable option, illustrated by arrow 2. The VW platform can take corrective actions, such as sending back adapted content and distributing these corrections to other clients. When a child enters a virtual room, for example, all offensive content may be replaced instantly for all the people already present in that room.

Intelligence in the gateway may also autonomously replace content received from the VW platform with other content. For instance, as indicated by arrow 3, non-compliant textures may simply be replaced by a specific, well-recognizable default texture, much like the text a browser shows when a non-compliant page is opened.

As indicated by arrow 4, intelligent VW platforms may also decide to first retrieve the policies in place for a certain client from the Policy Information Point (PIP), so that they can adapt the VW in accordance with the policy settings in advance. In that case, the intermediate Policy Execution Point (PEP) will continue to operate on the incoming content but will most probably never need to intercept.
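A gateway-side dispatch among the first three countermeasures (drop the content, ask the platform for adapted content, or substitute a default texture) might look like the following sketch. The action names, the asset fields, and the default texture name are hypothetical; a real gateway would operate on the platform's streaming protocol rather than on dictionaries.

```python
# Hedged sketch of gateway countermeasures for a policy violation
# (arrows 1-3 above). Asset representation and action names are
# assumptions for illustration only.

DEFAULT_TEXTURE = "policy_blocked.png"   # assumed placeholder asset

def handle_violation(asset, action):
    if action == "drop":                 # arrow 1: remove intercepted content
        return None
    if action == "ask_platform":         # arrow 2: request adapted content
        return {"request_replacement": asset["id"]}
    if action == "replace_texture":      # arrow 3: substitute a default
        if asset["kind"] == "texture":
            return dict(asset, data=DEFAULT_TEXTURE)
        return None                      # non-textures fall back to dropping
    raise ValueError(f"unknown action: {action}")

blocked = {"id": 42, "kind": "texture", "data": "offensive.png"}
print(handle_violation(blocked, "replace_texture")["data"])  # policy_blocked.png
```

The texture-substitution branch is the analogue of a browser's "blocked page" message: the user sees a recognizable placeholder instead of a hole in the scenery.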
This third approach also allows every individual VW application's logic to implement the corrective measures that best match that logic. One VW platform may, for example, decide to disallow the user from visiting certain places, while another may choose simply to substitute the violating content with an acceptable alternative.


As with current solutions, the described policy control can be extended with black and white lists for VW platforms as well as for individual elements within a VW, user feedback on VW platforms and elements (see arrow 5), algorithm-based content inspection, statistical information, or any other existing means to help identify the type of content. For example, textures may be screened for nudity, models and animations can be analyzed for obscenity, and scripts can be evaluated to discover misbehavior.

It may prove beneficial to introduce a general replacement strategy for elements that are rejected by a policy and for which the VW platform cannot or does not take corrective action. In the case of animations, the violating animation could be replaced by an animation that signals refusal; in the case of textures, a generalized texture could be defined that indicates the violation to the user.

In order to screen the content, the policy execution point must have access to it. It may therefore be necessary for VWs to agree on security measures with the policy execution point. Both parties may use a Public Key Infrastructure (PKI) to secure their communication, or simply make use of secure protocol layers such as the Secure Sockets Layer (SSL).

Conclusion

To safeguard the future internet, where many different VWs will co-exist and have to interoperate with each other, policy-based control is needed to protect users from unwanted or malicious content. This paper proposes a solution in which all VW content can be labeled and, based on these labels, screened by a policy authority. In case of violations, the solution provides (1) several simple autonomous corrective measures, (2) a communication channel to the violating VW so that the VW can take corrective measures itself, and (3) a communication channel by which the VW can anticipate violations by retrieving the policy before streaming. On top of this policy-based mechanism, existing non-policy-based measures for blocking unwanted content remain applicable.

This paper has been written in the scope of the Information Technology for European Advancement 2 (ITEA2) Metaverse1 project, which is in charge of MPEG-V (Moving Picture Experts Group for Virtual Worlds) standardization.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

On the Creation of Standards for Interaction Between Robots and Virtual Worlds By Alex Juarez, Christoph Bartneck and Lou Feijs Eindhoven University of Technology

Abstract

Research on virtual worlds and environments has increased tremendously in the last decade, giving birth to a variety of applications spanning several areas, such as virtual reality, human-computer interaction, psychology, and sociology. In this paper we elaborate on one issue affecting the areas of virtual worlds and robotics: the lack of standard mechanisms for communication and interaction between virtual worlds and robots. We contribute to the scientific community our thoughts on the possibility of creating a standard platform that enables seamless interaction between these heterogeneous, distributed devices and systems. We hope that these ideas will, in the future, turn into applications that not only address the challenges in communication, control, and interoperability of such systems (robots and virtual worlds), but also help to improve people's quality of life through tangible products and services.

Keywords: virtual worlds; robots; robotics; standards; communication and interaction.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research - On the Creation of Standards for Interaction 4-

On the Creation of Standards for Interaction Between Robots and Virtual Worlds By Alex Juarez, Christoph Bartneck, and Lou Feijs Eindhoven University of Technology

Research on virtual worlds and environments has increased tremendously in the last decade, giving birth to a variety of applications spanning several areas, such as virtual reality, human-computer interaction, psychology, and sociology. Nowadays it is common to see humans of all ages subscribing to and using virtual worlds: online representations of reality like those encountered in popular internet applications such as Second Life (www.secondlife.com) or IMVU (www.imvu.com). In these virtual worlds, humans can form communities and establish bonds with both avatars and other real people. The interaction is even reaching levels where the real and virtual worlds merge: in "real life," virtual items can be purchased on eBay and immediately be used in the virtual world. In a similar way, appliances and toys like the Nabaztag (www.nabaztag.com) can detect events occurring in the virtual world and communicate them to their owners in the real world, showing a synergy that allows virtual and real agents to become essential parts of our lives.

One promising area of application for this kind of interaction is robotics. Traditionally, robots have been used to help humans in labor-intensive and hazardous work, as research subjects, or simply as a means of entertainment. Development in robotics has reached a high level of sophistication, easily appreciated in the many complex, precise, and accurate manipulators, autonomous mobile platforms, surveillance and rescue vehicles, and insect-like and humanoid robots available either commercially or as research prototypes. Yet robots and robotics in general face a major challenge: reaching the masses. Many interesting and inspiring robotic projects never reach media and public attention because of expensive components, poor performance in highly complex environments of operation, tight IP agreements, or simply bad marketing strategies.
The massive and growing popularity of virtual worlds makes it possible to showcase real robotic agents in challenging environments, to show their features in a collaborative setup, to bring them to mainstream attention and, even more importantly, to reach potential customers directly. Furthermore, virtual worlds allow us to test new robotic platforms in circumstances that most popular simulation tools lack: a highly interactive, non-deterministic, socially affected, close-to-reality environment in which the robot is able to show its true potential.

The social presence of a robot can also be increased by its inclusion in virtual environments. For example, a service robot that is able to connect to a virtual world can help children or the elderly interact and communicate with other people in the virtual environment, while monitoring them in both their real and virtual lives. This adds a social dimension to the task of the robot, making it useful for minimizing loneliness, improving health and social care, and even providing some affection in the process (Nourbakhsh et al., 1999).


In addition, a robot that is designed, controlled, and tested in a virtual environment offers physically distant researchers the possibility of contributing to the creation of new prototypes in a more constructive, efficient, and cost-effective way. The same environment can easily be used to commercialize the product by presenting it to potential customers in countries around the world, all at a fraction of the traditional investment in sales and marketing. The open nature of virtual environments, continuously connected to the internet, offers huge potential to make a product known to larger audiences than those previously reached via more traditional advertisement mechanisms, and with a significant reduction in the associated costs.

It would be naive to say that the current level of development of virtual worlds offers a substitute for more traditional ways of developing, testing, commercializing, and using a product. However, the rapid growth of virtual and mixed reality, and the increasing interest of the research community and the general public, can turn it into a viable economic alternative with which to compete in a globalized world. In the next sections we present our thoughts on some of the current challenges that this research area offers, along with ideas on how to overcome them. In this paper we contribute to the scientific community our thoughts on the possibility of creating a standard platform that enables seamless interaction between real robots and virtual worlds.
We hope that these ideas will, in the future, turn into applications that not only address the challenges in communication and interaction between such systems (robots and virtual worlds), but also help to improve the standard of living with tangible products and services.

Fast-Paced Development, Technical Isolation, and Standardization

We believe that the exciting research and commercial opportunities offered by the integration of real robotic agents and popular virtual reality environments are hindered by the lack of standardization in the interaction between them. The fast pace of virtual world and robot technology development adds a further aggravating component, which makes standard communication and interaction mechanisms more of a necessity than a simple feature of these systems.

Initial efforts in this area have tried to integrate tangible robotic spaces (a real robot and its surrounding environment) with a virtual world, focusing on multiple-user robot control through avatars (Syamsuddin et al., 2008). Other approaches investigate the effects of social interaction and cooperation between humans and robots in scenarios that simulate reality but are impractical to replicate in the real world, such as a simulation of the potentially unsafe situations that can arise when humans and robots interact in a home environment (Prattichizzo, 2009). These approaches, however, are mostly technically isolated from one another, in the sense that the mechanisms that allow interaction between the virtual environment (simulators, virtual worlds, etc.) and the real agents have been constructed in an ad-hoc manner using heterogeneous technologies and, in some cases, neglecting the possibility of a common platform for their integration.
In sum, most existing approaches do not concern themselves with one fundamental question: is it possible to build a common platform that allows the seamless integration (to a certain degree) of heterogeneous robotic hardware and virtual environments, such that sensors and actuators can be monitored and controlled across software and hardware platforms?



We are convinced that it is not only possible but necessary to produce such a platform, which will enable the "next step" in the fusion of virtual and real worlds. Moreover, this platform can easily turn into a benchmark that allows researchers and industry to compare and judge the quality and performance of the different hardware and software available.

Building a Standard for Communication and Interaction Between Virtual and Real Worlds

In order to build a standard for the interaction between real robots and virtual worlds, several challenges must be addressed:

• Determine the virtual worlds and robotic hardware that are suitable for standardization. With innovative robotic systems appearing almost every week, and virtual worlds evolving at a rapid pace, the ideal of producing a platform that can interconnect any robot with any environment is extremely difficult, if not impossible, to attain. There is a need, then, to determine the appropriate hardware and software on which to base a standard for connection and interaction. Characteristics on which these components must be judged include public acceptance, industry and research community support, and the technology used to build them.

• Develop a software platform that allows the monitoring and control of sensors and actuators. Such a platform must allow the connection (ideally in a plug-and-play fashion) of heterogeneous robotic hardware with several heterogeneous virtual worlds. It must also provide for the transmission and visualization of monitoring and control information between the virtual reality and the real agent, as well as the security mechanisms that make for safe operation of the real machines.

• Integrate the three components (virtual worlds, communication/interaction software, and robotic hardware) into a cohesive and robust structure. Reliability and consistency are critical issues in an application that is networked by nature. Real-time and information-transmission issues also come into play when building a software platform that must be functional but, at the same time, usable.

Conclusion

Virtual worlds offer exciting opportunities for robotics; however, these are currently hindered by the lack of a common platform on which the heterogeneous robotic hardware and the different virtual environments available can integrate. We believe that the creation of a standardized mechanism for communication and interaction between real robots and virtual worlds is a crucial step in the development of the next generation of technology and applications in which robots can show their true potential. More concretely, this will allow us to build a general platform that can be used as a benchmark on which researchers and industry can test and evaluate the different software and hardware available. We also anticipate that further development of this technology will provide interesting mechanisms for developing and testing new products, in particular robots, ensuring their usability, acceptability, and reliability in different areas of application such as medical and health care, telerobotics, and augmented and mixed reality.



Finally, the introduction of this technology into everyday life will allow end users (the grandmother who lives alone at home, or the child who wants to meet friends living many kilometers away) to experience a new form of social interaction: they will not be isolated at home; instead, they will be able to communicate with a real "friend," a robot that can assist them. In many cases this will result in a direct improvement in quality of life. For elderly people struggling with loneliness or illness, for example, a robotic device can be used as a proxy to guide them on a journey through virtual worlds where they meet family and make new friends. At the same time, the robotic device can monitor their health and make sure that an appropriate response is given in case of emergency. This is a critical capability, as noted by Holzman (1999): "Quality medical care depends on prompt, accurate recording, communication, and retrieval of patient data […] In emergency medicine, such information can make the difference between life and death" (p. 13).




Bibliography

Holzman, T. G. (1999). Computer-human interface solutions for emergency medical care. ACM Interactions, 6(3), 13-24.

Nourbakhsh, I. R., Bobenage, J., Grange, S., Lutz, R., Meyer, R., & Soto, A. (1999). An affective mobile robot educator with a full-time job. Artificial Intelligence, 114(1-2), 95-124.

Prattichizzo, D. (2009). Robotics in Second Life. IEEE Robotics and Automation Magazine, 16(1), 99-102.

Syamsuddin, M. R., Mayangsari, M. N., Juasiripukdee, P., & Kwon, Y. M. (2008). Trying to integrate ubiquitous robotic space and metaverse. In Proceedings of the Workshop on Virtual Worlds, Collaboration, and Workplace Productivity (CSCW 2008), San Diego, California.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

An Experiment in Using Virtual Worlds for Scientific Visualization of Self-Gravitating Systems Will Meierjurgen Farr, Massachusetts Institute of Technology; Piet Hut, Institute for Advanced Study; Jeff Ames, Adam Johnson, Genkii

Abstract

In virtual worlds, objects fall straight down. By replacing a few lines of code to include Newton's gravity, virtual world software can become an N-body simulation code with visualization included, in which objects move under their mutual gravitational attraction like stars in a cluster. We report on our recent experience of adding a gravitational N-body simulator to the OpenSim virtual world physics engine. OpenSim is an open-source virtual world server that provides a 3D immersive experience to users who connect using the popular "Second Life" client software from Linden Lab. With the addition of the N-body simulation engine, which we are calling NEO, short for N-Body Experiments in OpenSim, multiple users can collaboratively create point-mass gravitating objects in the virtual world and then observe the subsequent gravitational evolution of their "stellar" system. We view this work as an experiment examining the suitability of virtual worlds for scientific visualization, and we report on future work to enhance and expand the prototype we have built. We also discuss some standardization and technology issues raised by our unusual use of virtual worlds.

Keywords: scientific visualization; simulation; OpenSim; n-body.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research - VWs for Scientific Visualization 4

An Experiment in Using Virtual Worlds for Scientific Visualization of Self-Gravitating Systems By Will Meierjurgen Farr, Massachusetts Institute of Technology; Piet Hut, Institute for Advanced Study; Jeff Ames, Adam Johnson, Genkii

All too often, visualization is an afterthought in physics simulation. Producing proper visualization tools is complicated (often more complicated than producing the simulation to be visualized) and uninteresting from the standpoint of physics. Here we present a simple example of a visualization system for N-body gravitational dynamics built on top of the virtual world system OpenSim (The OpenSim Developers, 2008), an open-source version of the software used in Second Life.

From the point of view of an astrophysicist dealing with gravitational N-body simulations, virtual worlds such as OpenSim are N-body simulators with two extra features: a surprisingly elaborate graphics module and a bug in the equations of motion. As to the latter: whereas objects should attract each other via Newton's inverse-square law of gravity, objects in OpenSim fall straight down. That "bug," however, is easily fixed. We have done so, and we discuss our first results in this paper.

Our visualization system, NEO, or N-Body Experiments in OpenSim, runs within the OpenSim server. We allow users connected to the server to designate objects within the virtual world as "physical." Physical objects interact gravitationally as point masses. A small amount of modified code in the OpenSim physics engine tracks the motion of physical objects under their collective gravitational forces, and OpenSim displays their motion along with the other objects and users in the virtual world. OpenSim also provides facilities for users to communicate with each other using text or voice, allows them to trade files or in-world objects, and allows easy creation and manipulation of in-world objects.
With a few hundred lines of code added to the OpenSim physics engine, we have a 3D collaborative visualization system for experiments with point-mass gravitating systems.

The OpenSim Platform

OpenSim is an open-source C# program that implements the Second Life virtual world server protocol. Running the popular Second Life client software from Linden Lab, users can connect to a computer, or grid of computers, running the OpenSim server and enter a virtual world. Within this world, users can share media (text, pictures, and video), interact with each other via text chat and voice communication, and create and share 3D objects in the world itself. The interaction occurs in real time: at each moment, every logged-in user views the current 3D state of the server world, and different users can manipulate this state simultaneously.

The 3D, interactive nature of the OpenSim virtual world makes it an ideal substrate for collaborative visualization of scientific results and simulations. The "hard" parts of collaborative visualization unrelated to the science (interactivity, 3D display, controls, etc.) are handled by the pre-existing OpenSim engine, leaving scientists free to focus on the best way to represent their scientific data within the virtual world.



Journal of Virtual Worlds Research - VWs for Scientific Visualization 5

Our gravitational simulation code, which we will discuss more fully in the next section, lives in the physics engine of OpenSim. In "vanilla" OpenSim, the physics engine is responsible for tracking the positions and velocities of primitive objects and users, implementing effects such as falls, tumbles, and collisions. The server requests the positions and velocities of all objects under its control from the physics engine ten times every second; clients wishing for a higher frame rate use the velocities of the objects to extrapolate their positions at intermediate points.

The Newtonian Physics Engine

The physics engine of OpenSim handles the updating of the positions and velocities of all objects and avatars in the virtual world. Though velocity information is not strictly necessary for a client to render a scene, it is used by the client to extrapolate the positions of prims and characters between updates from the server. The standard physics engine is extremely simplistic. Prims are divided into two classes: physical and unphysical. Physical prims feel the effects of a uniform gravitational field (that is, they fall straight down just as physical objects do on Earth), while unphysical prims simply move in straight lines with constant velocity. Both types of prims can collide with other solid objects.

We have modified the standard physics engine of OpenSim using a plugin. In OpenSim, different servers correspond to different regions in the virtual world; server administrators can choose to replace the standard physics engine with our plugin at server-initialization time, on a region-by-region basis. The modified physics engine treats each physical prim as a gravitating point-mass in space; other objects are handled by dispatching to the standard physics engine.
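The core of the modified engine is a pairwise softened force calculation over all physical prims. The following is a minimal sketch in Python (the actual plugin is written in C#, and the function and variable names here are ours, not OpenSim's):

```python
def accelerations(masses, positions, eps):
    """Pairwise softened Newtonian accelerations with G = 1.

    Each 'physical' prim is treated as a gravitating point mass;
    the softening length eps keeps close encounters finite.
    """
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [positions[j][k] - positions[i][k] for k in range(3)]
            # Plummer-softened inverse-cube distance: (r^2 + eps^2)^(-3/2)
            inv_r3 = (sum(c * c for c in d) + eps * eps) ** -1.5
            for k in range(3):
                acc[i][k] += masses[j] * d[k] * inv_r3  # pull i toward j
                acc[j][k] -= masses[i] * d[k] * inv_r3  # and j toward i
    return acc
```

The double loop makes the O(N²) cost of each physics step explicit, which is why the engine tops out at a few tens of bodies on desktop hardware.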
We have implemented a variety of integration algorithms for time-advancing the resulting gravitational system in the Newtonian physics engine: the Hermite algorithm (Makino, 1991) (the default), kick-drift-kick and drift-kick-drift leapfrog, and the GL3 algorithm (Farr & Bertschinger, 2007).¹

The current simulation is rudimentary. We choose units so that G = 1, M = ∑ m_i = 1, and the total energy of the system is E = −1/4. (These are the so-called "standard units" of Heggie & Mathieu, 1986.) In these units, the average inverse pair-wise separation of bodies in an equal-mass system is ⟨1/r_ij⟩ = 1. The pair-wise gravitational potential is softened to prevent extreme two-body interactions that destroy the accuracy of the integrator:

V(r) = − m1 m2 / √(r² + ε²),

where ε = 4/N and N is the number of bodies in the system. The softening ensures that the maximum two-body interaction potential, V_max ≈ m1 m2 / ε = 1/(4N), is of the same order as the typical equipartition kinetic energy of a body in equilibrium, T ≈ |E|/N = 1/(4N). Softening

¹ For an introduction to writing N-body code, see Hut & Makino, 2009, especially "Moving Stars Around." For general background concerning self-gravitating systems, see Heggie & Hut, 2003, and for background concerning N-body algorithms, see Aarseth, 2003.



prevents any individual encounter between two bodies from changing the trajectories of either body too much, greatly simplifying the implementation of the simulation.

In these natural units, the typical time for a body to cross from one side of a system to the other is of order unity. The size of the system is also of order unity. Using these dimensionless units, there is no need for conversion between "server time" and N-body time, or between "server length" and N-body length, so the user can see the system evolve on realistic time- and length-scales. (For example, the two-body relaxation timescale for a system with N ~ 30 is about t_cr N / (0.1 ln N) ~ 250 seconds of real time.) We can simulate about N ~ 50 bodies in this fashion on typical modern desktop hardware before the server cannot keep up with the necessary frame rates to the connected clients. Though 50 bodies is small by modern simulation standards, such a system is sufficient to illustrate most of the physical behaviors important in larger systems before core collapse—evaporation, two-body relaxation, mass segregation, etc. We could increase the maximum number of bodies that can be simulated by not demanding that the simulation and display remain synchronized, at the cost of introducing significant complexity in the code.

Examples

This section presents, as an example, screenshots of an interaction in OpenSim simulating about 30 bodies starting from a cold initial condition. In Figure 1, the avatar sets up an initial condition by creating a group of objects (by holding "shift" while dragging the movement bars, a pre-existing group of bodies can be copied). Figure 2 captures the system a moment after the avatar has selected the "Physical" box; very soon after (on the free-fall timescale, which is of order one second in the simulator), in Figure 3, the two groups of bodies in the initial condition quickly collapse, forming the two clumps visible in the figure.
The simulation ends after about a minute of simulator time (a few tens of crossing times) in Figure 4, with a collapsed, nearly spherical cluster and a few almost-ejected stars in loose orbits.
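The cold-collapse behaviour shown in the figures can be reproduced outside OpenSim in a few dozen lines. The sketch below is a Python illustration rather than the engine's actual C# code (all names are ours); it uses the kick-drift-kick leapfrog mentioned earlier, the softening ε = 4/N described above, and a cold (zero-velocity) initial condition:

```python
import random

def accel(m, x, eps):
    """Softened pairwise accelerations, G = 1."""
    n = len(m)
    a = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [x[j][k] - x[i][k] for k in range(3)]
            inv_r3 = (sum(c * c for c in d) + eps * eps) ** -1.5
            for k in range(3):
                a[i][k] += m[j] * d[k] * inv_r3
                a[j][k] -= m[i] * d[k] * inv_r3
    return a

def kdk_step(m, x, v, eps, dt):
    """One kick-drift-kick leapfrog step (second order, symplectic)."""
    a = accel(m, x, eps)
    for i in range(len(m)):
        for k in range(3):
            v[i][k] += 0.5 * dt * a[i][k]   # half kick
    for i in range(len(m)):
        for k in range(3):
            x[i][k] += dt * v[i][k]         # full drift
    a = accel(m, x, eps)
    for i in range(len(m)):
        for k in range(3):
            v[i][k] += 0.5 * dt * a[i][k]   # half kick

# Cold initial condition: N equal masses at rest, scattered in a unit box
# (total mass M = 1, as in the standard units used by the engine).
random.seed(1)
N = 30
m = [1.0 / N] * N
x = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(N)]
v = [[0.0, 0.0, 0.0] for _ in range(N)]
for _ in range(200):          # integrate for roughly one free-fall time
    kdk_step(m, x, v, eps=4.0 / N, dt=0.005)
kinetic = 0.5 * sum(m[i] * sum(c * c for c in v[i]) for i in range(N))
```

Starting from rest, the bodies fall toward their common centre of mass and the kinetic energy grows from zero, which is the collapse visible in Figures 2 and 3; the softening keeps close passages well-behaved.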

Figure 1: Establishing an initial condition.




Figure 2: When the user selects the "Physical" box, the system is scaled into standard units and the simulation begins.

Figure 3: Two groups of bodies in the initial condition collapse into two clumps on the free-fall timescale.




Figure 4: After a few tens of crossing times (a few tens of seconds real time), the system settles down into a spherical cluster. Some nearly-ejected stars can be seen orbiting the central mass of the cluster.

Though this example shows only one avatar in view on a remote "desert island," a similar simulation could, in principle, take place anywhere on an OpenSim grid, and any users present could collaborate to construct the initial conditions, discuss the outcome with other avatars, save data from the simulation, etc.

Limitations and Future Work

This section discusses some of the limitations of the current simulation engine, and highlights future work that promises to resolve them. The biggest limitation of the current engine is the size of the simulations it can run. A system of 50 bodies is sufficient to illustrate the phenomena that are important in physically relevant simulations, but to study physical systems, simulations must be much larger. The cost of a simulation of a quasi-equilibrium cluster of gravitating bodies over an evolutionary timescale grows approximately as N³; it is not reasonable to expect to perform physically relevant simulations in real time on a virtual world server. To address this limitation, a collaboration between the National Institute of Informatics and the National Astronomical Observatory of Japan (The AstroSim Project, 2009) is preparing a visualizer that allows users to re-play, inside a virtual world, a simulation conducted on a more powerful computer; this harnesses the speed advantages of specialized hardware (e.g., the GRAPE-DR Project, 2008) for the simulation, and the collaborative advantages of virtual worlds for the visualization.

Even with the size limitations inherent in the server-based simulation engine we describe, it can still be a useful tool for education and for enhanced understanding of the microphysics of self-gravitating systems. It would be more useful for these purposes, however, if it had the capability to start systems in more varied initial conditions. Currently, the system only permits



cold (i.e. zero velocity) initial conditions for systems of bodies that must be constructed by hand in the virtual world. Ideally, it would permit the specification of arbitrary initial conditions (perhaps via a notecard) and the quick creation of various analytically determined distributions of stars. Work is in progress to permit this. Finally, further control over the simulation would be desirable. At a minimum, one should be able to pause and restart the simulation easily (currently, this requires de-selecting the physical property for all bodies in the simulation). It would also be nice to add discrete physical events by hand (i.e. insert or remove stars from the simulation, change the mass of stars which "go supernova," etc.). Work is in progress to add a simplified control panel that appears in the virtual world.

The work reported here was carried out during the summer of 2008 in Tokyo at the National Astronomical Observatory of Japan. Since then, we have continued our work in collaborations that involve several co-workers at the National Institute of Informatics, also in Tokyo, and other co-workers whom we meet with regularly in the Meta-Institute for Computational Astrophysics (MICA) in Second Life (see http://www.mica-vw.org/; Djorgovski et al., 2009a & 2009b; Nakasone et al., 2009). Most recently, we have been conducting a weekly workshop on the use of virtual worlds for stellar dynamics, as a collaboration between MICA and Kira (http://www.kira.org/; see http://www.kira.org/index.php?option=com_content&task=view&id=124&Itemid=154).

Issues for Technology and Standardization

Our unusual use case and implementation techniques for the N-body physics engine raise a number of issues related to technology and standardization. In this section, we attempt to identify some of these issues. We discuss them in the context of our N-body physics engine, but they would be relevant for any scientific simulation conducted in a virtual world.
The data in our simulation are unusual for a virtual world. Instead of complicated, unmoving structures, we have simple structures executing complicated motion. How can we store the history of the bodies' motion in the virtual world? Would it be possible to represent that history itself as an object in the virtual world? Could avatars trade N-body systems with each other? What about the transfer of systems from one virtual world to another?

If we wish to visualize multiple N-body systems, for example while teaching a class, we need a way to ensure they don't interfere with each other. We may also wish to verify that a simulation has really been isolated during its run, without the rest of the world exerting additional influences on the motions of the stars. This may call for a way to isolate different parts of a virtual world from each other for a time, to minimize the effects of one on the other.

The simulations we are running can be arbitrarily demanding on the server CPU (to simulate N bodies takes time proportional to N²). This allows for the possibility of an inadvertent overload on the server, which could in practice resemble a denial-of-service attack against the virtual world server. Should we simply limit the number of objects a user is allowed to create and simulate? Degrade the quality of the simulation dynamically according to server load? Treat server CPU as a resource that users must request and manage explicitly?
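One lightweight answer to the load question, sketched here purely as an illustration (this is not part of our engine, and all names are hypothetical): measure each physics step against its real-time budget and shed load when the step overruns, for example by coarsening the timestep and refusing new physical bodies.

```python
import time

FRAME_BUDGET = 0.1  # seconds: the server polls the physics engine 10x/second

def guarded_step(step_fn, state, dt):
    """Run one physics step, shedding load if it overruns its budget.

    step_fn and the shedding policy are illustrative; a real server
    would hook this into the engine's update loop.
    """
    start = time.perf_counter()
    step_fn(state, dt)
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET:
        # Simplest policy: stop accepting new physical bodies, and
        # give existing ones a coarser (cheaper) timestep.
        state["accepting_new_bodies"] = False
        state["dt"] = min(2 * dt, FRAME_BUDGET)
    return elapsed
```

More graceful policies (per-user CPU quotas, dropping the simulation's accuracy order) would fit the same hook.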



Our application is heavily customized, targeting only OpenSim. Currently, no standard interface exists for writing plugins to modify the behaviors of the common components of virtual worlds. Should such interfaces be standardized? What would such a standard look like? What would be the risks in opening up the infrastructure of the virtual world to external modification? Similarly, the procedure to install our application in an OpenSim instance is unique. Could this be standardized? If so, would a common installation procedure apply only to OpenSim, or to other virtual worlds as well? A standard plugin installation procedure could make these sorts of modifications available to more users, although most OpenSim server administrators are probably sophisticated enough not to be deterred by a slightly customized installation procedure. As discussed above, the poor scaling of the computational cost of an N-body simulation with the number of bodies probably requires that any large simulations be performed on a different computer than the virtual world server. The results of such simulations can be passed to the server, which can then provide them to the attached clients for presentation. However, even though the computational load on the server is minimized in this architecture, the data load could be considerable, particularly when many clients demand the simulation data. Should standards be created allowing a server to refer clients to another source for some of the data they are to display? What about a standard for clients sharing data among themselves to reduce the load on the server and the external data source? What are the possible security implications? This bandwidth problem is not unique to scientific simulation applications for virtual worlds, but such applications often deal with exceptionally large datasets, and the problem is therefore relatively more important for them.
While a great amount of thought has been put into technology and standards for the typical uses of virtual worlds, the types of uses we discuss here are just beginning to be explored. For the reasons discussed above, we think the future holds great promise for the use of virtual worlds as visualization and simulation platforms. As such uses become more common, the issues raised in this section—and others—must be addressed.

Conclusion

We have reported on our experience adding a gravitational N-body simulator to OpenSim. The simulator exists as a modification of the standard physics engine of OpenSim, and is notable for its simplicity. Nevertheless, the resulting simulation environment can "piggyback" on all the collaboration features of the OpenSim virtual world to provide a multi-user, interactive environment. We anticipate that this sort of rich collaboration is the future of scientific visualization, and we argue that virtual worlds provide an ideal substrate on which to base such visualization systems. The work described in this paper has barely scratched the surface of the capabilities of such a system, yet it provides a compelling example of the suitability of this approach for creating visualization tools by a quick retooling of the existing infrastructure of a virtual world.

Acknowledgements

Will Farr and Piet Hut express their thanks to the NAOJ for inviting them as visitors during the summer of 2008, when the work described here was carried out. We also want to thank Jun Makino for being our host during that period. In addition, we thank Helmut Prendinger and Ken Miura from NII for several stimulating conversations.



Bibliography

Aarseth, S. 2003. Gravitational N-Body Simulations. Cambridge University Press.

Djorgovski, S. G., Hut, P., McMillan, S., Vesperini, E., Knop, R., Farr, W. M., et al. 2009a (Forthcoming). Attracting Scientists to Virtual Worlds as a New Scholarly and Research Environment. Journal of Virtual Worlds Research.

Djorgovski, S. G., Hut, P., McMillan, S., Vesperini, E., Knop, R., Farr, W. M., et al. 2009b. Exploring the Use of Virtual Worlds as a Scientific Research Platform: The Meta-Institute for Computational Astrophysics. In J. Sablatnig (Ed.), Proceedings of the FaVE 2009 Meeting. Springer Verlag.

Farr, W. M., & Bertschinger, E. 2007. Variational Integrators for the Gravitational N-Body Problem. Astrophysical Journal, 663, 1420.

Heggie, D., & Hut, P. 2003. The Gravitational Million-Body Problem. Cambridge University Press.

Heggie, D., & Mathieu, R. 1986. Standardised Units and Time Scales. In S. McMillan & P. Hut (Eds.), The Use of Supercomputers in Stellar Dynamics (p. 233). Springer.

Hut, P., & Makino, J. 2009. The Art of Computational Science. Retrieved April 2009, from http://www.artcompsci.org/

Makino, J. 1991. Optimal Order and Time-Step Criterion for Aarseth-type N-Body Integrators. The Astrophysical Journal, 369, 200–212.

Nakasone, A., Holland, S., Prendinger, H., Makino, J., Hut, P., & Miura, K. 2009. ASTROSIM: Collaborative Visualization of an Astrophysics Simulation in Second Life. In preparation.

The AstroSim Project. 2009. AstroSim. Retrieved 2009, from GlobalLab: http://www.prendingerlab.net/globallab/?page_id=17

The GRAPE-DR Project. 2008. GRAPE-DR. Retrieved 2008, from http://grape-dr.adm.s.u-tokyo.ac.jp/

The OpenSim Developers. 2008. Open Simulator. Retrieved 2008, from http://opensimulator.org/wiki/Main_Page



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Piracy vs. Control: Models of Virtual World Governance and Their Impact on Player and User Experience Melissa de Zwart, University of South Australia

Abstract

Current models of governance of virtual worlds evolved from the Terms of Service developed by virtual world content creators, based upon intellectual property license models. Increasingly, however, virtual world providers seek to accommodate the needs and interests of both owners and users in order to respond to the evolving needs of the virtual world. At the same time, domestic governments are taking a greater interest in the activities within virtual communities. This article explores a range of governance models, and the competing interests at play within the virtual communities managed by such models, in order to consider whether there is a universally adaptable governance model. In particular, it analyses the role and effectiveness of the Council of Stellar Management, the player representative committee in EVE. The article concludes that national governments should not impose significant regulation upon virtual communities, but rather should encourage the development and growth of such communities by prescribing minimum standards, such as standardisation and transparency of Terms of Service. Matters occurring within the virtual world environment should be dealt with in accordance with established community norms and rules. Therefore, role-play environments such as EVE should be allowed to encourage piratical and outlaw behaviour without offending domestic laws.

Keywords: governance; virtual worlds; regulation; laws; standards.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Piracy vs. Control 4

Piracy vs. Control: Models of Virtual World Governance and Their Impact on Player and User Experience Melissa de Zwart, University of South Australia In February 2009 the region of Delve, long a bastion of relative peace and prosperity, became a savage battleground, as its sovereignty holders, the KenZoku alliance, struggled to restore order and authority following near disaster when their alliance was sabotaged from within. This sabotage occurred due to the defection of one of the most senior members of the Band of Brothers alliance, arguably the most powerful alliance in EVE Online, and the predecessors of KenZoku. The Band of Brothers alliance (BoB), consisting of an alliance of multiple corporations and involving thousands of players, held significant power in the EVE environment. Yet BoB was effectively destroyed when a director of BoB defected to Goon Swarm, the arch rival of BoB, taking with him resources, money, and equipment, and crucially, the ability to renew the alliance name, ‘Band of Brothers,’ which disbanded the alliance under the rules of the game. BoB reformed as the KenZoku alliance, but its long-held sovereignty over Delve was threatened and ultimately diminished, losing thousands of hours of player time and money invested in the alliance and the region. As devastating as this was for the members of BoB, these events were legal, in the sense that they were not in breach of the rules of EVE Online. In fact, the operators of EVE Online, CCP Games (CCP), celebrated the shakeup in territorial sovereignty, so much so that some suspected them of engineering it to disturb the entrenched balance of power. People unfamiliar with the EVE environment questioned why CCP Games did not step in to restore the alliance name and undo the damage caused by the defector. However, duplicitous, underhand practices are celebrated and rewarded in EVE. 
Among the various game play activities available in EVE are opportunities for corporations to engage in theft, assassination, ransom, and piracy.¹ While piracy is deemed 'criminal,' leading to a negative personal security status, as the EVElopedia notes, "the criminal nature of the pirate is fully supported by CCP and the in-game mechanics" (EVElopedia, http://wiki.eveonline.com/wiki). EVE hit the headlines again in July 2009, following the theft of 200 billion Interstellar Kredits (ISK) from Ebank (a major in-world bank holding, at that time, 8.9 trillion ISK in deposits) by its CEO, causing a run on the in-world bank (Thompson, 2009). Again, many asked how this could have been allowed to happen, although similar things seemed to be reported daily in the news regarding real-life investment scams. Ricdic, the perpetrator of the theft, was banned from EVE for trading in-game currency for real-world currency in breach of the EVE Online Terms of Service. This type of disruptive behaviour is unlikely to occur in an MMORPG such as World of Warcraft, where the operators of the environment maintain a much tighter control over game developments and story integrity. However, EVE's owners deliberately foster a player-run universe where almost anything goes. Had Ricdic not sold the ISK outside of EVE, he would not have had his account cancelled, and punishment for the theft would have been left to his fellow players. These events serve to highlight the difficulty of drawing clear rules regarding what sort of conduct should be acceptable within virtual worlds, even with respect to conduct that in the offline world is plainly unacceptable, such as murder, theft, and embezzlement. They also highlight the difficulty of applying objective standards of conduct within online communities, which may have vastly different concepts of acceptable behaviour. CCP has deliberately created and sought to maintain an environment that fosters brilliant, yet underhanded and immoral, game tactics.
Players who are not happy with this

¹ See, for example, the September 2005 assassination of Mirial, CEO of the Ubiqua Seraph corporation: 'Murder Incorporated', PC Gamer, 29 January 2008, http://www.computerandvideogames.com/article.php?id=180867&site=pcg, accessed 14 July 2009. This act included the capture of 20 billion ISK (Interstellar Kredits) of assets and the destruction of assets worth a further 10 billion ISK, at the time calculated at approximately US$16,500.




degree of lawlessness are forced to ply their galactic trade elsewhere. However, as will be discussed below, CCP still draws a clear distinction between in-world lawlessness and real world governance through a rigorous enforcement of the Terms of Service. As the size of online communities within gaming spaces continues to grow and the size of investment in such spaces grows with it, there will be increased interest from domestic governments in how best to regulate such spaces. This move toward regulation may not always take account of the needs or interests of the participants in such environments. In fact, there may be little understanding of the varying cultures of different MMORPGs and virtual social platforms, such as Second Life, and the in-world regulation that already occurs within such spaces. This article will examine the important influence of game design and game governance on the nature of the player's experience. It will identify and analyze current governance structures and the interests of key governance stakeholders. Recognizing the increased call for in-world regulation and the impact that this may have on the player experience, it will conclude that any default rules developed for governance of virtual worlds will need to be sensitive to the community norms at play within that environment, reflecting the needs of all governance stakeholders. Drawing upon a number of examples, it will explore the need to acknowledge the particular nature of the world under consideration and discuss ways in which this may be respected and protected by particular governance arrangements. While this article will focus predominantly on online gaming environments, particularly MMORPGs, it will also draw comparisons with how governance issues will affect social virtual worlds like Second Life.
It will consider the relationship between real world laws, inbuilt game standards, and the players’ own negotiated understanding of the world with which they are engaged and how this may change over time, according to gaming experiences and investment in the game world. This article will conclude with a reflection upon the relationship between the underlying governance structures of the virtual world and the developing nature of that world and make recommendations regarding the future pathway of law reform in this area.

Governance Structures

Currently, the key tool for governance of MMORPGs is the End User License Agreement (EULA) or Terms of Service (TOS). This mode of governance derives from the fact that online environments are essentially creations of intellectual property and thus are the copyright of the game designer. The clickwrap license, now the ubiquitous online contracting mechanism, evolved from the shrink-wrap license, used to facilitate software licenses in the days of off-the-shelf purchases of software. A clickwrap license enables the owner of the intellectual property product to license a user without individual negotiation of the terms. It is also an extremely powerful mechanism, as breach of the terms of the license may leave the person in breach liable for infringement of copyright as well as breach of contract. See, for example, the recent litigation between Blizzard, the owners and operators of World of Warcraft, and MDY Industries, the creators and distributors of Glider, a program which, when used in conjunction with WoW, facilitated automated play. The US District Court, District of Arizona, held that use of Glider was outside the scope of the copyright license granted to users by the WoW Terms of Use, leading to the conclusion that use of Glider was a breach of copyright by end users and a breach of the license. This case will be discussed further below. Thus the key influences in governance of virtual worlds to date have been contract, largely based on issues of intellectual property ownership and use, and content regulation, as governments seek to restrict content which is overtly sexual or violent (particularly in the US) or in breach of human rights guidelines as racist or demeaning (Europe). These are two important, but quite narrow, dimensions of community law making. As communities have become more complex, the TOS have been supplemented by a range of other policies and rules.
In Second Life, for example, members are required to abide by the Terms of Service and Community Guidelines as well as various Linden Lab decrees which are issued from time to time in response to particular issues. For example, the ban on “broadly offensive content”




and the ban on in-world banks were originally promulgated via the Second Life blog.² These two changes were contrary to what many of the residents considered was the unfettered freedom that Linden Lab had originally promised them. This tension between community and controller is common as a platform matures and the developer seeks to accommodate the interests of the largest number of users (or possibly potential users), sometimes at the expense of the early adopters. Second Life had a flourishing community of role-players involved in a range of lifestyle and sexual practices, some of which would be considered offensive by many, such as the Gorean community, which bases itself on the writings of John Norman, and in which women are slaves. Such users felt they were free to explore their sexuality in the 18+ world of Second Life. However, the open display of such content and practices was not desirable in the more commercialized world of Second Life, which was seeking to attract corporate sims. Rules can be implemented and enforced by the code of the gaming experience, for example by prohibiting the player from engaging in certain activities. These have been described as the physics of the environment (Bartle, 2006). The game narrative can also shape the rules of the game. For example, in EVE, each race is imbued with certain qualities not possessed, or possessed to different levels, by the other races. These limitations are also coded into the gaming environment. Most rules are coded into the gaming environment to make the experience more pleasurable for the player. A game that is too easy quickly loses its appeal. Richard Bartle, in particular, has argued for the right of the game designer to retain a god-like authority over the environment in order to ensure that the integrity, and hence the enjoyment, of the gaming experience is maintained (2006). Of course, defining the precise scope of the rules of the game can mean different things to different people.
As Mia Consalvo has analyzed at length, players define the gaming environment on a broad spectrum and, therefore, the range of activities considered acceptable within that environment is equally diverse (Consalvo, 2007). Cheating has different meanings for different players. For some players, rules are only rules where they are enforced by code, meaning that circumvention of any rule that it is possible to break should not be considered a breach of the game rules. For others, the gaming environment and the code merely provide a platform for the exploration of functionality. As most MMORPGs and particularly social virtual worlds are designed to expand, there may be gaps left in the program design. These gaps are areas for exploration and creativity for those players so inclined. For players with a hacker orientation, exploitation of these spaces is part of the game. One of the most successful levels of governance in online communities is the observation of rules imposed by the community itself. In fact, the most influential rules may be those developed and enforced by the social contract of the community, whether at a meta level or through rules imposed and observed by smaller communities. As Humphreys (2008) has observed, game developers will frequently encourage players to self-regulate within the gaming world, coding the game in a way that trains and rewards players to engage in certain behaviour. Players who are disruptive to the established social norm are treated as outcasts and encouraged to leave the game world. A recent example of this is provided by the controversy unleashed by the "study" conducted in City of Heroes by David Myers, whose avatar, Twixt, was ostracised by the City of Heroes community for a range of behaviour that was considered in breach of game etiquette.
According to Myers (2008), Twixt engaged in three types of behaviour which, whilst legal under the rules of the game, were deemed unacceptable by the gaming community: teleporting enemy characters into a group of hostile non-player characters, whereby the opponent is attacked by the drones and destroyed ("droned"); refusing to cooperate with players engaged in farming, by engaging them in PvP combat; and refusing to participate in the social engagement of the game, by engaging in solo play. All of these behaviours were the subject of

[2] Second Life Blog, ‘Keeping Second Life Safe, Together’, https://blogs.secondlife.com/community/features/blog/2007/06/01/keeping-second-life-safe-together, 1 June 2007, accessed 16 July 2009, and ‘New Policy Regarding In-World "Banks"’, https://blogs.secondlife.com/community/features/blog/2008/01/08/new-policy-regarding-in-world-banks, 8 January 2008, accessed 16 July 2009.



Journal of Virtual Worlds Research - Piracy vs. Control

extensive discussions on the public forums and of personal abuse on the open channel. Myers’ paper has generated a great deal of controversy regarding his methodology; however, it neatly demonstrates the potential consequences for a player whose view of the rules diverges from that of the majority, where the physics, and indeed the laws (in the guise of the EULA), leave scope for different interpretations. Myers’ main point is that he was playing in accordance with the rules of City of Heroes, while being punished by other players for not abiding by their gloss on those rules. As Fairfield (2008b) observes, “in player versus player actions in which norms conflict with EULA provisions, the norms often prevail.” For fear of regulatory intervention that may change the nature of the playing experience, virtual world inhabitants have generally been keen to insulate their worlds against regulation by domestic governments. However, this scenario changes when the virtual world operators themselves are at odds with their citizens, which can happen due to a shift in attitudes of either the community or the operators. For example, as noted above, Linden Lab, concerned about the negative press it was attracting and conscious of its desire to appear consumer friendly, changed (or “clarified”) its policy regarding offensive content in the face of extensive media coverage focusing upon age-play within Second Life.[3] This led to avatar campaigns for free speech in Second Life, drawing upon rights protected under the US Constitution but clearly at odds with the commercial relationship created by the TOS, which provides only a limited, fully revocable license to use Second Life while in compliance with the TOS. Whilst users of Second Life may view it as a “community,” it remains a commercial platform provided by a corporation that can prescribe the rules for the use of the platform.
Another example is the promulgation in March 2009 of the ‘World of Warcraft User Interface Add-On Development Policy’ by Blizzard (http://www.worldofwarcraft.com/policy/ui.html). After years of an ambivalent attitude towards add-ons and other mods, Blizzard announced that add-ons must be distributed completely free of charge and that their programming code must be publicly viewable. Again, this generated debate among the user community, some viewing it as perfectly reasonable, others as an infringement of their rights in terms of how to play the game. Of course, this tension reflects the long history of interplay between game developers and modders (Postigo, 2008). The problem for many game designers and operators is adjusting to the role of managers of a community rather than merely providers of content (Humphreys, 2008). The relationship between developers and users becomes a long-term one to be negotiated and managed, further complicated when users contribute to the virtual world environment through creation and investment of time and money. Virtual world owners may find they develop a troubled relationship with their players/citizens, who are, of course, also their customers. That relationship ranges from love and adulation for the gaming environment itself to contempt and loathing for its management. This attitude is common to many creative industries; creators like George Lucas and Stephenie Meyer are equally adored for their creation of much-loved characters and reviled for their subsequent development and treatment of those characters. What can be derived from this survey of governance structures? First, that the main tool of governance of these environments remains the EULA or Terms of Service, consented to in full, without modification, and generally without being read, by all users.
While early adopters saw great promise in the ability to rule by contract, insulated from external laws, we can see that as a community evolves, tensions and disputes arise between the owner and the users, and between the users themselves. This leads to a desire to call upon external authorities to, for example, settle a dispute with the service provider when the EULA proclaims that the provider is god, such as the litigation between Bragg and Linden, or between users, such as the copyright disputes that have plagued Second Life. Marc Bragg brought an

[3] Second Life Blog, ‘Clarification of Policy Disallowing "Ageplay"’, https://blogs.secondlife.com/community/features/blog/2007/11/14/clarification-of-policy-disallowing-ageplay, 14 November 2007, accessed 16 July 2009.


action against Linden following the termination of his Second Life account and the confiscation of his entire inventory, on the grounds that he had purchased an area of land in breach of the Second Life Terms of Service. Bragg alleged that he had been led to believe that in Second Life he would own all of the property he created, and that those assets could therefore not be confiscated by Linden without compensation. The case was settled without resolution of this issue (Bragg v Linden Research, Inc, Memorandum and Order Denying Motion to Dismiss, Robreno J, 30 May 2007). Several cases have been filed relating to allegations of copyright infringement between users of Second Life; see, for example, Eros, LLC v John Doe and Eros LLC v Thomas Simon a/k/a Rase Kenzo. As Fairfield (2008a) has recently argued, the halcyon days of appeal to rule by contract are well behind us. Fairfield observes that contract is an incomplete mechanism for creating rights and obligations between members of online communities, and that such communities will only reach their full potential when courts are prepared to read into such relationships default legal rules, such as those recognised by property and tort law. Contracts cannot anticipate and regulate all issues that may arise within the virtual community. Second, however, just as there is ambivalence towards the dictatorship of the service provider, so too is there ambivalence towards allowing or inviting interference from real world laws. While real world laws such as those relating to free speech, discrimination, theft, and fraud may promise to deal with problems that arise in-world in a manner that is consistent and familiar to residents of the relevant jurisdiction, they are greatly disruptive to in-world events and practices. Real world laws bring with them a need to comply with constraints that the virtual world is ideally designed to avoid.
The player-created world of EVE could not exist if everyone had to observe the real world laws of whatever jurisdiction might be deemed to apply. So how may this be resolved? First, we should consider the stakeholders in this debate.

Governance Stakeholders

Key stakeholders in the governance debate are real world governments, game or platform developers, and players or citizens. As in-world populations increase, domestic governments are developing a greater interest in the regulation of virtual worlds. This interest stems from a number of grounds, including taxation, money laundering, content regulation, and crime. Recent initiatives in this area include the Council of Europe Human Rights Guidelines for online games providers (2008), which outline standard guidelines to be taken into account by game designers and publishers in developing game content. These guidelines emphasize the need to take account of the impact on children of certain content, with particular reference to gratuitous portrayals of violence, content advocating criminal or harmful behaviour, and content conveying messages of aggressive nationalism, ethnocentrism, xenophobia, and racism. Interestingly, the guidelines specifically exclude social virtual worlds, such as Second Life. As Ren Reynolds (2009) has commented, the Guidelines reflect attitudes to these environments common among real world governments: that the users of such environments are predominantly children who need to be protected against inappropriate content, and that those users are essentially passive. Both of these assumptions are wrong, and while they continue to underpin real world governments’ understanding of, and attitude to, the regulation of virtual spaces, there will be major difficulties for game providers. There is a need to educate governments regarding the true nature and diversity of these environments. Some recent work has been done to generate a better understanding of virtual worlds by governments.
This includes the ENISA Position Paper Virtual Worlds, Real Money: Security and Privacy in Massively-Multiplayer Online Games and Social and Corporate Virtual Worlds (2008), and the UK-OECD Workshop on Innovation and Policy for Virtual Worlds (March 2009). While work thus far has been useful in awareness-raising, further work is needed, as there remains a lack of


understanding of key differences between various gaming platforms and social virtual worlds; witness, for example, the suggestion that Second Life is a game.[4] Not all virtual world environments are alike. Again, one of the key problems with adopting a clear and reasonable position on suitable forms of governance is that virtual worlds are very different from one another. The interests of the citizens of Norrath are quite distinct from those of Second Life, Habbo Hotel, or EVE, and this very diversity must be respected by any attempt to regulate such environments. Some greater coordination may be needed between virtual social world and MMORPG providers in order to get key messages across. The second category of governance stakeholders is the game or virtual world providers. While developers and operators may be different entities with respect to some platforms, this discussion will adopt the term “platform provider” to refer to the entity in charge of managing the game or virtual environment. Gaming worlds generally reflect a much higher level of control than social virtual environments. For example, Blizzard claims copyright in all aspects of the World of Warcraft gaming environment. It maintains strict control over the range of characters that can be created, avatar names, and storylines. As noted above, it has tightened its attitude towards mods and add-ons and recently won its long-running legal dispute regarding the use of Glider. Blizzard brought an action against MDY, the creators and distributors of Glider, a program which facilitates automated play of World of Warcraft, alleging that MDY encouraged users to breach the Terms of Use, infringed copyright, and committed other breaches of the Digital Millennium Copyright Act (US).
The Court held that the license granted by Blizzard to users to use the game software while playing World of Warcraft was limited by the other provisions of the Terms of Use and the EULA, and that use of Glider in conjunction with the game software was a breach of that license. MDY was therefore liable as the party who had authorized or facilitated such breaches.[5] Blizzard deliberately and consciously moderates and controls the World of Warcraft domain, and despite many criticisms of the limitations of its interface, graphics, and roles, World of Warcraft remains the most successful MMORPG in the English-speaking world (Blizzard, 2008). City of Heroes is a gaming environment that is exploring greater player input into content creation. In February 2009 it announced that players would be able to create their own in-game stories and missions (City of Heroes, 2009). However, the EULA provides that NC Interactive retains all intellectual property in such creations: “By submitting Member Content to or creating Member Content on any area of the Service, you acknowledge and agree that such Member Content is the sole property of NC Interactive.” Granting users the ability to contribute content to the gaming world also raises the question of how to monitor and remove offensive and inappropriate content. Content will be monitored and filtered by NC Interactive in a number of ways, reflecting its ongoing responsibilities as a community manager (Morrissey, 2009). In granting scope for user generated content, therefore, NC Interactive must assume greater monitoring responsibilities. Second Life styles itself as “an online, 3-D virtual world imagined and created by its Residents” (http://secondlife.com/). Pursuant to its TOS, it purports to grant users intellectual property rights over their creations and relies upon users to create the environment and sustain the in-world economy through trade in goods and services.
Second Life has its own currency exchange and permits the trade of items into and out of the world. However, as noted above, it must still exercise some control over content and conduct in order to maintain an environment which it can market to potential users. Some of the most flourishing activity in Second Life, however, takes place behind closed doors, such as in Gorean and other role-playing sims.

[4] Second Life is not a game because it has no gaming object, no levelling up and no mandatory gaming narrative or role playing. However, many games do take place within Second Life.

[5] MDY Industries LLC v Blizzard Entertainment, Inc, Order, Campbell J, US District Court, District of Arizona, No. CV-06-2555-PHX-DGC, 14 July 2008, and Order, Campbell J, US District Court, District of Arizona, No. CV-06-2555-PHX-DGC, 28 January 2009. Note that this decision is currently on appeal.


Clearly, each of these worlds will attract different users, although one should not oversimplify the classification of environments, even along the spectrum of those permitting user-created content. While World of Warcraft does not facilitate user generated content, it is always possible to interact with and alter the playing environment, such as through the manipulation of in-world items. Further, World of Warcraft has attracted a thriving community of machinima creators. The main cause of dispute and disengagement between the users of the virtual environment and the providers is most likely a lack of transparency regarding the values the platform provider seeks to protect and promote. Of course, these can change over time. Burke (2004) discusses the tension that can arise from trying to serve the different needs of a “dedicated core of heavily-involved players or a wider array of more ‘casual’ players.” He continues:

Contradictory or at least divergent conceptions of the “public interest” in any given MMOG are promulgated by developer-sovereigns largely as marketing rhetoric and are thrown like scraps to antagonistic communities of citizens who then fight with each other to determine the “true” foundational principles of the gameworld (2004).

A successful platform provider must provide continuity and consistency in terms of game or virtual world ethos and philosophy. If it adopts a “hands-off” attitude, sudden intervention and rule making (or changing) will disrupt the user community and lead to disengagement. Communities that have always lived under the strict rules of the platform provider will likely feel less empowered to complain. Users who have invested the most time and effort in the environment, in particular through the generation of content, will feel that they have a particular stake in the community.
Therefore, those environments that allow user generated content will need to take particular care in changing the rules and norms of that environment. The contribution of users to the development of the virtual world is thus disruptive to traditional governance models (Humphreys, 2009). The continued creative efforts of both users and designers, within and outside the platform, mean that the scope of what is being governed is fluid; as Humphreys observes, the ‘text is never finished’ (Humphreys, 2008). Further, the relative contribution of users to the environment is also very difficult to measure. Greg Lastowka (2009) has recently undertaken an analysis of Norrath, the fantasy world where EverQuest is experienced, for the purposes of developing an understanding of its nature as a subject of legal regulation. He observes that, like most MMORPGs, whilst Norrath provides an essential context for the narrative of the gaming experience, it lacks an end. The territory of Norrath is perpetually subject to war, with no overarching ruler. This context serves as a background for stories developed and experienced through the players’ own creativity. Added to this is the social dimension of the gaming experience, such as raiding, which is event-based rather than narrative. Clearly the players’ contribution to the world of Norrath is vital. But their contribution remains limited to the locations, creatures and narrative context created by SOE, the game provider. The final group to be considered in the governance triangle is the users themselves, again reflecting a diverse range of interests. This group will often feel a strong sense of “ownership” over the game or environment, having spent in some cases many years, and a considerable sum of money, as subscribers. In those environments that permit user-generated content there may be an even stronger sense of ownership and entitlement.
As Lastowka (2009) observes, the law has “struggled to determine whether players are, in some sense, the ‘authors’ of computer games.” This question remains unresolved, even in those virtual worlds which purport to give residents some intellectual property rights, such as Second Life. It should also be recognized that the universe of a virtual world does not stop at the boundary of the game or universe itself. It is frequently expanded by fansites, blogs, fan fiction, discussion lists and websites, extending even to t-shirts, merchandise, and conventions. It is this engagement with the game or virtual world experience that enhances the sense of community and involvement. Again, most platform providers will encourage and support such activities, although many may restrict uses that are


controversial. For example, Blizzard has developed a Fair Use Guide for machinima developers. How far the influence and control of the platform provider extends outside of the game or virtual world environment is dictated largely by the laws of intellectual property governing the use of copyright and trade marks. The control exercised by the platform provider may be both commercial and legal, and again must reflect careful relationship management between the owners and the fans. Players or avatars acquire rights only pursuant to the one-sided terms of use. Generally, then, they have no rights other than to use the platform whilst in compliance with the terms of service. A good governance structure should consider allocating to users some rights that reflect their investment in the game or platform. Such a model is provided by Raph Koster (2000) in his “Declaration of the Rights of Avatars.” Koster’s Declaration explicitly recognizes, for example, that every member of the virtual community has the right to contribute to the shaping of the community’s code of conduct “as the culture of the virtual space evolves, particularly as it evolves in directions that the administrator did not predict.” Further, the administrator has a duty “to work with the community to arrive at a code of conduct that is shaped by the input of the community.” Koster has put these rights into practice in the Terms of Service of Metaplace. These provide users with rights including freedom of speech, reasonable processes to resolve grievances, and ownership of their intellectual property. However, even this model recognizes that these TOS do not suit all environments, and a context-specific assessment must be made regarding the appropriateness of granting extensive rights to users.[6] The next section will consider a governance model seeking to grant users a voice within the virtual world or gaming universe, and compare this with other approaches.

Governance Models

In-world governance models reflect deliberate choices made by the game or virtual world designers regarding how the environment will evolve. As discussed above, the world of EVE is largely player driven, and the governance choices made by CCP reflect and support this policy. The introductory paragraph of the EVE Online “Suspension and Ban Policy” provides:

Though we have made every effort to anticipate all the possible circumstances we may encounter as caretakers of the persistent world of EVE Online, there [sic] issues may arise that we had not foreseen. Our players are free-thinking, creative and sometimes crafty individuals who possess the ability to enter into situations or create scenarios unexpectedly. Therefore, this document should not be seen as all-inclusive, but rather to give our players a general idea of the guidelines we follow in dealing with these or similar cases. (EVE Online, Rules & Policies, http://www.eveonline.com/pnp/banning.asp)

This paragraph provides some insight into the way in which CCP is attempting to style both the nature of EVE and its relationship with the players. CCP describes itself merely as a caretaker, thus purportedly distancing itself from true power over the EVE environment. Players are recognized as having a significant degree of autonomy and as being crafty – again endorsing the nature of EVE as a world that encourages marginal behaviour. That said, there is a long list of conduct that will result in a player having their account suspended or permanently banned. This includes: using an exploit tactic which has been publicly banned; duping; and the creation and distribution of an illegal third party program that disrupts

[6] Metaplace, Terms of Service, https://www.metaplace.com/information/terms_service, accessed 14 September 2009. Metaplace is a platform for developing user-generated virtual worlds. Note that Metaplace also has a License Agreement, https://www.metaplace.com/information/beta_agreement, accessed 14 September 2009. See also Raph Koster, ‘Declaring the Rights of Metaplace Users’, 15 September 2008, http://www.raphkoster.com/2008/09/15/declaring-the-rights-of-metaplace-users/, accessed 14 September 2009. Note, however, Koster’s own disclaimer that the rights will be followed unless ‘the fabric of the virtual space is threatened and so long as world creators and users are not in violation of the EULA or relevant national or local law.’


game mechanics or gives an unfair advantage; and hacking the EVE servers or the account of another player. Interestingly, the Policy also states that an immediate permanent ban may be imposed on a player for organizing or participating in a “corporation or group that is based on or advocates any anti-ethnic, anti-gay, anti-religious, racist, sexist or other hate-mongering philosophies.” This statement is somewhat at odds with the savage piratical world of war within EVE, but reflects the need for CCP to be seen as a responsible corporate citizen. CCP has taken the step of establishing the Council of Stellar Management (CSM), a committee of nine elected player representatives. The purpose of the CSM is to provide players with “societal governance rights.” The CSM is elected by player vote, one active account, one vote, with the candidates receiving the highest votes winning. The CSM is then empowered to identify the issues of concern to players and to pass them on to CCP (via the CCP Council) for resolution. Topics are raised by players through discussion threads. If a topic receives sufficient support, it must be considered by the CSM. CSM members are instructed that, in casting their vote on whether a matter should be brought before the CCP Council, they should consider whether the issue would benefit EVE society as a whole, rather than merely a select group within that society. The CCP Council is then obliged to consider and respond to as many issues put to it by the CSM as possible, in person, at a meeting in Reykjavik, Iceland (the headquarters of CCP). Each member of the CSM serves a term of six months and is only allowed to serve two terms (consecutive or non-consecutive). The background to the formation of the CSM is explained in the official EVE Online document, “The Council of Stellar Management,” which considers the evolution of society within EVE and attempts to place CCP within that framework.
It concludes that:

But since this entire socioeconomic dynamic must exist within the technical framework provided by CCP, it must have also evolved in part because of CCP. In that sense, the inhabitants of EVE could view their society as a dictatorship, since they have had little direct say in how it has been governed. Any influence citizens may have exerted was more a consequence of the vendor-customer relationship, as expressed in the business terms of growth projections and client relations. Yet feedback between CCP and its customers – or members of the society – was always present in the interest of adapting the product to meet consumer demands. In examining this with a political view, describing the relationship as a “dictatorship” would be inaccurate, since it implies absolute control over the society with little regard to the opinion of those residing within it. On the contrary, constructive interaction and open dialogue between the legislator – CCP – and society members took place with the mutual aim of improving the society as much as possible. To the extent that the success of this arrangement can be measured, consider that as of the time of this writing, EVE’s society has grown from approximately 30,000 in 2003 to more than 300,000 in 2009.[7]

Therefore, CCP made the decision to specifically include player influence in the governance of EVE. This model was based on three core principles: all players would begin on an equal footing; all players must agree to the EULA (termed the social contract); and CCP would not interfere in individual player interaction in the virtual world, provided there is compliance with the EULA and TOS. The TOS and EULA are expressed to define the boundary which separates the real life and the virtual. While it is not without its critics and sceptics amongst the player base, the CSM is now in its third iteration, and it provides an interesting working model of how governance can work in a gaming

7

See Oskarsson, The Council of Stellar Management (undated). This document was written by Peter Johannes Oskarsson, a researcher at CCP, pursuing a Masters in Philosophy. This is evident in the drafting of the document which has an extensive bibliography, including references to Rousseau, Kant, Habermas and Rawls.


environment.[8] The particular nature of the model chosen by CCP reflects well the nature of the game it is trying to foster. CCP supports a hands-off approach and allows player disputes to be sorted out between the players themselves. The CSM provides players with a sense of consultation and a mechanism for carrying forward issues and grievances, and contains this within a strict timeline. It also provides a filtering mechanism, as topics must receive a prescribed level of support in order to be taken forward. It will be interesting to see how the role of the CSM may evolve. Issues discussed by the CSM thus far range from technical matters, such as account security and real money trade (following on from the Ricdic fraud); to procedural matters, such as the right of the CSM to vote for its own Chair and the number of votes needed to require the CSM to consider an issue; and game issues, such as eligibility to receive medals for in-game accomplishments and the data shown on pod killmails. The records of meetings and issues raised at the CSM, including discussion, voting and outcomes, are publicly available on the EVElopedia (http://wiki.eveonline.com/wiki). Not surprisingly, CCP has hailed the CSM as a great success (Garratt, 2008). This model may not be suitable for other gaming environments or social virtual worlds, where there is not such a cohesive user base with a dedicated commitment to the gaming world or virtual environment. It is unlikely, for example, that the residents of Second Life could develop a short list of issues for resolution by Linden Lab which could command 25% of the votes of the discussion list participants. Many other environments have a broad player or user base, reflecting people of diverse ages who drop in and out of the game or environment as it suits them. This suggests that the in-world governance mechanism must be tailored to reflect the nature of that particular virtual world.
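As an illustration only, the CSM's filtering step, under which a topic advances only if it attracts a prescribed level of player support, can be sketched as follows. The function name, the sample topics, and the 25% threshold (borrowed from the Second Life comparison above) are hypothetical assumptions for the sketch, not CCP's published rules or actual implementation.

```python
def topics_for_csm(topics, participants, threshold=0.25):
    """Return topics whose support meets the prescribed threshold.

    topics: mapping of topic name -> number of supporting votes
    participants: total number of discussion-thread participants
    threshold: fraction of participants required (hypothetical 25% default)
    """
    required = participants * threshold
    # Only sufficiently supported topics are carried forward to the CSM.
    return [name for name, votes in topics.items() if votes >= required]


# Illustrative topics drawn from issues mentioned above; vote counts invented.
raised = {"account security": 310, "pod killmails": 120, "CSM chair vote": 260}
print(topics_for_csm(raised, participants=1000))
# -> ['account security', 'CSM chair vote']
```

The design point is simply that the threshold scales with participation: the same mechanism that is workable for EVE's cohesive player base would, at a 25% bar, filter out nearly everything in a large, loosely engaged community such as Second Life.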
At minimum, it is suggested that in-world governance mechanisms should reflect the nature of the virtual world itself. A mechanism should be designed to seek user input in a manner that is consistent with any hierarchy or institutions existing within that world, and it should be broadly based. No significant changes to the game philosophy or environment should be made without consulting that group. For example, a gaming world that does not espouse values of democracy would not be suited to a governance model based on democratic principles; such a model would need to be modified to suit the underpinning philosophy of the world itself. Users will generally accept and abide by the clearly articulated rules of the game or environment. Around the margins of those rules will be the areas for debate, such as where a rule is not supported by underlying code. However, in determining what is or is not acceptable within an online community, the appropriate starting place will be the rules and norms of the community itself. Joshua Fairfield (2008b) has argued for recognition of community standards and in-world social norms by courts and real world lawmakers in the resolution of conflicts between players. In determining what conduct should be sanctioned by real world laws, courts should consider the scope of the consent given by users with respect to their engagement with the online community. He recasts the ‘magic circle’ so that it defines the boundary, created by consent, between laws that have effect within the game space and real world laws. Players consent to the EULA, which defines the top-level laws and governs the relationship between the game owners and the players. Operating within that space are also the community norms which dictate what players understand as the rules operating within the game, rules which may be enforced by real world laws.
Fairfield also argues that courts should interpret EULAs in the light of recognised community norms and practice, noting that ‘community-defined norms often more accurately reflect the “social contract” between members of the community than do the EULAs’ (Fairfield, 2008a). In spaces

[8] See CCP Xhagen’s dev blog for statistics regarding votes cast in the election; notably, there was a 9.74% voter turnout for the election for CSM3, compared to 11.08% for CSM1 and 8.61% for CSM2: http://www.eveonline.com/devblog.asp?a=blog&bid=664, accessed 16 July 2009.


such as EVE, where there is a general consensus that theft is part of the game play, no real world sanctions should lie.[9] As Fairfield acknowledges, this does not address the situation where “the creator-made rules conflict with community norms.” Nor does it address the consequences where real world laws or norms are contrary to behaviour and norms agreed to by the online community, such as slavery, exploitation and harassment. It is in this area that virtual worlds may need some insulation from real world laws.[10]

Conclusions

Virtual worlds are currently predominantly created and owned by commercial enterprises. Regardless of the feeling of community that exists within them and the sense of ownership that users derive from building creations in-world and cementing relationships there, that experience is ultimately owned by a third party. Autonomy from real world laws is accepted and effective when the interests of the owner and the users coincide. However, where those interests diverge, as Lastowka (2009) points out, "Both game 'owners' and players may feel the temptation to invoke the power of the state when conflicts arise." In the event of such a conflict, it is likely that the user will lose out, with exit as their only, less than satisfying, option. At the moment, the cost of exit to users is extremely high, as they will lose their accumulated inventory, in-world money, and avatar, only to have to start anew in their new environment. Although initiatives are underway to facilitate the transfer of content between virtual worlds, such as the MPEG-V "Information Exchange with Virtual Worlds" project, until interoperable standards are developed and adopted by virtual world developers, users will lose their investment on exit.11 It is suggested that national governments should facilitate the development of virtual worlds by creating consistent and supportive legal frameworks prescribing a minimum level of regulation. Any laws implemented by domestic governments should avoid disrupting and fragmenting users' in-world experience through the application of different laws to the same platform across national boundaries. Therefore, aspects of virtual worlds that might be appropriate for real world regulation would include the standardization of the basic terms of service, so that users could become familiar with the core aspects of such documents.
Any deviation from the standard terms and conditions would have to be specifically brought to the user's attention. It may also be appropriate to regulate how changes to the terms of service must be brought to the user's attention. In addition to standardized or transparent terms of service, regulation imposed external to the virtual world environment should relate to key issues such as ownership of intellectual property, privacy, surveillance, and age appropriate content. Other matters occurring within the game or role play environment should be dealt with on the basis of established community norms and enforcement mechanisms, such as banning, expulsion, suspension, or a reduction in status or powers. Above all, any regulation of virtual worlds should be sensitive to the particular needs of the relevant virtual world community and respect the diversity of individual users and their need to explore individual experiences. It should therefore avoid regulation of online behaviour where the community accepts certain conduct as part of the game play or environment space. This is why stealing a ship and podding your enemy's avatar is perfectly acceptable in EVE.

9 It may be appropriate for the terms of the TOS or EULA to also operate as a contract between members, similar to the operation of a company's constitution under corporations law; this aspect of virtual worlds governance will be the subject of further study by the author.
10 Further work is also needed on this issue and on the issue of the potential consequences of the online behaviour.
11 See Summary of the MPEG-V Project on Information Exchange with Virtual Worlds, available at http://www.chiariglione.org/mpeg/working_documents.htm#MPEG-V, accessed 23 July 2009.




Bibliography

Bartle, R. (2006). Why governments aren't gods and gods aren't governments. First Monday, 7. Retrieved from: http://firstmonday.org/issues/issue11_9/bartle/index.html.
Blizzard. (2008). World of Warcraft® surpasses 11 million subscribers worldwide. October 28, 2008. Retrieved from: http://www.blizzard.com/us/press/081028.html.
Burke, T. (2004). Play of state: Sovereignty and governance in MMOGs. Unpublished manuscript. Retrieved from: http://www.swarthmore.edu/SocSci/tburke1/The%20MMOG%20State.pdf.
City of Heroes. (2009). Issue 14: Mission Architect, FAQ. Retrieved from: http://boards.cityofheroes.com/showflat.php?Cat=0&Number=13119948.
Consalvo, M. (2007). Cheating: Gaining advantage in videogames. Cambridge: MIT Press.
Council of Europe. (2008). Human rights guidelines for online games providers. Developed by the Council of Europe in co-operation with the Interactive Software Federation of Europe.
ENISA. (2008). Virtual worlds, real money: Security and privacy in massively-multiplayer online games and social and corporate virtual worlds. Retrieved from: http://www.enisa.europa.eu/pages/02_01_press_2008_11_20_online_gaming.html.
Fairfield, J. (2008a). Anti-social contracts: The contractual governance of virtual worlds. McGill Law Journal, 53, 427.
Fairfield, J. (2008b). The magic circle. Washington & Lee Public Legal Studies Research Paper Series, Working Paper No. 2008-45. Retrieved from: http://ssrn.com/abstract=1304234.
Garratt, P. (2008). EVE player council "pleased" with first CCP summit. VG247. Retrieved from: http://www.vg247.com/2008/06/26/eve-player-council-pleased-with-first-ccp-summit.
Humphreys, S. (2008). Ruling the virtual world: Governance in massively multiplayer online games. European Journal of Cultural Studies, 11, 149.
Humphreys, S. (2009). Discursive constructions of MMOGs and some implications for policy and regulation. Media International Australia, 130, 53.
Koster, R. (2000). A declaration of the rights of avatars. Retrieved from: http://www.raphkoster.com/gaming/playerrights.shtml.
Lastowka, G. (2009). Planes of power: EverQuest as text, game, and community. Game Studies, 9(1). Retrieved from: http://gamestudies.org/0901/articles/lastowka.
Morrissey, J. (2009). Mission architect: How are you going to manage that? Gamasutra. Retrieved from: http://www.gamasutra.com/view/feature/3995/mission_architect_how_are_you_.php.
Myers, D. (2008). Play and punishment: The sad and curious case of Twixt. Unpublished manuscript. Retrieved from: http://www.masscomm.loyno.edu/~dmyers/F99%20classes/Myers_PlayPunishment_031508.doc.
Oskarsson, P. J. (undated). The Council of Stellar Management: Implementation of deliberative, democratically elected, council in EVE. CCP. Retrieved from: http://wiki.eveonline.com/wiki/What_is_the_CSM, accessed 14 July 2009.
Oskarsson, P. J. (undated). The CSM: A summary explanation of the Council of Stellar Management. CCP. Retrieved from: http://wiki.eveonline.com/wiki/What_is_the_CSM.
Postigo, H. (2008). Video game appropriation through modifications: Attitudes concerning intellectual property among modders and fans. Convergence, 14, 59.




Reynolds, R. (2009). Human rights & the 'online game provider'. Terra Nova. Retrieved from: http://terranova.blogs.com/terra_nova/2009/04/human-rights-the-online-game-provider.html.
Rossignol, J. (2009). Ragdoll metaphysics: Good grief, the victory of Eve's space goons. Offworld. Retrieved from: http://www.offworld.com/2009/02/ragdoll-metaphyscs-good-grief.html.
Thompson, M. (2009). Virtual theft in EVE Online creates run on bank. Ars Technica. Retrieved from: http://arstechnica.com/gaming/news/2009/07/virtual-theft-in-eve-online-creates-run-on-bank.ars.
UK-OECD Workshop on Innovation and Policy for Virtual Worlds. (2009). Retrieved from: http://www.oecd.org/document/61/0,3343,en_2649_34223_42316797_1_1_1_1,00.html.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Virtual Worlds, Collaboratively Built Philip Rosedale, Linden Lab

Keywords: virtual worlds; Second Life; standards; Open Source.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- VWs, Collaboratively Built 4

Virtual Worlds, Collaboratively Built Philip Rosedale, Linden Lab

Even before they really existed, I deeply believed that virtual worlds would have a profound impact on the real world, ultimately affecting the lives of people worldwide, in much the same way that the World Wide Web itself has brought about a dramatic transformation in how we communicate. Now that Second Life, and more broadly virtual worlds, have become at least "worthy of criticism," I am all the more convinced that this will prove to be true. We will soon see virtual worlds expand from millions of active users to billions. So what is the best way to proceed as a company that is leading this initial growth? Should we be more open or more closed in our efforts? This is a complicated question when applied to a complex system like Second Life, with its many different interfaces, software modules, and potential areas for standardization. But the overall scale and impact of the virtual world suggests, at a high level, that the best outcome is to be very open, with respect both to software code and to the numerous standards that connect it. Its similarity to the web is undeniable, and the web was built on open standards by a large number of different people, companies, and countries. The same will likely be true here. Undertakings of this scope are dangerously hampered by attempts to make them restricted, proprietary, or opaque. Like the web, we are all going to need to rely increasingly on virtual worlds being stable, reliable, and safe. As virtual worlds carry more and more economic and creative value, and affect the lives of more people, they will need to be inspected and improved by everyone using them. We should not trust a single company or organization to control them, any more than with the web. With open platforms, even the threat of competitive risk or arbitrary judgment from a controlling provider can hamper the creative energy of developers.
A good example of this effect is modern smart-phones like the iPhone, Blackberry or G1 in comparison to earlier network-provider dominated cellphone applications. Like the web, virtual world companies will be most successful by providing only the minimal scaffolding for the development of the rich content experiences that will bring more and more usage. Additionally, Second Life has already been a benefit to people's lives, suggesting more generally that Virtual Worlds can become a common human resource and a force for good. In Second Life we have seen people unable to walk in real life uplifted and empowered by their ability to walk and even fly in the virtual world. There are thousands of people making incomes in Second Life that in many cases would be unavailable to those same people in real life. In Second Life you can get a tour of a Japanese garden from the Japanese person who created it, complete with on-the-fly translation to help you communicate. So if Second Life and virtual worlds are of general benefit and utility to humanity, we have a responsibility to make them available as broadly as possible and as quickly as we can. I believe that the way to do this most effectively is to use open standards and an open development process.




The historical development of Second Life was also dependent on standards and openness. When we set out to build Second Life, we didn't intend to re-create everything from scratch. Instead, the team at Linden Lab drew upon countless examples of prior work in games and virtual worlds and focused innovation on policy, the platform, and the tools that mattered most to our community. Along the way, Linden Lab made use of countless standards, leveraging existing technology. The decision to be very open from the start was risky but paid off; many more closed and secretive competitors to Second Life failed to survive. Being open also eliminates the friction of moving information in and out of the virtual world. The complexity and uncertainty of virtual worlds favored a process where features were developed in the open and early feedback was able to direct development. We simply didn't feel smart enough to be able to hide our ideas from end-users! Like the web, though even more so, virtual worlds rely on the interconnectedness of many different components to achieve an immersive, compelling experience. Wherever feasible, our path has been to open the avenues for others to develop and explore these components. The result has been explosive growth and development both within the virtual world and in the technology that enables it. The creation of Second Life has been truly a collaborative experience, and couldn't have evolved any other way. One major avenue of collaboration is open source: we use, contribute to, and have even spun off open source projects for all aspects of virtual worlds. We have open sourced our viewer, creating a whole ecosystem around that functionality. Last summer we ran a great experiment in collaboration between Second Life and OpenSim. And we have always involved the Residents of Second Life in a deep and continuing discussion of how the technology and the world need to evolve to meet their needs.
I feel that virtual worlds, as a whole, are a paradigm that will best be developed if researchers and developers of this technology are able to work together in increasing numbers and ways. Going forward, we will continue to expand the collaborative methods we’ve used in the past, and take the first steps towards establishing base standards essential to further exploration and development. I believe that building on each other’s work through open research and open source and open standards, is the only way for Virtual Worlds to reach their full potential. I invite you to come build with us.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Universal Design: Including Everyone in Virtual World Design Alice Krueger, Ann Ludwig and David Ludwig Virtual Ability, Inc.

Abstract

Three broad approaches exist to the issue of accessibility design within virtual worlds. Our intent is to stimulate the thinking of content designers within virtual worlds about these approaches, so that they can consider which approach best fits the desired intent and audience of their creation.

Keywords: universal design; accessibility; design; virtual worlds.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Universal Design 4

Universal Design: Including Everyone in Virtual World Design Alice Krueger, Ann Ludwig and David Ludwig Virtual Ability, Inc.

Real World Disabilities

People with real world disabilities, with impairments that may be permanent, temporary, or related to increasing age, are present in casual online games in higher proportions (~20%) than in the real world (~15%) (Information Solutions Group, 2008). Virtual worlds do not remove all accessibility issues experienced by people with disabilities in the real world; in fact, they exacerbate some existing issues and introduce additional ones. Some disabling real life conditions, e.g., paralysis or loss of legs, do not affect functioning in virtual worlds. Other conditions, such as print impairment, hearing impairment, and keyboard/mouse-use impairment, may be more disabling in a virtual environment than in real life. Therefore, the design of virtual worlds should take into account those disabling conditions that affect a person's functioning in those worlds.

Avatar Identity

In the real world, most people can only change their appearance through modification of hair style and coloring, makeup, and clothing. Virtual worlds allow much more choice in how participants present themselves to other inhabitants. The chosen avatar is an embodiment of selfhood. Some choose avatars that reflect either their idealized vision of themselves or a totally fantastic creation, while others choose avatars that more closely mimic their real life identity. Some choose to be young, beautiful, and perfect humans; others become dragons; and some choose to have their avatars reflect their real-life age, physical characteristics, and disability status.

Accessibility

Accessibility of the virtual world is a function of the design of structures and landscapes inside the world, whether created by the developer (such as in World of Warcraft) or by the citizens of the world (such as Second Life®). Accessibility of the virtual world contrasts with accessibility issues related to access to and within the world. Those issues include signing up and connecting to the user interface.
That type of accessibility is covered by existing web accessibility standards1 related to allowing people who use assistive technology to function similarly to those who don't use it, and is not the focus of this paper.

1 In the US, standards include Section 508 (www.section508.gov/) and Title II of the ADA (http://www.ada.gov/pcatoolkit/chap5toolkit.htm), and internationally the Web Accessibility Initiative (http://www.w3.org/WAI/).




Universal Design

Universal Design (UD), developed at North Carolina State University, is the construction of environments so that all people may use them without needing specialized designs as adaptations for disabilities. In operation, the principles of UD benefit all users, to the greatest extent possible, not just those with disabilities. For example, curb cuts, which are critical to wheelchair users, are also helpful for people with wheeled luggage, grocery carts, or strollers. Focusing on the common needs of all people avoids segregating those who need adaptations, because UD is appropriate for all users.

Three Approaches to Accessibility of Virtual Worlds

The first two approaches described below represent the ends of a spectrum of use of real world accessibility standards. The third approach focuses on accessibility issues specific to virtual worlds, rather than on those of the real world.

Approach One - No Accessibility Standards

For much of the current state of virtual worlds, this is the default approach: minimal consideration has been given during design or construction to accessibility issues. This may be the approach used when the objective of the designer is historical accuracy or artistic creativity. Structures and landscapes in virtual worlds that are representations of actual historical places generally will not differ in accessibility from what they represent. In the real world, it would be prohibitively expensive and potentially destructive to retrofit the Parthenon to meet accessibility standards. Similarly, if a design is inherently inaccessible because of certain creative features, real world accessibility standards may not be appropriate.

Approach Two - Emulation of Real Life Standards

Using the second approach, virtual world environments emulate, as far as possible given the design and construction constraints of the particular online world, the features desirable in the physical world. Features required by accessibility standards in the real world are imported into virtual worlds because they are important in the real world.




For instance, ramps allow virtual wheelchair users to move from pathway to building or between building levels. In the emulation approach, these ramps have handrails and access signage, as they would in the physical world. Grass, sand, and deep-carpet ground textures are avoided because they are difficult to move a wheelchair across. Standards for this kind of accessibility exist in the real world.2
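To make the emulation approach concrete, here is a minimal sketch (in Python, since the article itself contains no code) of how a builder's tool might check a virtual ramp against the ADA slope limits the footnote cites (a 1:12 maximum running slope and a 30-inch maximum rise per run). The function name and unit conventions are our own illustration, not part of any virtual world API.

```python
# Sketch: validating an in-world ramp against real-world ADA slope limits.
# Thresholds come from the ADA accessibility guidelines; everything else
# (function name, inch-based units) is illustrative.

MAX_SLOPE = 1 / 12          # ADA maximum running slope for new ramps
MAX_RISE_PER_RUN = 30.0     # inches of rise allowed before a level landing

def ramp_is_ada_compliant(rise_in: float, run_in: float) -> bool:
    """Return True if a single ramp run meets ADA slope and rise limits."""
    if run_in <= 0:
        return False
    slope = rise_in / run_in
    return slope <= MAX_SLOPE and rise_in <= MAX_RISE_PER_RUN

# A 24-inch rise over a 288-inch run is exactly 1:12, so it passes;
# the same rise over half the run is 1:6, which is too steep.
print(ramp_is_ada_compliant(24, 288))   # True
print(ramp_is_ada_compliant(24, 144))   # False
```

A builder following Approach Two could run such a check over every ramp in a scene; a builder following Approach Three (below) might deliberately ignore it where avatars can simply fly.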

Figure 1. This model accessible home's kitchen shows knee room under counters, an adaptive cutting board, and front-mounted controls on the stove.

This approach creates a learning opportunity for non-disabled people to see the elements of real-world accessibility in action. The complete accuracy and familiarity of this approach may be comforting to some people with disabilities who expect virtual worlds to mirror their real world environment.

2 In the US, the ADA Guidelines for Buildings and Facilities (http://www.access-board.gov/adaag/html/adaag.htm), the Uniform Federal Accessibility Standards (http://www.access-board.gov/ufas/ufas-html/ufas.htm), and the Fair Housing Act (http://www.fairhousingfirst.org/fairhousing/requirements.html).




Approach Three - Universal Design of Virtual Worlds

This virtual world-centric approach involves using the unique features of virtual worlds to provide accessibility appropriate to those worlds, rather than emulating real world accessibility standards. This perspective takes into account the impairments specific to virtual worlds to create accessibility in landscapes, structures, communication, and movement. In virtual worlds, information reaches the user only through vision and hearing. Conveying print or visual information using small fonts, minute details, semi-transparent textures, or other difficult-to-see features may make the information inaccessible to some users. To make information accessible to those with vision or reading impairments, designers should examine font size, as well as background and text color, for readability; offer the same information in sound files; and create objects with descriptive names. These practices can also benefit people with dyslexia and non-English speakers, and will make information more readily available to many, not just to those with vision impairments. Consideration should also be given to people with hearing impairments. When presentations are conducted in voice, they should be simultaneously transcribed into print (voice-to-text or V2T) to avoid excluding those who cannot hear. Sound signals, such as the start of a race, should also be given in a simultaneous visual manner. Keyboard- or mouse-use impairment can make movement in a virtual world challenging. Many people find spiral staircases difficult to navigate: climbing one generally requires tightly controlled multi-keystroke keyboard input. Straight stairs or ramps that do not require absolutely accurate aim are easier to navigate. People with manual dexterity issues sometimes fall off suspended, borderless walkways. Designers can provide guidance using themed railings, landscaping, or invisible barriers to subtly guide people along the path.
In meeting spaces, people in wheelchairs should be able to choose among multiple seating areas rather than being segregated into a single one. Ideally, there are no stairs, only ramps, and every seat has a clear view of the presenter.
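As a concrete illustration of the text-readability advice above, the following sketch applies the WCAG 2.x relative-luminance and contrast-ratio formulas to a foreground/background color pair. The formulas are taken from the WCAG guidelines rather than from any virtual world standard, and the function names are our own.

```python
# Sketch: checking text/background contrast for in-world signage,
# using the WCAG 2.x relative-luminance and contrast-ratio formulas.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    """Relative luminance of an (r, g, b) triple of 8-bit values."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """Contrast ratio between two sRGB colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white texture is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

WCAG AA asks for at least a 4.5:1 ratio for normal-size text; a designer could run such a check over every sign texture and flag pairs that fall below the threshold.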




Figure 2. This accessible auditorium features wide paths, ramp access, and multiple areas for wheelchair seating, but no accessibility signage or handrails.

Using this approach, the structures, landscapes, communication, and movement of the virtual world will be accessible to all its inhabitants without designating specific adaptive features for some subset of them. Implementation of Universal Design principles means the virtual world will be more convenient and accessible for everyone. Guidelines and descriptions of best practices for Universal Design of Virtual Worlds are starting to emerge (Zielke, Roome, and Krueger, 2009).

Recommendations

The authors do not claim that any of the three approaches to accessibility of virtual worlds is inherently better than the others. The chosen approach must fit the creator's intent and audience. Content creators in virtual worlds should consider these approaches and purposefully select one to follow. However, we advocate that, in most virtual world situations, designers incorporate accessibility principles that are appropriate and germane to that virtual world, and not be constrained to a literal translation of real world standards and requirements. This, we conclude, is the essence of Universal Design of Virtual Worlds.




Bibliography

Center for Universal Design, College of Design, North Carolina State University. (2008). About UD: Universal design principles. Retrieved September 28, 2009, from http://www.design.ncsu.edu/cud/about_ud/udprinciples.htm
Information Solutions Group. (2008, June 11). Disabled gamers comprise 20% of casual video game audience. Retrieved September 28, 2009, from http://www.infosolutionsgroup.com/press_release_E.htm
Zielke, A., Roome, C., & Krueger, A. B. (2009). A composite adult learning model for virtual world residents with disabilities: A case study of the Virtual Ability Second Life® island. Journal of Virtual Worlds Research, 2(1).



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Lindman Design: Virtual World Experiences By Ludvaig Lindman, Lindman Design

Keywords: virtual worlds; business; entrepreneurship.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research - Lindman Design 4

Lindman Design: Virtual World Experiences By Ludvaig Lindman, Lindman Design

Creating a Virtual Company With Zero Knowledge

I founded Lindman Design back in mid-2007 as a virtual company. It started as a small project whose main intention was to explore the new and exciting economy of a rapidly growing community named Second Life®. First of all, starting a business in a virtual world is a lot easier than in real life. You basically say, "Here I am," and start offering your products. However, before you can sell any products, you have to create them. Product creation, especially successful product creation, involves a lot of knowledge and skill. So how are you going to do that if you don't know anything about these things? Well, that's what I love the most about Second Life®. You invest a certain amount of money and then simply give it a shot. In real life this would be very dangerous, as you would have to take a much higher risk. Properly speaking, you can only win in a virtual world, because even if you fail and lose a few hundred dollars, you gain something much more valuable: knowledge and experience! These are, of course, two very important factors for successful business in both real and virtual environments.

Learning and Understanding the Economy of a Virtual World

From my point of view, you only have two ways to understand a virtual world's economy: you can either test your ideas against others and go for the more profitable ones, or you can learn directly from more experienced users. However, the latter is a bit more difficult, as successful people tend to keep their secrets to themselves and don't want anyone else to get involved. So another technique would be to combine both methods by trying your own ideas while drawing inspiration from your competitors. A lot of people who do that make a big mistake by confusing a decent amount of inspiration with making an exact replica of a competitor's product.
You certainly want to avoid anything like that, so it's most important to keep your vision clear and focused.

Becoming a Developer and Making Successful Products

I can go into a bit more detail by talking about my most successful product in Second Life®: Lindman Weather Systems. When I started my virtual career, I had absolutely no idea how to do professional business. Besides that, I had no idea about graphic design and was especially clueless when it came to writing proper computer code. It took several months and many sleepless nights to understand these things and to build up some skills. In October 2007 I started working on the Lindman Weather System and two months later on the Lindman Ultimate



Weather System. These products are very well known in the Second Life® community and have many famous users, such as Pathfinder Linden. There were already weather systems on the market at that time, so I first did some quality research by comparing my weather system's feature list to my competitors'. That way I could find many ways to improve and later fine-tune my product. On top of that, it is simply good practice to listen to feedback from friends and early adopters.

The Product is Ready for Sale - Now What?

You can have the best piece of software available, but if no one knows about it, it won't become a great seller. Given that fact, good marketing is at least as important as the quality of your product, if not more important. With that in mind, I started placing typical Second Life® classifieds (you pay for them once a week, directly to Linden®). Besides that, I explored new marketplaces, such as SLExchange (now XStreetSL) and OnRez (no longer available). These platforms offered an off-grid space to sell and advertise your products and helped to gain even more attention. Again, learning about the proper marketing of your products is the same process as learning about doing business in general: you should either try out different concepts, get inspired by others, or combine both. Big companies often make the mistake of investing crazy amounts of money in their marketing while forgetting about the end-users' needs. I learned about that pretty soon and decided to offer the best support possible. I also call it "shock and awe" support. Even if the first version of your product has some issues, most customers will understand if you show them that you care. It helps to build up a certain reputation, and thereby gain even more customers. They tell their friends, who later tell their friends, and so on. Professionals call this type of marketing "viral marketing," and especially in a virtual world a good reputation can be priceless. Email communication travels at the speed of light, and so does information. On the other hand, bad word of mouth can certainly do some serious harm to one's business. I've had many customers tell me that they only bought my products because of my reputation. Friends usually trust their friends and therefore make decisions more easily and quickly on the basis of friendly advice.
SL / RL - How Metaverse Platforms Influence a Developer's Personal Life

It takes a lot of time to develop products, work on the marketing and later do customer support. If you are a single person trying to get everything done, it's obvious that you will have to cut some time out of your private life. This can be dangerous, as people tend to neglect their real life duties, their friends and other important things. One always has to keep this in mind and stay focused. That way it is possible to enjoy being part of a virtual community and stay successful in real life as well.



Virtual Worlds Spin Faster - The Economy of Second Life® between 2007 and 2009 from a Developer's Point of View

Back in 2007, Second Life® was state of the art. You could read about it in all the different magazines, it was on TV all the time, and everyone was talking about it. You could literally feel the excitement everywhere. I still think that early 2007 until mid-2008 were the best days of Second Life®. The real world crisis and other decisions by the company that runs Second Life® surely had an impact on the economy. Although it's getting slightly better and sales numbers are growing again, I think it needs some new inventions to attract more new customers. However, I still think that Second Life® is the best place to learn a lot about business, marketing and especially yourself. Lindman Design finally became a real life company and is proud to name Second Life® and the users that built up that community as its mentors.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Immersive 3D Environments and Multilinguality: Some Non-Intrusive and Dynamic e-learning-oriented Scenarios based on Textual Information By Samuel Cruz-Lara, Nadia Bellalem, Lotfi Bellalem and Tarik Osswald, Nancy-Université, LORIA / INRIA Nancy Grand-Est

Abstract Virtual worlds may become primary tools for learning many aspects of history, for acquiring new skills, for job assessment, and for many of our most cost-effective and productive forms of collaboration (Metaverse Roadmap Report, 2007). We will present some non-intrusive and dynamic e-learning scenarios based on multilingual textual information within an immersive 3D environment. We refer to these scenarios as non-intrusive because they do not interrupt the user's activities within the immersive 3D environment. Rather, they enrich his/her individual experience. Obviously, these scenarios need to be dynamic, because user interaction occurs mostly in real time. In addition, these non-intrusive and dynamic e-learning-oriented scenarios exemplify how a standardized framework for textual multilingual support associated with an immersive 3D environment may considerably change the way people usually deal with multilingual information and with language learning on the Internet. We would like to illustrate that, in the context of immersive 3D environments, dealing with multilinguality is more than just localization and real-time automatic translation. Finally, the analysis of these non-intrusive and dynamic e-learning scenarios leads us to propose a general architecture allowing immersive 3D environments to deal with multilinguality in the most general and dynamic way possible. Keywords: multilinguality; virtual worlds; standardization; non-intrusive and dynamic scenarios; language learning. This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Immersive 3D Environments and Multilinguality 4

Immersive 3D Environments and Multilinguality: Some Non-Intrusive and Dynamic e-learning-oriented Scenarios based on Textual Information By Samuel Cruz-Lara, Nadia Bellalem, Lotfi Bellalem and Tarik Osswald, Nancy-Université, LORIA / INRIA Nancy Grand-Est

Linguistic information plays an essential role in the management of information, as it bears most of the descriptive content associated with more visual information. Depending on the context, it may be seen as the primary content (text illustrated by pictures or videos), as documentary content for multimedia information, or as one among several possible information components in specific contexts such as interactive multimedia applications. Linguistic information can also appear in various formats: spoken data in an audio or video sequence, implicit data appearing on an image (caption, tags, etc.) or textual information that may be further presented to the user graphically or via a text-to-speech processor. In this context, dealing with multilinguality is crucial for adapting content to specific target users. It has to take into account situations in which the linguistic information contained in a multimedia sequence is already conceived in such a way that it can be adapted on the fly to the linguistic needs of the user. It also has to take into account situations in which the content should be adapted by additional processes before it is presented to the user. The extremely fast evolution of technological development in the sector of Communication and Information Technologies, and in particular in the field of natural language processing, makes the question of standardization particularly acute (Cruz-Lara et al., 2008). The issues related to this standardization are of an industrial, economic and cultural nature.
The scope of activities in localization and translation memory (TM), as well as any type of online multilingual customization (subtitling, iTV, karaoke), is very large, and numerous independent groups are working on these aspects, among others:
• LISA (Localization Industry Standards Association; http://www.lisa.org);
• OASIS (Advancing Open Standards for the Information Society; http://www.oasis-open.org/home/index.php);
• W3C (World Wide Web Consortium; http://www.w3.org);
• ISO (International Organization for Standardization; http://www.iso.org).
Under the guidance of the above-mentioned groups, many formats have been developed. Some of the major formats of specific interest for localization and translation memories are:
• TMX (Translation Memory eXchange; http://www.lisa.org/Translation-Memorye.34.0.html);
• XLIFF (XML Localization Interchange File Format; http://docs.oasis-open.org/xliff/xliff-core/xliff-core.html);
• OAXAL (Open Architecture for XML Authoring and Localization; http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=oaxal);
• ITS (Internationalization Tag Set; http://www.w3.org/TR/its/).



There are many identical requirements for all the formats, irrespective of the differences in final output. Second Life, in common with other virtual world applications, has opened up the potential for users and learners, teachers and trainers, policy makers and decision makers to collaborate easily in immersive 3D environments. Through their presence as an avatar in the immersive space, users can readily feel a sense of control within the environment and more easily engage with experiences as they unfold (De Freitas, 2008). In this paper, we would like to illustrate that, within immersive 3D environments, dealing with multilinguality involves more than just localization and translation issues. The paper is structured as follows: firstly, we define some important concepts such as multilinguality, globalization, localization, internationalization, and automatic translation; secondly, we explain why normalization and standardization are key issues in the framework of multilinguality, and we introduce the Multi Lingual Information Framework - MLIF (ISO CD 24616; http://mlif.loria.fr/); thirdly, we introduce several non-intrusive and dynamic e-learning scenarios that illustrate how multilinguality may be approached within immersive 3D environments. Finally, from the analysis of these scenarios, we propose a general architecture allowing immersive 3D environments to deal with multilinguality in the most general and dynamic way. Multilinguality This section provides some important definitions in the frame of multilinguality, such as localization, internationalization and automatic translation. It should be noted that the scenarios described further on mainly focus on automatic translation. Globalization: Localization and Internationalization As the LISA association explains, globalization can best be thought of as a cycle rather than a single process, as shown in Figure 1.

Figure 1: The Globalization process (Wikimedia)




In this view, the two primary technical processes that comprise globalization - internationalization and localization - are seen as parts of a global whole:
• Internationalization encompasses the planning and preparation stages for a product in which support for global markets is built in by design. This process means that all cultural assumptions are removed and any country- or language-specific content is stored externally to the product so that it can be easily adapted.
• Localization refers to the actual adaptation of the product for a specific market. It includes translation, adaptation of graphics, adoption of local currencies, use of proper forms for dates, addresses, and phone numbers, and many other details, including the physical structure of products in some cases. If these details were not anticipated in the internationalization phase, they must be fixed during localization, adding time and expense to the project. In extreme cases, products that were not internationalized may not even be localizable.
Automatic translation Automatic or machine translation (MT) is a sub-field of computational linguistics that investigates the use of computer software to translate text or speech from one natural language to another. At its most basic level, MT performs simple substitutions of words in one natural language for words in another. Using corpus techniques, more complex translations may be attempted, allowing for better handling of differences in linguistic typology, phrase recognition, and translation of idioms, as well as the isolation of anomalies. Current MT software often allows for customization by domain or profession (such as weather reports), improving output by limiting the scope of allowable substitutions. This technique is particularly effective in domains where formal or formulaic language is used.
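The separation between internationalization and localization described above can be sketched in a few lines of code: all language- and country-specific content is kept outside the program logic, in catalogs keyed by locale, so that supporting a new market only means adding a new catalog. The catalogs and formats below are our own illustration, not taken from the paper.

```python
import datetime

# Locale-specific content lives in external catalogs (internationalization);
# adding a market only means adding a catalog entry (localization).
CATALOGS = {
    "en-US": {"greeting": "Welcome", "date_format": "%m/%d/%Y"},
    "nl-NL": {"greeting": "Welkom", "date_format": "%d-%m-%Y"},
    "fr-FR": {"greeting": "Bienvenue", "date_format": "%d/%m/%Y"},
}

def localize(key: str, locale: str) -> str:
    """Look up a message in the catalog for the given locale."""
    return CATALOGS[locale][key]

def format_date(date: datetime.date, locale: str) -> str:
    """Render a date using the locale's conventional format."""
    return date.strftime(CATALOGS[locale]["date_format"])

d = datetime.date(2009, 10, 5)
print(localize("greeting", "nl-NL"))   # Welkom
print(format_date(d, "nl-NL"))         # 05-10-2009
print(format_date(d, "en-US"))         # 10/05/2009
```

Note that the program logic never mentions any particular language: that is exactly the property internationalization is meant to guarantee.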
It follows then that machine translation of government and legal documents more readily produces usable output than conversation or less standardized text (Wikipedia). Standardization Nowadays, an increasing number of standards are being used within most scientific and technical domains. Translation and localization activities simply cannot remain isolated from this important development. The advantages of normalization are now fully recognized by most professional translators. Indeed, using standards means working with a high level of quality, performance, and reliability within a very important market that is becoming more and more global and thus more and more challenging. Standards combine simplicity and economy by reducing planning and production costs, and by unifying several kinds of terminology (i.e. validated vocabulary) and several kinds of products. At the national and international levels, standards stimulate cooperation between different communities of trade while ensuring interoperability within information exchanges and the reliability of all generated results through standardized methods and procedures; this is why normative information has become fundamental. The scope of research and development within localization and translation memory process development is very large. Within this context, several industrial standards have been developed, including TMX, XLIFF and OLIF. However, when we closely examine these different multilingual textual information representation standards or formats by subject field, we find that they have many overlapping features. All the formats aim at being user-friendly and easy-to-learn, and at reusing existing data or knowledge bases. All these formats work well in the specific field they are designed for, but they lack the synergy that would make them interoperable when using one type of information in a slightly different context. Modelization corresponds to the need to describe and compare existing interchange formats in terms of their informational coverage and the conditions of interoperability between these formats. One of the issues here is to explain how a uniform way of documenting such data takes into account the heterogeneity of both the formats and the descriptors. We also seek to answer the demand for more flexibility in the definition of interchange formats so that any new project may define its own data organization without losing interoperability with existing standards or practices. Such an attempt should lead to more general principles and methods for analyzing existing multilingual databases and mapping them onto any chosen multilingual interchange format. Normalization: a key issue for translation The translator is the most important element of the translation process: thanks to their experience, their knowledge and the automatic translation services they may use, they ensure that the translated document is accurate with respect to the original one (Gómez & Pinto, 2001). A good translation requires not only linguistic awareness but also a good knowledge of the technical or scientific field of the documents that have to be translated. Most texts are addressed to specialists, and ignorance of the specialized expressions can justifiably cause rejection by the reader. In the same way, English technical terms - whose equivalents nevertheless exist in the target language - are often used untranslated (as is the case in French, for example). Obviously, technical translations are not limited to data processing or computer science translations.
Technical translators must have, in addition to their knowledge, a high-quality set of documents. Even the most knowledgeable must continuously seek advice from technical documents (i.e. journals, specialized dictionaries, magazines, databases, etc.). These technical documents constitute a set of essential tools that allow a translator or a translation service to analyze the information located on the covered subject. So translators must evolve constantly by acquiring new information and new experiences, so as to obtain additional linguistic and non-linguistic knowledge related to their domains (Hurtado Albir, 1996). Given that they are high-level models for technical specifications (i.e. symbols, definitions, codes, figures, methodology, etc.), standards constitute a fundamental tool for translation, because they provide abundant, exact, and above all, interoperable and reliable information. Unfortunately, several fields - and especially translation - have numerous and often non-compatible standards. This requires a parallel activity of normalization of these standards in order to ensure, at least, a minimal degree of compatibility. This activity constitutes the main issue of the "Documentation on standards" or the "Structured set of standards" (Pinto, 1993). As the texts to be translated may be related to a wide range of domains, the documentary activities applied to the standards can also guide and direct the translator in the search for standards relative to a given field. The production of standards is sometimes so prolific that it makes it quite difficult to understand exactly what method or procedure has to be used. Documentary techniques bring essential assistance here. The information and dissemination services of the national and international organizations of standardization give easy access to all this information. The standards issued by the organizations of standardization (i.e.
ISO, W3C and LISA) are becoming more and more accepted by the translation services and the



translators to achieve a high level of quality in their services and their products. Standardization thus becomes synonymous with quality for the customers who desire the best possible results. In addition, standards also represent an essential tool for translators and translation services, as they aim at creating normalized terminological and methodological linguistic resources, in order to improve national and international exchanges as well as cooperation within all possible fields. It is also necessary to point out the important standardization efforts of ISO and W3C in the field of information technologies, especially those related to the computerized processing of multilingual data and the Internet. Standardization is present during the whole process of translation. Access to normative information is therefore necessary in the work of translators and translation services. Within this task (i.e. standardization), human translators are assisted not only by resource centers in charge of disseminating the information of the standards worked out by the national and international organizations, but also by other specialized private agencies (i.e. PRODOC; PROfessional DOCumentation; http://www.prodoc.de) whose main objective is to advise customers with regard to standards. In the same way, the Internet allows access to thousands of web sites (i.e. terminology trade, databases, etc.) that provide important information related to using standards, as well as access to several interesting research projects in progress whose objectives are the development of standards and recommendations in the field of the "industry of language". MLIF: the Multi Lingual Information Framework MLIF provides a generic platform for modeling and managing multilingual information in various domains: localization, translation, multimedia, document management, digital libraries, and information or business modeling applications.
MLIF provides a metamodel and a set of generic data categories for various application domains. MLIF also provides strategies for the interoperability and/or linking of models including (but not limited to):
• XLIFF;
• TMX;
• OAXAL;
• SMILText (Synchronized Multimedia Integration Language Text; http://www.w3.org/TR/2008/REC-SMIL3-20081201/smil-text.html);
• ITS.
What is a metamodel? A metamodel does not describe one specific format, but acts as a high level mechanism based on the following elementary notions: structure, information and methodology. A metamodel can be defined as a generic structure shared by all other formats and which breaks out the organization of a specific standard into basic components. A metamodel should be a generic mechanism for representing content within a specific context. Actually, a metamodel summarizes the organization of data. The structuring elements of the metamodel are called “components” and they may be “decorated” with information units. A metamodel should also comprise a flexible specification platform for elementary units. This specification platform should be coupled to a reference set of descriptors that should be used to parameterize specific applications dealing with content.
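The notions above - structuring "components" that can be "decorated" with information units - can be made concrete with a small sketch. The class and field names below are our own illustration of the general idea, not MLIF-normative syntax.

```python
# A generic metamodel node: a tree of components, each of which can be
# decorated with information units (data categories and their values).
class Component:
    def __init__(self, name: str):
        self.name = name
        self.children = []       # nested structuring elements
        self.decorations = {}    # information units attached to this node

    def add(self, child: "Component") -> "Component":
        """Attach a nested component and return it, for chaining."""
        self.children.append(child)
        return child

    def decorate(self, category: str, value: str) -> "Component":
        """Attach an information unit (data category / value pair)."""
        self.decorations[category] = value
        return self

# The same generic structure can describe very different concrete formats:
doc = Component("collection").decorate("sourceLanguage", "en")
unit = doc.add(Component("unit"))
unit.add(Component("segment")).decorate("languageIdentifier", "nl")
```

A specific standard such as TMX or XLIFF would then be one particular instantiation of this tree, with its own component names and its own selection of data categories.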




What is a data category? A metamodel contains several information units related to a given format, which we refer to as “Data Categories”. A selection of data categories can be derived as a subset of a Data Category Registry (DCR; ISO 12620; http://www.isocat.org/). The DCR defines a set of data categories accepted by an ISO committee. The overall goal of the DCR is not to impose a specific set of data categories, but rather to ensure that the semantics of these data categories is well defined and understood. A data category is the generic term that references a concept. There is one and only one identifier for a data category in a DCR. All data categories are represented by a unique set of descriptors. For example, the data category <languageIdentifier> indicates the name of a language, which is described by two-letter [ISO 639-1] or three-letter [ISO 639-2] codes. A Data Category Selection (DCS) is needed in order to define, in combination with a metamodel, the various constraints that apply to a given domain-specific information structure or interchange format. A DCS and a metamodel can represent the organization of an individual application, and the organization of a specific domain. Specifying the Multi Lingual Information Framework Linguistic structures exist in a wide variety of formats ranging from highly organized data (e.g. translation memory) to loosely structured information. The representation of multilingual data is based on the expression of multiple views representing various levels of linguistic information, usually pointing to primary data (e.g. part-of-speech tagging) and sometimes to one another (e.g. references or annotations). The following model identifies a class of document structures that could be used to cover a wide range of multilingual formats, and provides a framework that can be applied using XML.
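To make the <languageIdentifier> data category above concrete, here is a minimal sketch of resolving ISO 639-1 (two-letter) and ISO 639-2 (three-letter) codes to one language concept. The three-entry table is illustrative, not a complete registry.

```python
# One concept per language, referenced either by its ISO 639-1 or its
# ISO 639-2 code. The tiny table below is illustrative only.
ISO_639 = {
    "nl": {"iso639_2": "nld", "name": "Dutch"},
    "fr": {"iso639_2": "fra", "name": "French"},
    "en": {"iso639_2": "eng", "name": "English"},
}

def language_name(code: str) -> str:
    """Resolve a two- or three-letter language identifier to its name."""
    if len(code) == 2:
        return ISO_639[code]["name"]
    for entry in ISO_639.values():
        if entry["iso639_2"] == code:
            return entry["name"]
    raise KeyError(code)

print(language_name("nl"))    # Dutch
print(language_name("fra"))   # French
```

The point of the DCR is precisely that both codes reference the same well-defined concept, so applications exchanging data remain interoperable whichever code set they use.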
All multilingual standards have a rather similar hierarchical structure, but they have, for example, different terms and methods of storing the metadata relevant to them. MLIF is being designed in order to provide a generic structure that can establish a basic foundation for all these standards. From this high-level representation we are able to generate, for example, any specific XML-based format. We can thus ensure interoperability between several standards and the applications that implement them. Figure 2 describes the MLIF metamodel.

Figure 2: MLIF metamodel with related Data Categories




It is made of the following components:
• MLDC (MultiLingual Data Collection) represents a collection of data containing global information and several multilingual units.
• GI (Global Information) represents technical and administrative information applying to the entire multilingual data collection.
• GroupC (Grouping Component) represents a sub-collection of multilingual data having a common origin or purpose within a given project.
• MultiC (Multilingual Component) groups together all variants of a given textual content.
• MonoC (Monolingual Component) represents the part of a multilingual component (MultiC) containing information related to one language.
• HistoC (History Component) is a generic component that allows tracing of modifications in the component it is anchored to (i.e. versioning).
• SegC (Segmentation Component) is a recursive component allowing any level of segmentation for textual information.
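As a hedged sketch of how this component hierarchy could be serialized, the fragment below builds an XML tree whose element names mirror the component names above. MLIF was a committee draft (ISO CD 24616) at the time of writing, so the concrete attribute names here are our own guess at a possible syntax, not the normative one.

```python
import xml.etree.ElementTree as ET

# Build an illustrative MLDC > GroupC > MultiC > MonoC > SegC hierarchy
# for one textual content with three language variants.
mldc = ET.Element("MLDC")
ET.SubElement(mldc, "GI", {"sourceLanguage": "en"})
group = ET.SubElement(mldc, "GroupC")
multic = ET.SubElement(group, "MultiC")
for lang, text in [("en", "Welcome"), ("nl", "Welkom"), ("fr", "Bienvenue")]:
    monoc = ET.SubElement(multic, "MonoC", {"lang": lang})
    ET.SubElement(monoc, "SegC").text = text

print(ET.tostring(mldc, encoding="unicode"))
```

The key design point is visible in the structure itself: one MultiC binds all language variants of the same content together, so a tool can switch or add languages without touching the rest of the tree.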

Any format compliant with this standard may use the MLIF metamodel in two possible ways:
• By fully implementing the MLIF metamodel, starting at the level of the <MLDC> component;
• By specifically embedding MLIF-compliant information within another model, by implementing one of the lower-level MLIF components, namely <GroupC>, <MultiC> or <MonoC>.
Relation with other standards As with TMF (Terminological Mark-up Framework; ISO 16642:2003; http://www.loria.fr/projets/TMF/) used for terminology, MLIF introduces a metamodel in combination with chosen data categories as a means of ensuring interoperability between several multilingual applications and corpora. MLIF deals with multilingual corpora, multilingual fragments, and the translation relations between them. In each domain where MLIF can be used, we may consider a specific granularity of segmentation and description, built on MAF (Morphosyntactic Annotation Framework; ISO DIS 24611; http://lirics.loria.fr/doc_pub/maf.pdf), SynAF (Syntactic Annotation Framework; ISO DIS 24615; http://lirics.loria.fr/doc_pub/SynAF_LREC2006.pdf) and TMF, for morphological description, syntactical annotation and terminological description respectively. Supporting the construction and the interoperability of localization and translation memory (TM) resources, MLIF also provides a metamodel for multilingual content. MLIF does not propose a closed list of description features. Rather, it provides a list of data categories, which is much easier to update and extend. This list represents a point of reference for multilingual information in the context of various application scenarios. However, MLIF not only describes elementary linguistic segments (i.e. sentence, syntactical component, word, part of speech, etc.), but it may also be used to represent document structure (i.e. title, abstract, paragraph, section, etc.). In addition, MLIF allows using external and internal links (i.e. annotations and references).



The MLIF is being designed with the objective of providing a common platform for all the existing tools developed by the groups listed in the introduction section. It promotes the use of a common framework for the future development of several different formats: TMX, XLIFF, etc. MLIF can be considered as a parent for all these formats, since all of them deal with multilingual data expressed in the form of segments or text units. They can all be stored, manipulated and translated in a similar manner. E-learning scenarios: the Language Academy We would now like to present some non-intrusive and dynamic e-learning scenarios based on multilingual textual information within an immersive 3D environment. We refer to these scenarios as non-intrusive because they do not interrupt the user's activities within the immersive 3D environment. Rather, they enrich his/her individual experience. Obviously, these scenarios need to be dynamic, because user interaction occurs mostly in real-time situations. We would like to illustrate that, in the context of immersive 3D environments, dealing with multilinguality is more than just localization and real-time automatic translation. The General Scenario Pierre is a French student who speaks English fluently but recently moved to the Netherlands. He therefore decides to learn Dutch in various ways. The following scenarios represent the phases of Pierre's use of virtual worlds to learn Dutch. Scenario 1: Pierre alone, at home This scenario represents the already existing solutions in the language-learning domain. At the beginning, Pierre learned the grammatical ground knowledge using book-based learning methods or through group or private Dutch lessons given by a teacher. After acquiring strong ground knowledge, Pierre decides to go further and to learn this new language by watching Dutch movies with French or English subtitles.
His media player with multilingual support enables him to pause at any time in order to obtain the Dutch translation of the displayed subtitles. He can also have the audio part corresponding to one subtitle played back, or he can search the web for definitions and synonyms of a single word, using specific web services (online databases like WordNet: http://wordnet.princeton.edu/ or ConceptNet: http://conceptnet.media.mit.edu/). If the Dutch subtitles are not available, a translation web service may generate them from the French or English ones. It would be interesting to be able to click directly on the subtitles (in order to get synonyms, definitions or translations) without needing to use a web browser, which would considerably improve interactivity. This functionality is not currently available but is conceivable. Pierre can also get information about Dutch culture thanks to online resources like Wikipedia, for example. This possibility is particularly useful for understanding some subtle scenes.




Nevertheless, this learning method has several weaknesses:
• Pierre only receives information, without any opportunity to use his knowledge for speaking or writing. It is therefore one-way learning.
• The resources (movies and subtitles) may be limited, on the one hand by their cost, and on the other hand by Pierre's cinematographic tastes.
That is why Pierre will take the plunge and look into virtual worlds, in order to discover more interactive and more immersive learning methods. Scenario 2: Pierre alone, in a virtual world Pierre now discovers virtual worlds. He hears about the very accurate modeling of the historic museum of Amsterdam and decides to teleport there in order to take a virtual tour. On his arrival, Pierre is offered a flying carpet to take him through the museum. The flying carpet is actually a robot which communicates through the chat interface in the virtual world (see Figure 3). It can talk and understand what Pierre says. Pierre will now virtually visit the museum, riding and directing the flying carpet. At first, Pierre asks the carpet in Dutch for a brief guided tour. The sentence written by Pierre is analyzed, and the carpet then flies through some key places in the museum. At each place, the carpet gives Pierre a description of where he is or of some of the works. Pierre can also ask the carpet to fly to a given place. The carpet then computes the shortest way to get there. It is a means of conveyance which is more efficient than walking and which enables Pierre to use his knowledge of Dutch in order to express himself. If he wants to, Pierre can ask, in Dutch, for more accurate information about the works he discovers. The carpet then extracts the date, the author or the story of the work from the available data stored on a web server.

Figure 3: The flying carpet in Second Life

While visiting the museum, Pierre can also click on any work and get its complete description. All the textual information which Pierre gets is clickable. So, he can get translations of any word group, or get synonyms and definitions of a word.




During Pierre's visit, the server memorized the works about which Pierre asked for information. At the end, Pierre is offered a quiz based on those works, which is also written in Dutch. Through the museum visit, Pierre can start expressing himself in Dutch and learn a lot about Dutch culture. These two elements are essential for language learning. Scenario 3: Pierre participating in a group to learn Dutch, in a virtual world Pierre is now on an island in the virtual world where conferences are held in Dutch. In particular, there are many documentary movies about the Netherlands, presented directly in the virtual world by their respective directors. Many people from several countries who wish to learn Dutch meet on this island in order to improve their skills. In fact, the directors are native Dutch speakers and they can talk in the virtual world using VoIP technology. On the server, a software application analyzes the voice information and transcribes it into text format for the avatars who are attending the conference. In this way, the avatars may read the speech and listen to it at the same time, so that they can more easily understand and assimilate the Dutch language. Moreover, once the speech is transcribed into text, it may be translated by an online tool into each avatar's language. As a consequence, one speech may be listened to in Dutch and read in various languages. As before, it is possible to click on one word to get synonyms, definitions or translations. The attendees at a conference may also ask the speaker questions if they virtually put up their hands. They may do so in different ways: through the chat-based communication system or directly with VoIP. In order to practice, they often prefer expressing themselves in Dutch.
Moreover, the same transcription and translation system as before is available to the speaker (the movie director), so that he can understand questions asked in languages other than Dutch. Figure 4 shows how Pierre may see such a conference.

Figure 4: Virtual conference with voice transcription and translation (Wikimedia, Wikipedia)
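The transcription-and-translation pipeline of this scenario can be sketched as follows: one spoken Dutch utterance is transcribed once, then translated into each attendee's language. The transcription and translation functions below are stand-ins; a real system would call speech-to-text and machine-translation web services.

```python
def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a speech-to-text service.
    return "Welkom bij de conferentie"

def translate(text: str, target_lang: str) -> str:
    # Stand-in for an online MT service; a tiny fixed table for the demo.
    table = {
        ("Welkom bij de conferentie", "fr"): "Bienvenue à la conférence",
        ("Welkom bij de conferentie", "en"): "Welcome to the conference",
    }
    return table.get((text, target_lang), text)

def broadcast(audio_chunk: bytes, attendees: dict) -> dict:
    """Return per-avatar subtitle text; Dutch learners keep the original."""
    dutch = transcribe(audio_chunk)
    return {avatar: dutch if lang == "nl" else translate(dutch, lang)
            for avatar, lang in attendees.items()}

subs = broadcast(b"...", {"Pierre": "fr", "Anna": "nl", "Tom": "en"})
```

Because the Dutch transcription is computed once and shared, the per-attendee cost is only the translation step, which matters in a real-time conference setting.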




Finally, once again in order to facilitate language learning, the Dutch text may be analyzed by a grammar parser. In this way, a user may, for example, configure their display so that all the verbs are shown in a red font color. Scenario 4: Pierre with Dutch people, in a virtual world Pierre has now improved his Dutch a lot. He knows Dutch culture much better and has heard many native Dutch speakers talking. Nevertheless, his skills are not perfect yet and he now wishes to talk in real time with Dutch people. That is why he teleports to one of the most popular virtual islands in the Netherlands. He can now talk through the chat interface with native Dutch speakers. The virtual world makes the situation less intimidating than in the real world, so Pierre feels more comfortable during the conversations. He does not feel as nervous as he usually is in real life, particularly when talking in a foreign language. The virtual world also makes chat-based conversation more immersive than the usual chat interfaces, as it offers the users a common environmental context. Pierre is going to meet Dutch people, and suggests that they watch a movie together in a virtual cinema. Pierre may display on his screen the subtitles in a selected language, whereas his virtual friends do not need them. He can also talk about his impressions of the movie or about cultural differences. In that case, the displayed subtitles are clickable. But Pierre's knowledge of Dutch is still not good enough to understand everything he hears or reads, and he still needs to click on some words in order to get synonyms, translations or definitions in real time. Sometimes, he also enables the syntax coloration, for example in order to highlight the verbs displayed in the chat window of his local user interface (see Figure 5).

Figure 5: Red font color for the verbs displayed in the chat interface

If he wants to, Pierre can also use VoIP technologies to talk with the avatars around him, using the same transcription technology as in Scenario 3. Although he is a bit shy, Pierre talks easily and understands the conversations better thanks to the textual transcription.



Every time Pierre clicks on a word, the system memorizes the searched information. Pierre can thus come back later to the words and expressions that were a bit difficult for him. This is particularly useful if Pierre has not had time to consult the information right away, and it leaves him more time to stay focused on the conversation in the virtual world. The sentences containing the searched words or expressions are also kept in memory, so that Pierre can trace his searches back to their original context and have a usage example for each word.

Required system components

In this section, we explain the different technologies required for each scenario presented above. Some components cut across the described scenarios and implement their own mechanisms, and some others have already been defined before. To make things easier to follow, we first list the required system components and then associate them with each scenario in the following section. Figure 6 shows the relationships between all the system components that we explain further below.
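The search-history mechanism described above can be sketched as a small in-memory store that keeps each looked-up word together with the sentence it appeared in. This is only an illustration; the class and field names below are hypothetical and not taken from any described implementation.

```python
class SearchHistory:
    """Remembers every clicked word together with the sentence it
    appeared in, so a learner can revisit it later in context."""

    def __init__(self):
        self.entries = []

    def record(self, word, info, sentence):
        # 'info' is whatever the lookup service returned (definition,
        # translation, synonyms); 'sentence' preserves the usage context.
        self.entries.append({"word": word, "info": info, "sentence": sentence})

    def review(self, word):
        # All earlier lookups of this word, each with its usage example.
        return [e for e in self.entries if e["word"] == word]


history = SearchHistory()
history.record("fiets", "bicycle (noun)", "Ik ga met de fiets naar school.")
later = history.review("fiets")
print(later[0]["sentence"])  # the original context is preserved
```

Deferring the review step is what lets the learner stay focused on the live conversation while still collecting material to study afterwards.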

Figure 6: Relationship between the system components

A multimedia player

The multimedia player should support video files with synchronized textual subtitles in various languages. As each word or sentence in the subtitles should be clickable, the corresponding lexical units have to be representable in the text files containing the subtitles. The player links the subtitles with the corresponding audio segments (using their time signatures), so that the user can ask to hear the audio corresponding to a given subtitle. The player can display and control subtitle streams so that the subtitle language can be changed while the video is running. That is why the subtitles should be linked together, in order to be able to switch from one language to another. This functionality is enabled by MLIF.
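The player's linkage of time signatures to language-equivalent text can be sketched with a minimal in-memory structure. This is only an illustration of the idea, not the MLIF format itself; the names and sample text are invented.

```python
from dataclasses import dataclass, field


@dataclass
class SubtitleUnit:
    """One subtitle line with its time signature and per-language text."""
    start_ms: int  # start of the matching audio segment
    end_ms: int    # end of the matching audio segment
    text: dict = field(default_factory=dict)  # language code -> subtitle text

    def in_language(self, lang: str) -> str:
        # Switching language is a dictionary lookup, so it can happen
        # while the video keeps playing.
        return self.text.get(lang, self.text.get("en", ""))


# A clickable subtitle: each word can later be sent to a lookup service.
unit = SubtitleUnit(
    start_ms=12_000,
    end_ms=15_500,
    text={"nl": "Goedemorgen allemaal", "fr": "Bonjour à tous"},
)
print(unit.in_language("nl"))  # Goedemorgen allemaal
```

Because each unit carries both its time span and all language variants, the same record serves subtitle display, language switching, and replaying the matching audio segment.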



As a consequence, it is necessary to have a data structure which represents both time information and language equivalences.

A web service linked to external resources

This service uses external resources which already exist on the web, allowing one to work easily with up-to-date information. It relies on services like WordNet (definitions, synonyms), Reverso (word, sentence or paragraph translations) or Wikipedia (cultural information). The way those services are structured should make the needed information easy to extract. The web service should also be able to provide information indexed not only to nouns, but also to cultural singularities.

A virtual guide robot

This robot has the appearance of a flying carpet and moves within a marked network according to the visitor's orders. The markers (nodes of the graph representing the network) need to have names in various languages, including Dutch in our example. The markers have to be linked together in order to represent the graph edges. The robot program communicates with a web service which includes a dynamic and multilingual grammar parser (depending on the marker names), in order to perform accurate syntactic and semantic analysis. The implemented grammar manages several kinds of orders: moves and information inquiries. The least-cost path computation, the management of the links between markers, the marker names in various languages, the information display, and the syntactic and semantic analysis are offloaded to a PHP/MySQL server working as a web service in order to make the processing faster. This means that the virtual world should also be able to send HTTP requests. As for information inquiries (date, type, author), the textual data describing the works located in the virtual world should be structured in such a way that the required information may be extracted easily. To do so, the virtual world's textual data has to be formalized.
The resources to fill in the descriptions might also be taken from online encyclopedias if the relevant information is standardized too. It should also be possible to obtain information about one work by clicking on it. This information may be displayed directly in a web browser but also in the graphical virtual assistant detailed further. The information elements may be sorted according to the inquiries of the visitor and that is why they should also be standardized. The data describing the works is stored on a remote server which is accessible through the web service.
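The least-cost path computation mentioned above is a standard shortest-path problem over the marker graph. A minimal sketch, in Python rather than the PHP used server-side, with purely hypothetical marker names:

```python
import heapq


def least_cost_path(edges, start, goal):
    """Dijkstra's algorithm over the marker graph. 'edges' maps a marker
    name to a list of (neighbour, cost) pairs."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step in edges.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + step, neighbour, path + [neighbour]))
    return None  # goal unreachable from start


# Hypothetical museum markers; in the paper's design each marker name
# would exist in several languages.
edges = {
    "entrance": [("hall", 1), ("garden", 4)],
    "hall": [("painting_room", 2), ("garden", 1)],
    "garden": [("painting_room", 1)],
    "painting_room": [],
}
print(least_cost_path(edges, "entrance", "painting_room"))
```

The robot would follow the returned marker sequence, announcing each marker's name in the visitor's chosen language.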




A voice recognition application

This module transcribes talks into text, which is then displayed to the listeners through the chat interface window or the graphical assistant. Matching voice and text allows several kinds of learning:

• Phonetics: being able to identify the sounds of a language (perception and discrimination);
• Lexical: learning word spellings;
• Morphosyntax: agreement of adjectives, nouns and past participles (depending on the language);
• Semantics: understanding words according to their context.

In order to get a better voice recognition rate, the conferences will be thematic, and tools like ViaVoice, Dragon NaturallySpeaking and FreeSpeech will be used depending on each one's specialty.

A graphical virtual assistant

This graphical assistant, dedicated to language learning, is an interface integrated into the virtual world. It is available to every user who would like to use the functionalities of the Language Academy. It encompasses the following modules:

• A movie recorder (in order to watch the virtual conferences again);
• A web browser or the like, to display the information relative to the works in a museum, the results of a search when clicking on a word, and the search history (stored in the MLIF format in the database);
• A voice recognition module – which may be an external stand-alone component – in order to transcribe into text the talks of the speakers (Scenario 3) and of the Dutch people Pierre talks to (Scenario 4).

An interactive chat interface

The chat interface of the virtual world has to be flexible enough for the user to be able to configure syntax coloring and to click on a word if he wants to. Syntax coloring can be applied, as the user wishes, in the chat window and/or in the graphical assistant. All of the obtained text data is clickable, in order to get information about the selected words or groups of words. Besides, a clicked word may be put back into its context (the sentence).
Therefore, the granularity of the textual information (paragraphs, sentences, words, etc.) must be included in the standard. Since the chat-based talks have a time signature, it is also important to be able to match time information to each text element.




Discussion

Relationship between system components and scenarios

Figure 7 shows the system components involved in each scenario. A checked box means that the corresponding scenario involves the corresponding system component.

System Components               Scenario 1   Scenario 2   Scenario 3   Scenario 4
Multimedia player                    ✓            -            -            -
Web service                          ✓            ✓            -            ✓
Virtual guide robot                  -            ✓            -            -
Voice recognition application        -            -            ✓            ✓
Graphical virtual assistant          -            ✓            ✓            ✓
Interactive chat interface           -            ✓            ✓            ✓

Figure 7: Relationship between system components and scenarios

Theoretical and technical limits

The functionalities of Scenario 3 and Scenario 4 rely mainly on very good voice recognition quality. Current systems are mostly based on the use of linguistic corpora to improve recognition quality. This means that, within Scenario 3, every conference should be associated with an available corpus. This is harder to achieve in Scenario 4, as the talk themes cannot be defined in advance. Moreover, the translation of the transcribed talk into another language depends mainly on the voice recognition quality. The loss of accuracy caused by automatic text translation should also be taken into account. Therefore, depending on the vocabulary and expressions used, the results of the transcription and translation functionality may be more or less accurate. However, these are not theoretical limitations: research on this kind of application is advancing, and that is why we expect better results in the coming years. Finally, all the scenarios are based on a certain representation of multilingual textual information. Therefore, the emergence of new multilingual textual information standards is necessary. The MLIF standard proposal has been developed in this vein in order to standardize the representation of multilingual textual information, and it fully satisfies the needs of the scenarios presented.




Conclusion

We have proposed some non-intrusive and dynamic e-learning-oriented scenarios related to multilingual textual information within an immersive 3D environment. We have shown that in the context of immersive 3D environments, dealing with multilinguality is not only a matter of localization and real-time automatic translation. Immersive 3D environments may really change the perception that users have of multilingual issues, in particular in the field of language learning. The analysis of these scenarios leads us to propose a general architecture allowing immersive 3D environments to deal with multilinguality in the most general and dynamic way possible. It should be noted that several scenarios have already been implemented (at least partially) in Second Life and in OpenSim. All this work is being performed in the framework of the ITEA2 project METAVERSE1 (ITEA2 07016). These scenarios have also led us to propose a list of minimal multilingual requirements to ISO's MPEG-V working group. The requirements that must be fulfilled are:

• Multilinguality: textual information must be managed, accessed, and represented by taking multilingual aspects into account. Key technologies and standards here include Unicode and XML.
• Multilingual links: it must be possible to set up links between documents using different languages. This will allow us to represent localization and translation issues. Key technologies and standards here include TMX, XLIFF, MLIF, OAXAL, and ITS.
• Linguistic granularity: it must be possible to represent textual information at several levels: paragraphs, sentences, words, syllables, part-of-speech, etc. Key technologies and standards here are those of ISO's TC37/SC4 "Language Resource Management".
• Time issues: it must be possible to associate time with multilingual textual information. A classical example in this domain is captions or subtitles related to video frames. Key technologies and standards here include W3C's SMILText and Timed Text, SRT, etc.
• External links: it must be possible to set up links between multilingual textual information and external knowledge databases such as Wikipedia, WordNet, ConceptNet, etc. Obviously, external links may also be used to translate textual information in real time. Key technologies here are represented by on-line translation tools such as Yahoo's Babelfish and Google's Translator.




Bibliography

Automatic translation. (n.d.). In Wikipedia, the free encyclopedia. Retrieved from http://www.wikipedia.org

Bulterman, D., Mullender, S., & Cruz-Lara, S. (2008, December 1). SMIL 3.0 smilText. Synchronized Multimedia Integration Language (SMIL 3.0). Retrieved from http://www.w3.org/TR/2008/REC-SMIL3-20081201/smil-text.html

Cruz-Lara, S., Bellalem, N., Ducret, J., & Krammer, I. (2008). Standardising the management and the representation of multilingual data: The Multi Lingual Information Framework. In E. Yuste (Ed.), Topics in Language Resources for Translation and Localisation. John Benjamins.

DCR. ISOcat – Data Category Registry. (n.d.). Retrieved from http://www.isocat.org/

De Freitas, S. (2008). Serious virtual worlds: A scoping study. The Serious Games Institute.

Declerck, T. (2006). SynAF: Towards a standard for syntactic annotation. Retrieved from http://lirics.loria.fr/doc_pub/SynAF_LREC2006.pdf

Gómez, C., & Pinto, M. (2001). La normalisation au service du traducteur. META, XLVI(3), 564.

Hurtado Albir, A. (1996). La enseñanza de la traducción. Castellón: Universidad Jaume I.

ISO. ISO – International Organization for Standardization. (n.d.). Retrieved from http://www.iso.org

ITS. Internationalization Tag Set (ITS) Version 1.0. (n.d.). Retrieved from http://www.w3.org/TR/its/

Language Resource Management – Morpho-syntactic Annotation Framework (MAF). (2005, August 22). Retrieved from http://lirics.loria.fr/doc_pub/maf.pdf

LISA. LISA: Homepage. (n.d.). Retrieved from http://www.lisa.org

MLIF. MLIF: MultiLingual Information Framework. (n.d.). Retrieved from http://mlif.loria.fr/

OASIS. OASIS: Advancing open standards for the global information society. (n.d.). Retrieved from http://www.oasis-open.org/home/index.php

OAXAL. OASIS Open Architecture for XML Authoring and Localization Reference Model (OAXAL). (n.d.). Retrieved from http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=oaxal

Pinto, M. (1993). Análisis documental: Fundamentos y procedimientos (2a ed. rev. y aum.). Madrid: Eudema.

Prinsengracht, Amsterdam. (2008, February 17). Retrieved from http://commons.wikimedia.org/wiki/File:Prinsengracht_Amsterdam.jpg

Savourel, Y., Reid, J., Jewtushenko, T., & Raya, R. M. (2008, February 1). XLIFF 1.2 specification. Retrieved from http://docs.oasis-open.org/xliff/xliff-core/xliff-core.html

The globalization process. (2008, October). Retrieved from http://commons.wikimedia.org/wiki/File:Globalisationchart.jpg

The Netherlands. (n.d.). In Wikipedia, the free encyclopedia. Retrieved from http://www.wikipedia.org

TMF. TMF webpage. (n.d.). Retrieved from http://www.loria.fr/projets/TMF/

TMX. LISA: Translation Memory eXchange (TMX). (n.d.). Retrieved from http://www.lisa.org/Translation-Memory-e.34.0.html

W3C. World Wide Web Consortium – Web standards. (n.d.). Retrieved from http://www.w3.org



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Payback of Mining Activities Within Entropia Universe By Markus Falk, Inova Q Inc., Daniel M. Besemann, Hamline University and James M. Bosson, Active Capital Management Ltd.

Abstract

In subscription-based virtual worlds the fee a user pays for participation is clear. However, in free-to-play worlds in which the provider's revenue is generated via micropayments made by participants using a real in-world currency, lack of transparency in the underlying mechanisms can make it difficult for a user to gauge the service fee being paid. This paper studies the payback of mining activities within the virtual world Entropia Universe, with an aim to determine the cost of participation in this activity for users adopting a range of play styles. Entropia Universe provider MindArk estimates that the normal service fee for an active user averages $1 per hour, and we compare our findings to this figure. We employ a statistical approach, based on a large number of data points acquired by two Entropia Universe avatars, to develop a theoretical mining-returns model. This model was used to make predictions about general cost and profitability for Entropia Universe miners, resulting in an estimated payout percentage of at least 91%. Thus, over a sufficiently long period a miner can expect the provider to return at least 91 cents for every dollar invested. We also consider the effects of player-to-player transactions on a miner's real return rate. Our methods could be used to analyse the economy of other activities within Entropia Universe and possibly activities in other virtual worlds.

Keywords: Entropia Universe; virtual economy; participation cost; real cash economy; virtual world; MMOG.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Payback of Mining Activities

Payback of Mining Activities Within Entropia Universe By Markus Falk, Inova Q Inc., Daniel M. Besemann, Hamline University and James M. Bosson, Active Capital Management Ltd.

The cost of participation in a subscription-based virtual world (Bell, 2008) is clear to the user. The fees the user pays to the provider are transparent and the user understands how much he or she pays for the service. In virtual worlds that do not rely on subscription fees to generate revenue, the cost of participation can be less transparent. This paper examines the cost of participation in one such virtual world: Entropia Universe.

Entropia Universe is a virtual world that attempts to combine the gaming focus of a traditional Massively Multiplayer Online Role-Playing Game (MMORPG), such as World of Warcraft, with the social and commerce foci of virtual environments such as Second Life. Developers MindArk describe it as a "3D Virtual Environment for Online Entertainment, Social Networking and E-commerce using a real cash economy" (MindArk, 2008a). Entropia Universe is set in the distant future on a planet called Calypso, the first planet successfully colonised by humanity. Participants control custom-built avatars and can explore two continents on the virtual planet, visit orbiting space stations, hunt alien creatures, mine for resources, and craft tools, weapons, armour, and clothes. Social interaction and trade are important, and the top societies feature avatars worth tens, even hundreds, of thousands of US dollars. Avatars exchange items for prices that depend on the item's base-value (the price for which the developers will buy the item), use-value (how effective or useful it is), and exchange-value (how rare it is, and its aesthetic appeal or value as a status symbol), resulting in an in-world turnover of over $400M in 2007 (MindArk, 2008b).

Entropia Universe began development under its original name, Project Entropia, in 1995 and was commercially launched by Swedish developers MindArk PE AB in 2003 (MindArk, 2008c). It has over 800,000 inhabitants (MindArk, 2008d), although the majority of those are not active enough to have any impact on the economy.
The virtual world is free to join and has no subscription fees. It features an in-world currency called the Project Entropia Dollar (PED) that is linked to the US dollar at a fixed exchange rate of 10 PED to $1, and revenue is generated by MindArk via participants making repeated micropayments (MindArk, 2008e) as they engage in the various activities available to them. MindArk describes Entropia Universe as a virtual universe, rather than a virtual world, and views the product as an expandable platform upon which third-party developers can construct and maintain worlds of their own that exist within the same financial structure. A number of such worlds are in development at the time of writing and may not have the same gaming focus as Calypso, possibly developing more upon the social networking and e-commerce possibilities offered by the platform (Beck & Zaas, 2008; Choudhury & Behrmann, 2008; http://www.nextisland.com; www.rocktropia.com). For the purposes of this paper, we consider only Entropia Universe in its form at the time of writing, that is, the MMORPG based around planet Calypso, developed and maintained by MindArk and its subsidiaries (MindArk, 2009).




On Calypso participants engage in RPG-type gaming activities in which each action has a small but real monetary cost, in the hope of receiving loot they can sell back to the provider or trade with other participants. Revenue is generated by MindArk in the form of micropayments from these activities, at a rate they estimate to be $1 per hour (or $0.5 to $1.5) for the average user (MindArk, 2008d; MindArk, 2009a).

To go hunting alien creatures, for instance, an inhabitant would need a weapon, ammunition, armour, and healing tools. All of these must be bought for PED, either from MindArk at base-value (basic items, via in-world terminals) or from other participants at base-value plus a negotiable markup (more advanced items). Any item can be sold back to MindArk for its base-value at any point. Whenever an item is used, a small cost is incurred and the base-value of the item decays. Thus if a participant uses a weapon its base-value will reduce, and when a participant gets hit by a creature any armour protecting against the attack will lose value. Eventually items become unusable and must either be repaired (in exchange for PED) or replaced. When a participant kills a creature it may yield loot with a real monetary value that can be traded with other participants, possibly for more than its base-value.

Whilst engaging in activities such as hunting creatures, the participant's avatar will also gain skills. These enable the participant to perform actions more efficiently and to use higher-level, more effective tools and weapons. Skills can also be traded with other participants in exchange for currency.

A key part of the economy is based on the supply and demand of items within Entropia Universe. Whilst all items have an assigned base-value, an item's actual value may be much greater if other participants are willing to pay the owner more for it.
Item valuations are largely based on the achievement, social, and immersive player motivations described by Yee (2006) and applied by Manninen and Kujanpää (2007). In the Entropia Universe specifically, the value of an item depends upon factors such as how useful it is (a high damage weapon is likely to be more valuable than a low damage weapon), how efficient it is (a weapon that is cheaper to use is likely to be more valuable than an otherwise equivalent weapon), its availability, and its value as a status item. All are important in determining the valuation. In-demand clothes, which have exchange-value but no real use-value, will generally sell for less than in-demand tools and weapons that serve a purpose. For instance, the rarest and most appealing items of clothing trade for hundreds of USD whilst the rarest, most powerful weapons and tools regularly change hands for tens of thousands of USD. When a participant engages in an activity there are generally three costs to consider–the system-generated decay of his equipment as he uses it, any markup he has paid to other participants for that equipment, and any tax he must pay on his finds. If the participant hunts or mines on land owned by another participant he will pay the owner a proportion of his finds. This results in a given percentage that is removed by the system from the base-value of any find and passed over to the landowner. He has three forms of return to consider–the base-value of his loot (the price for which he can sell it back to the system), any markup he could potentially make by selling it on to another participant, and the value of the skills he has gained through performing the activity. In terms of the service fee the user is paying to the provider, only the difference between the base-value expenditure and the base-value payout is important. All other returns or costs can be considered as trades with other participants. 
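The decomposition above implies that, for the purposes of this paper, the service fee reduces to a single difference of base-value flows; everything else is a player-to-player transfer. A minimal sketch of that accounting, with hypothetical session amounts:

```python
def service_fee(base_value_spent, base_value_returned):
    """Fee paid to the provider: only base-value flows count.
    Markup paid to or received from other players, and land taxes,
    are player-to-player transfers, not provider revenue."""
    return base_value_spent - base_value_returned


# Hypothetical hunting session, amounts in PED (10 PED = 1 USD):
decay_and_ammo = 120.0   # base-value consumed by weapon, armour, ammo decay
loot_base_value = 110.0  # what the system would pay for the loot
fee_ped = service_fee(decay_and_ammo, loot_base_value)
print(fee_ped / 10)  # fee in USD: 1.0
```

In this invented example the participant cycled 120 PED and got 110 PED of base-value back, so the provider kept 10 PED ($1), matching the advertised order of magnitude for an hour of play.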
We study the system returns from one of the primary activities in Entropia Universe, mining for resources, with a view to determining how the result compares to the advertised average service fee of $1 per hour.



Mining

Figure 1 is a visual representation of the mining process. In order to mine, a participant requires a tool called a finder, some mining probes or bombs, and another tool called an extractor. Additionally, the finder can be equipped with a mining amplifier, which decays on its own and serves as a loot multiplier. There are two types of mining activity: enmatter and ore mining. Enmatter mining requires a probe with a base-value of 0.5 PED, whereas ore mining uses a bomb with a base-value of 1 PED.

Figure 1: The mining process. A finder (eMINE OFS) is equipped and used with bombs (49 remaining) in the inventory (1). If a claim is found (2), a resource deed (3, 4) is placed in inventory. The finder points the avatar in the direction of the claim rod (5). An extractor is equipped and used on the claim rod (6). Each use of the extractor results in a stack of resources in inventory, until the claim is emptied (7).

At the chosen location, the finder is equipped and a probe (bomb) is released into the ground. The probe (bomb) searches within a certain radius before being expended and may or may not find a resource deposit. If a deposit is found, it must be extracted using the extractor.



The cost of the activity is the decay of the finder and extractor, the expenditure of the probe (bomb), and the decay of the amplifier (if used), with the vast majority of the cost residing in the probe (bomb) and amplifier. A single find consists of a deposit of one resource type expressed in units. Each resource type has a base unit with a base-value (for instance, Gold is found as Gold stones, each stone having a base-value of 1 PED, and any number of stones can form a deposit). A deposit is found at a given depth (rarer resources tend to be found at lower depths) and has a given size. Deposit sizes are highly variable and can range from around 0.3 PED to tens, or extremely rarely even hundreds, of thousands of PED. A find worth over 50 PED results in a fanfare and an announcement in global chat. A find that is amongst the hundred largest of the day is also entered into a Hall of Fame. The resources found can be sold back to the provider or to other participants, or used to craft items which can then be cycled through the economy.

We perform a statistical analysis of mining returns and generate a view of the fee a miner pays the provider during the course of the mining activities. A model of returns consistent with our data sets is generated in order to simulate wide-scale mining activity and obtain a view of how returns and fees look across the general mining community. Finally, we make some observations about how participants engaging in mining activities actually fare, after considering how markup on their finds could affect their results.

Data and Methodology

Data Collection

Two avatars collected data for this study and recorded a total of 4,911 finds out of 18,086 attempts ("drops"). They began data collection independently and became aware of each other's work halfway through the acquisition process. Most data were collected between October 2008 and January 2009.
Avatar A exclusively used probes on his mining runs, while Avatar B used both bombs and probes. When a resource was found, both players recorded the resource type, the base-value of the claim in PED, the amplifier used (if any), and the taxation applied (if any). These data were then compiled in spreadsheets for further analysis. Avatar A also recorded the finder and extractor used and the find rate (percentage of dropped bombs/probes that found a resource) for each run, along with other data not relevant to this work. Avatar B did not initially record finder or find rate data, but did begin collecting find rate data part-way through data collection. He then continued to collect find rate data after ceasing to record claim base-values, in order to provide a meaningful find rate comparison between the two avatars. Avatar B used a limited number of finders (with similar properties), so an estimate of finder costs can be made. To estimate drilling costs, a small dataset was provided by a third avatar (Avatar C) containing 152 drilling attempts, using the least-decaying extractor available, on enmatter finds with a base-value of 0.01 PED.

Loot Values

There are many ways to visualize individual loots. One such way is with the survival function. For a given loot value (x-axis), the survival function shows the probability (y-axis) that a loot larger than the given value will be obtained. The survival function has a value of 1



(100%) for the smallest loot, and a value of 0 (0%) for the largest loot. To provide a non-parametric estimate of the survival function, the Kaplan-Meier estimator (Kaplan & Meier, 1958) was used. In order to combine finds from enmatter and ore mining, taxed and untaxed finds, as well as amped and unamped finds, loot has been standardized accordingly. Figure 2 shows the survival function for low base-value resources found without using an amplifier, comparing enmatter and ore finds before and after standardization.

Loot Classes

A number of conclusions can be drawn from Figure 2. First, loot is subdivided into classes. Each loot class has a fixed width, and there are gaps between loot classes. Second, the linearity of the survival function for each class means that the distribution within a loot class is uniform, i.e., there is an equal chance of looting between 0.50 PED and 0.80 PED of crude oil when loot comes from class 1. Third, the probability of receiving a class 1 find is between 0.4 and 0.5 for both enmatters and ores, since the survival function for class 1 extends from 1.0 to between 0.6 and 0.5 (i.e., 1.0 − 0.6 = 0.4). Analogous conclusions can be made for the other classes. Fourth, the loot classes for ores are, within experimental uncertainty, twice the value of the enmatter classes. This doubling of the loot class value is due to the cost per drop: enmatter probes cost 0.5 PED, while ore bombs cost 1.0 PED. Similarly, the effects of a mining amplifier and of taxation can be accounted for, so that observed loot can be standardized into one combined dataset.

Figure 2: Kaplan-Meier estimate of survival function for mining loot according to resource type before and after standardization. Estimated survival functions on a log3 scale for untaxed and unamped enmatter loot in PED (n = 540, green line) as well as observed and standardized ore loot (n = 566, observed - blue line, standardized - red line). Enmatter loot is significantly different from ore loot before standardization (p < .001, Log-Rank test) but this difference disappears after standardization (p = .943 Log-Rank test). Similarly, it can be shown that the effect of taxation or the use of amplifiers disappears after standardization.
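For fully observed (uncensored) data like these loot records, the Kaplan-Meier estimate reduces to one minus the empirical distribution function. A minimal sketch of that special case, with invented toy values rather than the paper's data:

```python
def survival_function(loots):
    """Empirical survival function S(x) = P(loot > x).
    With no censoring, the Kaplan-Meier estimate reduces to this."""
    loots = sorted(loots)
    n = len(loots)
    # Step function: at each observed value, survival drops by 1/n.
    return [(x, (n - i - 1) / n) for i, x in enumerate(loots)]


# Toy standardized loot values in PED (illustrative only):
loots = [0.55, 0.62, 0.70, 0.78, 1.7, 1.9, 5.2]
for value, s in survival_function(loots):
    print(f"S({value}) = {s:.2f}")
```

Plotted on a log scale, runs of such steps with uniform spacing produce exactly the linear segments the paper uses to identify loot classes, and flat gaps between classes show up as horizontal stretches.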




Consideration of all data confirms that loot classes scale linearly with the cost per drop. Most data from Avatar A involved enmatter mining with a Matter Amp 104 (MA-104). The MA-104 decays 1.5 PED per drop, plus the 0.5 PED probe, for a total nominal drop cost of 2 PED (nominal costs do not consider finder and extractor decay). Loot classes from Avatar A were observed to be four times larger than Avatar B's unamped probe data. Given this scaling effect, the data from any drop can be normalized by dividing by the number of 0.5 PED probe equivalents. Unamped ore bombs (costing 1.0 PED each) are therefore divided by 2, while an MA-104 probe drop is divided by 4.

Statistical Analysis

Continuous data are expressed as mean and standard deviation, counted data as frequencies. For estimation of survival functions, we used the Kaplan-Meier estimator (Kaplan & Meier, 1958). Comparisons of survival functions were carried out by means of the Log-rank test (Mantel, 1966), and for differences in frequencies between groups the Chi-square test was used. To estimate the payout percentage, defined as the percentage of returned money with respect to invested money, we derived a loot model for the loot classes identified via the survival function. Expected loot consists of loot class means, estimated by linear regression on log-transformed loot values, and loot class weights according to the observed frequencies of loot classes. Mean cost per drop was estimated separately using a small set of recorded extraction data. Where not otherwise possible, confidence intervals were assessed by means of bootstrapping. Using Monte Carlo methods we further assessed the variability in payout percentage between different participants. A p-value less than .05 was considered significant, and SPSS® 16.0, Matlab® 7.6 and Microsoft Excel® 2007 were used for statistical analysis.
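The standardization and the expected-payout calculation can be sketched as follows. The class means and weights below are illustrative placeholders chosen only to demonstrate the arithmetic; they are not the fitted values from the paper. The 27.2% find rate is the paper's observed figure.

```python
def standardize(loot_ped, drop_cost_ped, base_probe=0.5):
    """Express a find in 0.5 PED probe equivalents: an unamped ore bomb
    (1.0 PED) is divided by 2, an MA-104 amped probe (2.0 PED) by 4."""
    return loot_ped / (drop_cost_ped / base_probe)


# Hypothetical loot model: per-class mean loot (standardized PED) and
# the probability of each class given a find (weights sum to 1).
class_means = [0.65, 1.95, 5.9, 17.5, 53.0, 160.0]
class_weights = [0.47, 0.48, 0.042, 0.0068, 0.001, 0.0002]
find_rate = 0.272  # observed probability that a drop yields a claim

# Expected standardized loot per drop = P(find) * E[loot | find],
# and payout fraction = expected loot / 0.5 PED drop cost.
expected_loot = sum(m * w for m, w in zip(class_means, class_weights))
payout = find_rate * expected_loot / 0.5
print(f"expected payout per PED invested: {payout:.2f}")
```

With these invented inputs the model returns roughly 0.92 PED per PED invested, i.e. in the region of the paper's estimate of at least 91%; the real analysis additionally quantifies the sampling error of each input.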

Results

Payout Percentage

Avatar A recorded a total of 1,998 amplified enmatter finds out of 7,360 dropped probes, a find rate of 27.1%. After Avatar B began collecting find rate data, he dropped 1,380 bombs (373 finds) and 2,240 probes (611 finds), giving find rates of 27.0% and 27.3%, respectively, which are not statistically significantly different from each other (p = .90, chi-square test). Furthermore, Avatar B's combined find rate of 27.2% is not significantly different from Avatar A's (p = .99, chi-square test); the overall observed find rate is therefore estimated as 27.2%, with a 95% confidence interval ranging from 26.3% to 28.0%. A summary of all finds used for analysis is given in Table 1, and the observed survival function is depicted in Figure 3.
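The pooled find rate and its confidence interval can be reproduced from the counts in the text. The paper does not state how its interval was computed; a Wald (normal-approximation) interval is assumed here, which matches the published 26.3%-28.0%.

```python
from math import sqrt

# Find counts reported in the text
finds_a, drops_a = 1998, 7360                # Avatar A, amplified enmatter
finds_b, drops_b = 373 + 611, 1380 + 2240    # Avatar B, bombs + probes

p = (finds_a + finds_b) / (drops_a + drops_b)     # pooled find rate
se = sqrt(p * (1 - p) / (drops_a + drops_b))      # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se             # Wald 95% confidence interval

print(f"find rate {p:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
# prints: find rate 27.2%, 95% CI [26.3%, 28.0%]
```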


Table 1: Finds according to mining activity, utilized amplifier, taxation, and avatar.

| Mining activity | Amplified | Taxed | Avatar A | Avatar B | Total |
|-----------------|-----------|-------|---------:|---------:|------:|
| Enmatter        | no        | no    |          |      540 |   540 |
| Enmatter        | no        | yes   |          |      175 |   175 |
| Enmatter        | yes       | no    |    1,622 |      429 | 2,051 |
| Enmatter        | yes       | yes   |      376 |      561 |   937 |
| Ore             | no        | no    |          |      566 |   566 |
| Ore             | no        | yes   |          |      252 |   252 |
| Ore             | yes       | no    |          |      221 |   221 |
| Ore             | yes       | yes   |          |      169 |   169 |
| Total           |           |       |    1,998 |    2,913 | 4,911 |

From the estimated survival function we visually identified the respective loot classes and calculated the mean loot value and frequency per loot class. About 95% of finds fall within or below loot class 2; in 5% of cases, loot falls into one of the higher classes (Table 2). Loot is therefore heavily right-tailed. Furthermore, we do not yet have data for all loot classes. For instance, we excluded one ore find with a base-value of over 12,000 PED, corresponding to loot class 9 or 10. Finds of this type are very rare, and we were therefore unable to collect enough of them to identify the respective classes. Loot classes 1 to 6 are, however, sufficient to estimate a minimal expected payout percentage. The given numbers are subject to estimation error, and the sampling errors of the loot class means, loot class frequencies, and find rate therefore need to be quantified in order to obtain a reliable estimate of the payout percentage.

Figure 3: Kaplan-Meier analysis for mining loot according to standardized loot. The x-axis depicts the log-transformed standardized loot using the logarithm with base 3. For every value on the x-axis, the y-axis gives the cumulative probability of a find with base loot above that value. Identified loot classes have been colored and numbered from C1 to C4.


Using linear regression, we were able to predict the log-transformed loot class means from the loot class numbers (see Figure 4), implying that loot increases by a factor of three from one class to the next. From this we conclude that the loot classes are intentionally designed by the provider and that the loot class means from the regression shown in Figure 4 are the true ones, not subject to further estimation error. The observed relative frequencies are, however, still estimates and therefore imprecise, and have been adjusted accordingly (Table 2).

To calculate the payout percentage it is necessary to know the cost sustained per find. The total cost per standardized find is composed of 0.5 PED for the probe, plus 0.01 PED of finder decay (we use the lowest-decaying finder here), plus a variable amount of extraction costs depending on the number of units found. From the data provided by Avatar C, the mean number of units extracted per extractor use for resources with a base-value of 0.01 PED is 24 ± 4 units (standard error of the mean 0.34 units). The decay of the lowest-decaying extractor is 0.0033 PED. Table 2 summarizes the results for the estimated cumulative payout percentage.

Table 2: Identified loot classes for standardized loot with respective model means and frequencies (weights) as well as respective cost and cumulative payout percentage.

| Class | n | Observed Mean (PED) | Observed Freq. (%) | Model Mean (PED) | Model Freq. (%) | Cumulative Mean Payout (PED) | Cumulative Mean Costs (PED) | Cumulative Payout Percentage (%) |
|------:|-----:|-------:|-------:|-------:|-------:|------:|------:|------:|
| 1     | 2295 |   0.64 | 46.74% |   0.66 | 47.00% | 0.084 | 0.513 | 16.3% |
| 1.66  |  677 |   1.35 | 13.79% |   1.36 | 14.00% | 0.135 | 0.514 | 26.3% |
| 2     | 1686 |   1.97 | 34.34% |   1.98 | 34.50% | 0.320 | 0.520 | 61.5% |
| 3     |  166 |   6.39 |  3.38% |   5.94 | 2.964% | 0.367 | 0.521 | 70.5% |
| 4     |   64 |  18.20 |  1.30% |  17.82 | 1.143% | 0.422 | 0.521 | 81.0% |
| 5     |   19 |  47.40 |  0.39% |  53.46 | 0.339% | 0.471 | 0.521 | 90.4% |
| 6     |    3 | 164.73 |  0.06% | 160.38 | 0.054% | 0.495 | 0.521 | 94.9% |

Model Mean is calculated using the formula given in Figure 4. Model Frequency gives adjusted observed weights in order to achieve a reliable lower limit for the payout percentage; observed weights were rounded as follows: the weights of loot classes 1 and 1.66 were rounded up to the next percentage point, and that of loot class 2 to the next half percentage point. This gives a cumulative relative frequency of 95.5% for classes 1 to 2, which corresponds to the upper limit of the 95% confidence interval for the combination of those three classes. The remaining classes were then proportionally scaled down to a total of 4.5%. Cumulative Mean Payout assumes an overall find rate of 27%. Cumulative Payout Percentage is calculated as Cumulative Mean Payout divided by Cumulative Mean Costs. Mean cost was calculated as 0.51 PED (0.5 PED for the probe and 0.01 PED of finder decay) plus a variable amount of extractor decay depending on the loot class mean, leading to cumulative costs per find ranging from 0.513 to 0.521 PED.
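The cumulative payout column of Table 2 can be reproduced from the model means, the adjusted model frequencies, the 27% find rate, and the per-class cost column; the variable names below are our own.

```python
# Loot model from Table 2: (model mean PED, model frequency, cumulative mean cost PED)
classes = [
    (0.66,   0.47,    0.513),
    (1.36,   0.14,    0.514),
    (1.98,   0.345,   0.520),
    (5.94,   0.02964, 0.521),
    (17.82,  0.01143, 0.521),
    (53.46,  0.00339, 0.521),
    (160.38, 0.00054, 0.521),
]
FIND_RATE = 0.27

cum_payout = 0.0
rows = []
for mean, freq, cost in classes:
    cum_payout += mean * freq * FIND_RATE   # expected PED returned per drop, cumulated
    rows.append(100 * cum_payout / cost)    # cumulative payout percentage

print([round(r, 1) for r in rows])
# prints: [16.3, 26.3, 61.5, 70.5, 81.0, 90.4, 94.9] — the last column of Table 2
```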


Figure 4: Linear regression analysis of log-transformed loot (y-axis) on loot classes (x-axis). The linear relationship is clearly evident; solving the linear equation log3(y) = x - 1.38 for y leads to y = 0.22 * 3^x. Hence the mean loot for a specific class is a base loot value of 0.22 PED multiplied by 3 raised to the power of the loot class number.
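The regression formula can be checked against the Model Mean column of Table 2; `model_mean` is our own helper name.

```python
def model_mean(x):
    """Model mean loot (PED) for loot class x, per the regression in Figure 4."""
    return 0.22 * 3 ** x

print([round(model_mean(x), 2) for x in (1, 1.66, 2, 3, 4, 5, 6)])
# prints: [0.66, 1.36, 1.98, 5.94, 17.82, 53.46, 160.38] — Table 2's Model Mean column
```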

An avatar will usually see loot from classes 1 to 2 and get back about 60% of his or her investment, barring a below-average find rate. In 4.5% of cases, loot falls into one of the higher loot classes, leading to a cumulative payout percentage of about 95%, assuming a find rate of 27%. As the find rate is subject to sampling error, the overall payout percentage would be 91% or 98% using the lower (26%) or upper (28%) confidence limit of the estimated find rate, respectively. The estimated payout percentage of 95% is expected to be observed over the long run; as loot from the higher loot classes is rare, this can take many finds to achieve. We therefore simulated mining runs of different avatars using the loot model from Table 2, assuming an entirely random draw; the results are depicted in Figure 5. Simulating 10,000 mining runs with 1,000 drops per run shows a highly variable payout percentage between avatars. About 30% of the runs have a payout percentage equal to or higher than 100%, thus managing a base-value profit. Only with a very large number of drops does the variance decrease, and the expected payout percentage of about 95% is achieved by nearly every avatar. This also implies that different avatars may form completely different impressions of the loot system when doing a low number of drops.
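The simulation can be sketched as follows. We assume a flat cost of 0.521 PED per drop (a simplification; the paper's extraction cost varies slightly by class) and use 2,000 rather than 10,000 runs to keep the runtime modest; `simulate_run` and the constants are our own reconstruction, not the authors' code.

```python
import random

# Loot model from Table 2: (model mean in PED, model frequency)
CLASSES = [(0.66, 0.47), (1.36, 0.14), (1.98, 0.345), (5.94, 0.02964),
           (17.82, 0.01143), (53.46, 0.00339), (160.38, 0.00054)]
MEANS, WEIGHTS = zip(*CLASSES)
FIND_RATE = 0.27
COST_PER_DROP = 0.521   # probe + finder decay + average extractor decay (simplified)

def simulate_run(drops, rng):
    """Cumulative payout percentage of one mining run under an entirely random draw."""
    loot = 0.0
    for _ in range(drops):
        if rng.random() < FIND_RATE:                        # 27% chance of a find
            loot += rng.choices(MEANS, weights=WEIGHTS)[0]  # draw one loot class
    return loot / (drops * COST_PER_DROP)

rng = random.Random(42)   # fixed seed for reproducibility
runs = [simulate_run(1000, rng) for _ in range(2000)]
mean_payout = sum(runs) / len(runs)                    # near the long-run 95%
profitable = sum(r >= 1.0 for r in runs) / len(runs)   # roughly 30% break even
```

Sorting `runs` and plotting the fraction of runs at or above each payout value reproduces the survival curves of Figure 5; increasing `drops` narrows the curve around 95%.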


Figure 5: Simulated mining runs with estimated survival function of payout percentage. The x-axis depicts the cumulative payout percentage and the y-axis shows the probability of observing a cumulative payout percentage greater than or equal to the given value on the x-axis. Using the loot model from Table 2, a given number of drops per run was simulated; loot and cost were then summed and the cumulative payout percentage per run calculated. The three simulations all lead to the same mean payout percentage of 95%, but their survival functions are clearly different, implying a higher variance with a lower number of drops.

Cost Per Hour, Provider's Perspective

From the provider's perspective, the only costs that matter are base-value costs. Taxation and markup can be ignored, as these are merely transactions between players. Our estimated payout percentage of 95% implies that the provider (MindArk) retains, on average, 5% of the money spent on mining activities, returning to the player 95 cents (minimum 91 cents) for every $1 played. From this perspective, mining is comparable to slot machines, where a spin costs a certain base-value and there is a long-term average payout (here, 95%). The analogy breaks down when considering that found resources can be sold to other players with markup, as discussed in the next section. To compare with the stated $1/hour (10 PED/hour) cost to play, we must consider different play styles. Collating the experiences of many miners, a rate of 100 drops per hour is a reasonable estimate, though some avatars may drop more or less. The largest differences in play style come from the choice of enmatter (probes) and/or ore (bombs), as well as the size of the amplifier used. Table 3 illustrates scenarios for different play styles, from the least expensive (unamped enmatter probing, nominal cost 0.5 PED per drop) to the most expensive (ore bombing amplified with an OreAmp OA-109, nominally 21 PED per drop). It is clear that an average miner expending a nominal 2 PED per drop will provide the provider with



its stated income by expending 208 PED/hour, with approximately 198 PED/hour returned to him in the form of resources and the provider pocketing the remaining 10 PED/hour. Other miners with different play styles may lose as little as $0.26/hour or as much as $11/hour; note, however, that the rarity of high-end amplifiers and the time spent extracting larger claims may limit the likelihood of reaching the latter extreme.

Table 3: Estimated base-value costs per hour.

| Nominal Cost/Drop (PED) | Example Setup | Gross Base-Value Outlay/Hour (PED) | Average Net Base-Value Loss/Hour (PED) |
|---:|---|---:|---:|
| 0.5 | Probe, no amplifier | 52 | 2.6 |
| 1   | Bomb, no amplifier  | 104 | 5.2 |
| 2   | Probe with MA-104   | 208 | 10.4 |
| 4   | Bomb with OA-104    | 416 | 20.8 |
| 8   | Probe with MA-108   | 832 | 41.6 |
| 21  | Bomb with OA-109    | 2,184 | 109.2 |

Nominal cost/drop does not include finder and extractor decay, for simplicity. Gross base-value outlay/hour assumes 100 drops per hour and 0.02 PED of finder and excavator decay per 0.5 PED of nominal expenditure. Average net base-value loss/hour assumes a 95% base-value payout percentage.
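Table 3's two computed columns follow mechanically from the note's assumptions (100 drops per hour, 0.02 PED of decay per 0.5 PED nominal, 95% payout); the function name is ours.

```python
def base_value_cost_per_hour(nominal_cost_per_drop, drops_per_hour=100, payout=0.95):
    """Gross base-value outlay and average net loss per hour, as in Table 3."""
    decay = 0.02 * nominal_cost_per_drop / 0.5   # finder/excavator decay per drop
    gross = drops_per_hour * (nominal_cost_per_drop + decay)
    return gross, gross * (1 - payout)

for nominal in (0.5, 1, 2, 4, 8, 21):
    gross, loss = base_value_cost_per_hour(nominal)
    print(f"{nominal}: gross {gross:.1f} PED/h, net loss {loss:.1f} PED/h")
# last line prints: 21: gross 2184.0 PED/h, net loss 109.2 PED/h
```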

Cost Per Hour, Player's Perspective

While the provider consistently collects between 26 cents and $11 per hour from all miners (a stated average of $1/hour), players are also competing against each other for the funds that remain in the system (Lehdonvirta, 2005). The following discussion illustrates this zero-sum competition, which can result in some players withdrawing substantial funds from the Entropia Universe while others continue to deposit into the system, having lost significantly more than the 5% removed by the provider. From a miner's perspective, base-value is not the only cost that affects his real return; taxation and markup also play a significant role. As mentioned, taxation and markup amount to transactions between players. For a miner to break even or profit over the long term, the markup gained from selling found resources must be greater than or equal to the sum of the base-value loss plus taxes and the markup paid for the mining tools:

markup (resources) >= base-value loss + taxes + markup (finder, amplifier, excavator)

Taxes are expressed as a percentage. Land area taxes are usually between 3% and 5% of base-value (we assume 4%), meaning that of the 95% average payback, the miner receives approximately 91% and the land owner (another player) receives 4%. Of course, there are many untaxed areas on Calypso that can be mined, but many land areas offer higher concentrations of rare resources and are mined regularly for these resources (and their high markup).


We define item markup as a percentage above base-value. Miners pay markup on some finders and extractors, as well as on most amplifiers. We assume that repairable finders and extractors are used; these tools carry no per-use markup, which simplifies our calculations. It should be noted, however, that some non-repairable finders carry significant markup (over 100%, meaning the finder must be purchased from another player at twice the base-value), which can affect real returns, especially when mining unamped. The main source of mining markup is the amplifier, which currently varies from less than 5% for low-end amplifiers (101 amplifiers) to 50%-100% for high-end amplifiers (107-109 amplifiers). Markup on resources sold by miners is even more variable. The most common and least useful resources have markups of less than 5%, while the rarest useful resources command markups of 1,000% or more. Most of the latter are rare finds even with the right equipment and knowledge, or have caps that limit the claim or class size that can be found, even with large amplifiers. The markup on most resources is between 10% and 50%. The skills gained while mining can also be sold at a markup to other avatars looking for a quick upgrade; as most avatars choose to keep their skills, we do not consider the market value of skill gains in this analysis. Tables 4 and 5 show scenarios similar to those in Table 3 for various resource markups. Table 4 expresses real return as a percentage of real outlays, while Table 5 expresses real profit or loss in PED/hour. It is clear that play style has a more profound effect on real returns than on base-value returns. Those who enjoy playing "big" risk losing upwards of $100/hour while bombing with an OA-109 (with all but roughly $11/hour going to other players), despite the appearance of good fortune (larger finds that are repeatedly announced in global chat).
Others can break even over the long run, or even make profits approaching those of a minimum-wage job, if they choose the right equipment and use it to consistently find higher-markup resources. The returns in the 60% markup column are likely achievable, if at all, only by the most disciplined and knowledgeable miners, as the resources that provide this type of return are uncommon finds and often cannot be found in large quantities. The 40% column is consistently approachable by knowledgeable miners, at least under favorable market conditions. It is important to restate that any profits are, on average, earned entirely from other players with different play styles and/or in-world professions (for example, crafters).


Table 4: Estimated real returns as a function of mining setup and resource markup.

| Nominal Cost/Drop (PED) | Setup | Gross Real Outlay/Drop (PED) | Return at 5% [9.8%] markup | 20% [25%] | 40% [46%] | 60% [67%] |
|---:|---|---:|---:|---:|---:|---:|
| 0.5, 1 | Probe or bomb, no amplifier | 0.52, 1.04 | 0% | 14% | 33% | 52% |
| 1  | Probe with MA-102 | 1.11  | -7%  | 7%   | 25%  | 42%  |
| 2  | Probe with MA-104 | 2.38  | -13% | 0%   | 16%  | 33%  |
| 2  | Bomb with OA-102  | 2.26  | -8%  | 5%   | 22%  | 40%  |
| 4  | Bomb with OA-104  | 4.82  | -14% | -2%  | 15%  | 31%  |
| 8  | Probe with MA-108 | 13.95 | -40% | -32% | -21% | -9%  |
| 21 | Bomb with OA-109  | 36.84 | -41% | -32% | -21% | -10% |

Return columns give the real return (%) for the indicated average resource markup; bracketed values apply to taxed land. Gross real outlay per drop assumes the same base cost as in Table 3, plus the following markups on amplifiers (as of 9/17/2009): MA-102, 14%; OA-102, 18%; MA-104, 20%; OA-104, 22%; MA-108 and OA-109, 75%. The average base-value payout percentage is assumed to be 95%. The effects of a 4% land area tax are illustrated by replacing the listed resource markups with those in brackets (i.e., a higher resource markup is needed to achieve the same real return from taxed land).
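One way to reproduce Table 4's return percentages, under the assumptions in its note (base cost as in Table 3, amplifier markup applied to the amplifier's decay share, 95% base-value payout). `real_return` and its parameterization are our own reconstruction, not the authors' code.

```python
def real_return(nominal, drop_cost, amp_markup, resource_markup, payout=0.95):
    """Real return fraction per drop, as in Table 4.

    nominal: base-value cost per drop in PED
    drop_cost: 0.5 for a probe, 1.0 for a bomb
    amp_markup: markup fraction paid on the amplifier's decay share
    """
    base = nominal * 1.04                    # + 0.02 PED decay per 0.5 PED nominal
    amp_decay = nominal - drop_cost          # amplifier's share of the drop cost
    outlay = base + amp_decay * amp_markup   # gross real outlay per drop
    return payout * base * (1 + resource_markup) / outlay - 1

# Probe with MA-104 (nominal 2 PED, 20% amp markup) at 5% resource markup:
print(round(100 * real_return(2, 0.5, 0.20, 0.05)))   # prints: -13, as in Table 4
```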

Table 5: Estimated profit (loss) per hour as a function of mining setup and resource markup.

| Nominal Cost/Drop (PED) | Setup | Gross Real Outlay/Hour (PED) | Profit (Loss) at 5% [9.8%] markup | 20% [25%] | 40% [46%] | 60% [67%] |
|---:|---|---:|---:|---:|---:|---:|
| 0.5, 1 | Probe or bomb, no amplifier | 52, 104 | 0, 0 | 7, 15 | 17, 34 | 27, 54 |
| 1  | Probe with MA-102 | 111   | (8)     | 8       | 28    | 47    |
| 2  | Probe with MA-104 | 238   | (31)    | 0       | 38    | 79    |
| 2  | Bomb with OA-102  | 226   | (18)    | 11      | 50    | 90    |
| 4  | Bomb with OA-104  | 482   | (67)    | (10)    | 72    | 149   |
| 8  | Probe with MA-108 | 1,395 | (558)   | (446)   | (293) | (126) |
| 21 | Bomb with OA-109  | 3,684 | (1,510) | (1,179) | (774) | (368) |

Profit columns give the average net profit (loss) in PED/hour for the indicated average resource markup; bracketed markups apply to taxed land. Gross real outlay per hour is determined as in Table 4, assuming 100 drops per hour. The average base-value payout percentage is assumed to be 95%. The effects of a 4% land area tax are illustrated by replacing the listed resource markups with those in brackets.
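Table 5 converts the same quantities into PED/hour at 100 drops per hour. The sketch below is our own reconstruction; it reproduces the published table to within about 1 PED/hour (the published figures appear to use rounded intermediate returns).

```python
def profit_per_hour(nominal, drop_cost, amp_markup, resource_markup,
                    drops_per_hour=100, payout=0.95):
    """Average net profit (loss) in PED/hour, as in Table 5.

    Same parameters as Table 4: nominal base-value cost per drop, probe (0.5)
    or bomb (1.0) cost, and markup fraction on the amplifier's decay share.
    """
    base = nominal * 1.04                              # + decay, as in Table 3
    outlay = base + (nominal - drop_cost) * amp_markup
    return drops_per_hour * (payout * base * (1 + resource_markup) - outlay)

# Bomb with OA-104 (nominal 4 PED, 22% amp markup) at 40% resource markup:
print(round(profit_per_hour(4, 1.0, 0.22, 0.40)))
# prints: 71 — Table 5 lists 72; the difference comes from rounding
```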


A comparison of Tables 4 and 5 reveals that the highest percentage returns do not necessarily provide the best per-hour returns. Further, as markups on both mining equipment and mined resources are constantly changing, real personal returns also vary, and the informed miner may need to change his play style. As an example, assume that mining equipment markup remains unchanged while resource markup fluctuates. A miner who is able to consistently find resources averaging 40% markup might maximize profits by mining with bombs and an OA-104, achieving an average real return of 72 PED/hour. If the average resource markup suddenly drops such that the miner can only average 20% markup on his finds, this hypothetical miner is in trouble: he is well advised to begin bombing with no amplifier, as his previous play style would now be costing him 8 PED/hour, while unamped mining might still yield a small profit. Risks such as these are found throughout the Entropia Universe, and interested players can and do spend a significant amount of time analyzing market values and adapting to changing conditions. It is also true that if the markup on all items dropped to 0% (meaning that items sold for base-value only), then miners and those in similar professions (hunting and crafting, assuming these professions have similar payout percentages) would all have, on average, a real return of approximately -5%, paid to the provider, with the real cost/hour depending on a player's expenditures/hour. This can be considered the entertainment value of the Entropia Universe experience.

Discussion

Limitations

The present study has several limitations that need to be taken into consideration. For instance, we limited cost estimation to basic equipment. Other tools exist in-game with higher decay, offering faster extraction or enabling finds at greater depth; it is not well known if and how this additional decay is accounted for. Furthermore, we did not address rounding, as loot is always converted to whole units of enmatter or ore. As rounding is only partially understood at this time, we have allowed fractions of units in our model, thereby only approximating the real observations. Moreover, although loot classes above 6 do exist, our loot model truncates at class 6. However, as our main interest was to give a reasonable lower limit on the expected payout percentage and to show its basic characteristics, these limitations have only limited implications. Throughout the document we have assumed that loot follows an entirely random process. Although we were able to identify a loot model, its implementation remains hidden. As we have seen from the simulation runs in Figure 5, the payout percentage has a large variance, and there is even the possibility of achieving a base-value profit over the short term; conversely, there is also the possibility of a payout percentage considerably lower than 95%. As every random payout system tends to behave in this way, we leave open the possibility that the provider might have implemented a less random, avatar-based correction mechanism, undetected by this study, that periodically adjusts an avatar's loot toward the 95% long-term payout percentage.


Finally, while no significant changes have been observed since these data were collected, it is true that the provider can modify the loot system at any moment without informing participants, and therefore constant monitoring of the payout percentage might be necessary.

Community Reactions

The Entropia Universe community is, in general, a mature community that does not mind paying $1/hour for its entertainment. In fact, there is much positive reaction to the pay-to-play model, since time spent offline is free, in contrast to subscription-based formats. The community also appreciates the diverse array of activities available, going well beyond the main activities of mining, hunting, and crafting, some of which can generate revenue from other players if pursued. There is a fair amount of negative feedback on the community forums, however. In general, negative feedback with regard to the cost to play takes one of two forms. First, there comes a point at which it is nearly impossible to advance in a particular profession without expending significantly more than 200 PED base-value per hour (thereby generating more than 10 PED/hour in income for the provider). When players reach this point, they are left with a choice: either continue playing at their current level and risk monotony, or invest more money into the system in order to escalate the level of challenge experienced. It is at this point that the cost to play can become significantly higher than that of most subscription-based games for a similar level of challenge, and players become frustrated at their inability to advance at a reasonable cost. Second, as has been shown, the loot distribution for mining (and indeed for hunting and crafting) is heavily right-tailed, with 4-5% of finds accounting for 35% of the payout. This distribution leads to some interesting threads and conversations on the community forums.
When a player receives a large loot, such as the 12,000 PED loot mentioned earlier (or, for some players, a class 5 or 6 loot), he or she often posts a screenshot on the forum. Most forum users congratulate the player on their good fortune. Others, often in response to particularly large loots received by lesser-known (or sometimes well-known) players, flame the lucky player's thread or start their own threads lamenting the injustices of the loot system. They present as "proof" their supposedly miserable payout percentages and/or insane losses over some period of time. Based on our data, we can only assume one of three things: these players have a very uneconomical play style (see Tables 4 and 5); they overemphasize bad-luck periods and underemphasize the good; or they have simply not cycled enough PED to have adequately sampled the entire loot distribution. As shown in Figure 5, there is about a 20% chance that after 1,000 drops an avatar will have received less than an 80% base-value payout. This corresponds to between 520 and 21,000 base-value PED expended, depending on play style. Registering complaints before cycling enough PED is not statistically warranted, but when real money is involved, it is easy to ignore statistical realities. Whatever the reality, many players claim to have left the Entropia Universe because of actual or perceived losses. Many who stay feel that the universe would be a more enjoyable place if the loot distribution had less right-tail character, while others enjoy the occasional thrill of winning tens, hundreds, or thousands of dollars in a single loot, at the expense of enduring losses on most days. Further, some express the opinion that a more consistent return would



reduce the number of negative reviews in cyberspace, ultimately increasing the Entropia Universe population, with the hope that an increased population would allow the provider to increase payout percentages while maintaining profitability. Whether this viewpoint has merit is hard to say. The only thing that really matters is that, for the moment, the developers seem happy with the loot distribution and payout system.

Conclusion

We have collected and analyzed thousands of individual mining loots in the Entropia Universe. Using these data, a model of the loot distribution system was developed and used to predict the payout percentage and the average cost to play. It was determined that the loot distribution consists of discrete classes and is heavily right-tailed. A base-value payout percentage of 91%-98% was found to be consistent with the data, and a miner expending approximately 200 PED per hour would have a base cost to play of approximately 10 PED ($1) per hour, consistent with the provider's claims about the average cost to play. Avatars expending considerably more or less than this amount would pay the provider proportionally more or less. We also considered the effects of play style, taxation, and player-to-player markup on returns. A player's individual play style heavily influences the personal return, with certain play styles costing significantly more than the average cost to play and others generating profits from fellow players that exceed base-value play costs.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Markus Falk, Daniel Besemann, and James Bosson drafted the manuscript; Markus Falk also performed the statistical analysis, and Daniel Besemann performed the calculation of estimated real returns. All authors read and approved the final manuscript.
Acknowledgements

The authors thank Avatars A (steffel earthquake zermatscher), B (Noodles NightOwl O'Shea), and C (Eri Ojyou Sawachika) for collecting the data used in this study.


Bibliography

Beck, M., & Zaas, W. (2008). SEE Virtual Worlds, Entropia Universe join forces to bring blockbuster movie properties to virtual world. Retrieved from http://www.seevirtualworlds.com/VirtualWorld_StudioAgreementRelease.pdf

Bell, M. W. (2008). Toward a definition of "virtual worlds." Journal of Virtual Worlds Research, 1(1). Retrieved from http://journals.tdl.org/jvwr/article/view/283/237

Choudhury, S. (Interviewer) & Behrmann, M. (Interviewee). (2008). GC08: Entropia Universe interview with Marco Berhmann [Interview transcript]. Retrieved from http://www.mmogamer.com/09/17/2008/gc08-entropia-universe-interview-with-marco-berhmann

Kaplan, E. L., & Meier, P. (1958). Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53, 457-481.

Lehdonvirta, V. (2005). Real-money trade of virtual assets: Ten different user perceptions. Proceedings of Digital Art and Culture, IT University of Copenhagen, 1-3 Dec. 2005. Retrieved from http://virtual-economy.org/files/Lehdonvirta-2005-RMT-Perceptions.pdf

Manninen, T., & Kujanpää, T. (2007). The value of virtual assets: The role of game characters in MMOGs. International Journal of Business Science and Applied Management, 2(1), 21-33.

Mantel, N. (1966). Evaluation of survival data and two new rank order statistics arising in its consideration. Cancer Chemotherapy Reports, 50(3), 163-170.

MindArk (2008a). About MindArk. Retrieved from http://www.mindark.com/company/

MindArk (2008b). Entropia Universe. Retrieved from http://www.planetcalypso.com/planetcalypso/entropia-universe/

MindArk (2008c). History. Retrieved from http://www.mindark.com/company/history/

MindArk (2008d). Entropia Universe platform business model. Retrieved from http://www.mindark.com/partners/entropia-universe-platfor/business-model/

MindArk (2008e). Entropia Universe platform. Retrieved from http://www.mindark.com/entropia-universe/

MindArk (2009). Planet Calypso. Retrieved from http://www.planetcalypso.com/home/

Yee, N. (2006). Motivations of play in online games. CyberPsychology and Behavior, 9(6), 772-775.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Standardization in Virtual Worlds: Prevention of False Hope and Undue Fear By Marco Otte and Johan F. Hoorn, VU University Amsterdam

Abstract

New advances in science and technology always come with enthusiasm-inspiring hopes and show-stopping fears. When are such hopes and fears warranted, and when are they fictitious themselves? The aim of this article is to see whether we can create standards, or protocols, to measure people's hopes and fears during online transactions and connect these to a decision support system that estimates the probability that the user's expectations are right. Users and adaptive systems could then take measures to deal with the situation: going ahead if all is clear, taking away undue fear, or downplaying false hopes. We attempt to do so through theory development, reconciling the literatures on technology acceptance, hope formation, risk perception, and problem solving. We present a framework that we call Your Virtual Future, in which we describe hope and fear formation during future-oriented behavior in virtual worlds. This framework acknowledges the users' experience and knowledge of real and virtual worlds as they are immersed in the contents as well as in the hardware. It accounts for the user's personal capacity to accept delayed gratification and to build up realistic hope. It moreover explains how users select solution paths within the affordances of the virtual world. We formulate the requirements for standards on undue-fear prevention and justified-hope promotion in virtual worlds, in relation to contents as well as equipment. We suggest that user protocols, the human side of standardization for expectations management, are needed, and that technological standards are required to generate a generic interface or shell that serves as a layer over all virtual worlds to tap a user's state anxiety, to feed back regulating instructions, and to automatically self-adapt the system.

Keywords: hope; fear; virtual worlds; protocols; standards; requirements.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Standardization in VWs 4

Standardization in Virtual Worlds: Prevention of False Hope and Undue Fear By Marco Otte and Johan F. Hoorn, VU University Amsterdam

"What is the harm of hope? Undue optimism may mislead some people to develop unrealistic expectations and suffer depression when these expectations are not met." (Young, 1997)

For some, advanced technology marks the beginning of a better future; for others, it is the advent of dehumanization and robots taking control. In the early years of videoconferencing, users hoped that virtual contact would equal or even emulate physical presence, but nowadays we know that mediated communication is a nice prosthesis for, but not a replacement of, human-to-human contact (Jeffrey, 2005; Egido, 1988). Violent video games were seen as good for eye-hand coordination, team building, and even as therapeutic tools (Griffiths, 2003), or as the materialization of evil (Bushman, 2004); vide the assumed role of the Doom game in the Columbine High School massacre. Nowadays we know that certain groups (i.e., young boys with lower socioeconomic status) wish to be like their violent heroes, but that the larger group of gamers does not show increased aggressive behavior after playing (Konijn, Bijvank & Bushman, 2007). In other words, with the introduction of advanced technology, users are susceptible to hope and fear, which is sometimes justified and sometimes unrealistic. These examples also show that hope and fear are sometimes directed at the technological side (e.g., videoconferencing) and sometimes at the contents (e.g., violent games). As hope and fear are fundamental aspects of being human (Reading, 2004) that occur in almost every situation, our theory should apply to virtual worlds ranging from business to entertainment and from health to learning applications.
An important assumption in this is that people take the hopes and fears of the real world with them into the virtual world and, when confronted with its limitations and possibilities, take the thus adapted hopes and fears with them into the next virtual world. Our contribution is to understand how hope and fear in response to virtual worlds are formed and how to mitigate the effects when hope and fear are false, unrealistic, and undue. We will attempt to set out the requirements for standardization and user protocols in virtual worlds, which should regulate hope and fear from the side of hardware and software (technology) as well as contents. We will see that advanced technologies and their contents can serve as the cause of anxiety and anticipation, can strengthen or mitigate these effects, and can be used to prevent or even cure fear and downplay false hopes. The theories that cater to the issues of hope and fear in relation to technology usually are tripartite: they discern an input (1) and a throughput (2) of (technological) information, outputting a response (3) that can be positive (hope) or negative (fear). Hope is an issue that has been left relatively unexplored (Reading, 2004). Fear, on the other hand, is treated on its own (Poulton & Menzies, 2002), in the risk perception literature (Sjöberg, 2000; Reiss, 1991), in phobia treatment (e.g., Klinger et al., 2005; Krijn, Emmelkamp, Olafsson & Biemond, 2004; Lányi, Laky, Tilinger, Pataky & Simon, 2004), and in attitude formation (Eagly & Chaiken, 1993; Van Overwalle & Siebler, 2005). We wish to integrate these insights into a unified framework to extract requirements for standards in virtual worlds, which should help to prevent the occurrence of false hope and undue fear.


Theoretical overview

Hope and fear both involve a cognitive process about a future situation that is to be achieved or avoided (Reading, 2004; Snyder, 2002; Marks, 2002; Poulton & Menzies, 2002). Hope is described by researchers as an iterative process of perceiving a probability of achieving a meaningful goal through adapting future-oriented behavior (see Figure 1; Stotland, 1969; Averill, Catlin & Chon, 1990; Snyder et al., 1991; Snyder, 2002; Reading, 2004).

Figure 1. Process of hope generation and adaptation, adapted from Reading (2004, p. 19).

The primary objective of fear is the behavioral avoidance of a perceived danger (Rachman, 1977). In the case of fear, we also need to distinguish between evolutionary (non-associative) fear and non-evolutionary (associative) fear (Marks, 2002; Poulton & Menzies, 2002). Because virtual worlds are not instinctively known the way fire or snakes are, the identification of associative fear is most important for our present purposes. Associative fear is a fear that needs to be learned: it takes experience, either one's own or someone else's, to recognize danger and to experience it as something fearful (Poulton & Menzies, 2002). For instance, culturally pessimistic renditions of robots taking over, or of members of our youth becoming "game junkies," may feed prejudice against virtual worlds and virtual technology. In both hope and fear situations, then, there is a goal that needs to be reached: an envisioned future situation that does not match the current one. In other words, at the heart of establishing hope and avoiding fear, the user is engaged in problem solving (Snyder et al., 1991; Rachman, 1994; Poulton & Menzies, 2002; Reading, 2004). Perceiving a possibility of solving a problem will increase motivation and drive the future-oriented behavior needed to execute the solution (Reading, 2004; Snyder et al., 1991; Snyder, 2002; cf. Ryan & Deci, 2000). On the other hand, if the solution to a problem cannot be found, initial hope can turn into fear (Reading, 2004).


Problem solving is an iterative process that looks at similar past problems and any known solutions that can be applied easily to the new problem: so-called associative problem solving (e.g., Mayer & Wittrock, 2006; Nijstad & Stroebe, 2006). If a problem falls within past experiences, this will lead to an accurate prediction and a high expectancy of the outcome of the future-oriented behavior. It generates a sense of autonomy, competence, and a set of possible solution pathways (Averill et al., 1990; Snyder et al., 1991; Castronova, 2007; cf. Self-Determination Theory, Ryan & Deci, 2000). For a novice to virtual worlds, the problem may be too different from past experiences, so that existing solutions no longer work and riskier, innovative thinking is required to assess possible new solution pathways (Jonassen, 2000; Mayer & Wittrock, 2006; Norman, 2008). The resulting risk perception is influenced by both internal factors (e.g., experience, level of optimism) and external factors (e.g., information from others, culture, technological devices). In unfamiliar circumstances, the assessment of possible risks is the weighing of costs and benefits of a possible solution (Lichtenstein, Slovic, Fischhoff, Layman & Combs, 1978; cf. Reading, 2004). It is important to know that risks can only be assessed after experience is gained with the current or a similar situation, or when others supply information about its riskiness (Sjöberg, 2000). Risk assessment of novel situations is therefore a risky business in its own right, as falsely assessing the risks involved bears the danger of an unrealistic risk perception and expectancy of achieving a goal. This may lead to false hope or even fear (Snyder, 2002; Rachman, 1994). In a virtual world, the artificial environment offers the user a continuous stream of new information, limiting the attention the user can devote to ongoing problem solving processes (Lang, 2000; Castronova, 2007).
Therefore, assessing the risks of possible solution pathways becomes more difficult. Users will become uncertain about the feasibility of solutions and this, in turn, affects the generation of hope and fear.

The de facto standards for assessing user perception of technology are the Technology Acceptance Model (TAM; Davis, 1989) and the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh, Morris, Davis & Davis, 2003). These models look into the perceived usefulness (Do I need this?) and the perceived ease of use (Am I able to use this?). Confirmation of the perceived usefulness and ease of use leads to a more positive attitude towards the technology and its capabilities to help the user. Confirmation of the lack of usefulness and ease of use leads to a much stronger and longer-lasting negative attitude (Venkatesh & Speier, 1999). This is congruent with the iterative characteristics of hope, fear, and problem solving. In the UTAUT model (Venkatesh et al., 2003), four aspects are identified as significant for the generation of technology acceptance: performance expectancy (will the system do what I want?), effort expectancy (do the benefits outweigh the effort?), social influence (is the use of this system socially acceptable?), and facilitating conditions (is there help when I am stuck?). The problem with many current and new systems, whether hardware or software, is that marketers often claim that the system can work wonders, generating high expectancies that overstate what the system can actually deliver. These false expectancies will lead to false hope or eventually to fear of use, whether realistic or not.

Your Virtual Future

In Figure 2, we present a framework that we call Your Virtual Future (YVF). It summarizes and attempts to integrate the processes described in the theoretical overview. With the help of this framework, we can identify at which points the user builds up hope and fear when confronted with virtual technology. At these checkpoints we can develop means (protocols, standards) that mitigate hope when it is false and prevent unnecessary fear.

Figure 2. Your Virtual Future.


YVF follows a double threefold structure. In the middle is the main process of hope and fear (column 2), flowing from the sub-process of hope and fear generation (row 2a), through the problem solving stage (row 2b), to the resulting future-oriented behavior, its results, and its evaluation (row 2c). On both sides of the central process are the external input factors (column 1) and internal input factors (column 3), which affect the central process at multiple stages and can themselves be affected through feedback loops. The main process of hope and fear starts when the user knows that she or he will enter or has entered a virtual world with a certain task in mind. At that moment the current situation (2a1) will start to deviate from the wanted situation. A good example is that the user may feel that the represented situation is not realistic enough (e.g., no lip synchronization, awkward biomechanics, communication barriers). This will strongly depend on the external (1) and internal (3) inputs at that moment. The external inputs can be in the form of people (both in actions and in communication towards the user), instruction in the use of the system, or events or objects in the current (virtual) environment. The internal inputs consist of, among other things, relevant experiences, emotions, knowledge, and cognitive capabilities. For example, Second Life has an intricate monetary system that allows users to buy and sell objects, land, and scripts, and to convert the virtual money back into physical money. OpenSIM has no monetary system at the moment.1 Although both virtual worlds appear quite similar to the user, the difference in affordances confronts the user with different sets of action possibilities. Because users transfer their hopes and fears from one world to the other, travelling from Second Life to OpenSIM, the latter decreases hope of making money and increases fear of loss of control due to the missing affordances of financial transactions.
Of course, the user needs to detect this deviation before she or he can act upon it (2a2). The user will make a preliminary assessment of the benefits and the costs (2a3) of using the system, for example, for training purposes. This assessment leads to the generation of either hope or fear (2a4). Depending on the previous steps, this hope or fear can be unrealistic. For example, people fear that the magnetic markers of a Polhemus system send 'bad currents' into the brain. The arrows between columns 1 and 2 and between columns 3 and 2 show the points at which both internal and external information can affect this formation process. From that moment on, the iterative process of hope and fear starts. It begins with actually acknowledging that there is a problem and precisely defining what the problem is (2b5). Once the problem is clearly defined, it must be analyzed and divided into sub-problems (2b6). This definition of the problem results in the setting of one or more sub-goals that need to be achieved to solve the problem. These actions lead to a mental representation of the problem(s) in a so-called problem space (2b7). The problem space holds all the information needed to start working on possible solutions. The user searches memory for past experiences that are the same as or similar to the current problem (2b8). Previous solutions are retrieved from memory (2b9) and, if none are available, the user creates new solutions based on solutions to old problems that are more remote.

1 http://opensimulator.org/wiki/Money, retrieved September 15, 2009.


The generated ideas or solution paths are then viewed in the light of the current problem (2b10). The user assesses the risk (would this solution work?). Success and failure are set by weighing costs against benefits and by the user feeling competent and autonomous enough (2b11) to implement the solution. If the solution is deemed infeasible, the process of finding solutions starts anew. Once a pathway is selected, it is connected to the problem and the current situation, stored in memory (2b12), and the solution is executed. Just as in the formation section of the process, the entire problem solving process can be affected by both external and internal input. The implementation of the solution is what is called future-oriented behavior (2c13) and is characteristic of all the processes involved in hope and fear. After working a while on the selected solution pathways, the user will look for results and check these results against the progress made towards the goal (2c14). If the results concern a part of the total problem, an intermediate assessment will be made, leading either to a positive assessment (one step closer to the goal, inspiring hope) or a negative assessment (stagnation or, worse, one step further from the goal, inducing fear) (2c15). If the solution represents the last solution pathway planned, the new current situation will be compared to the envisioned final goal(s) (2c16). Both the intermediate and final assessments of results lead to feedback that informs the user about the success of the chosen solution pathways, enhanced with the emotional response to it (2c17 & 2c18). This will affect the user's perception of agency and competence, which will in turn affect any ongoing problem solving processes and provide new or adjusted information for the next round of hope or fear generation.
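The iterative loop described above can be summarized in a purely illustrative sketch. Note that this is our own schematic rendering, not an implementation from the paper: the numeric goal, gap, and solution-step model is invented for demonstration only.

```python
# Hypothetical sketch of the YVF iterative loop: detect a deviation from the
# goal, search known solution pathways, execute future-oriented behavior,
# and evaluate progress. The numeric model is an invented simplification.

def yvf_loop(goal, current, solutions, max_rounds=10):
    """Return ('hope' | 'fear', final_situation) after iterating the loop."""
    for _ in range(max_rounds):
        if current >= goal:                    # final goal comparison (2c16)
            return "hope", current
        gap = goal - current                   # deviation detection (2a2)
        # search memory for a known solution that fits the gap (2b8-2b9)
        step = max((s for s in solutions if s <= gap), default=None)
        if step is None:                       # no pathway found: hope turns to fear
            return "fear", current
        current += step                        # future-oriented behavior (2c13)
    return "fear", current                     # goal not reached in time

print(yvf_loop(goal=3, current=0, solutions=[1]))   # ('hope', 3)
print(yvf_loop(goal=3, current=0, solutions=[]))    # ('fear', 0)
```

As in the framework, the absence of any feasible solution pathway is what turns initial hope into fear.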
Again, external and internal information can influence the intermediate steps and have an effect on hope and fear.

Towards requirements for the regulation of hope and fear

At the points where there is a horizontal connection between the external input (1) and the main process (2), or between the internal input (3) and the main process (2) (Figure 2), there are possibilities to influence hope and fear formation. It is at these points that the user generates some form and level of expectancy about using the virtual technology and its effects on achieving the user's goals or not. And it is here that the management of the user's expectations during interaction is of high importance (Boehm, 2000). A behavioral protocol or technical standard should support setting, capturing, and influencing the users' expectations when these expectations are false. In Figure 2, these three actions can be applied at any of the connections between the main process (column 2) and the external input (column 1). Figure 3 illustrates this process of expectation management.


Figure 3. Managing expectations

A virtual-world system should provide the following features to manage user expectations and regulate hope and fear. First, it should be possible to capture the user's set of expectations before actual use of the machine. Standard ways of doing so include running a short questionnaire or providing a customization wizard. However, an approach that is less boring to the user and more in line with the virtual experience is that of offering association games that measure attitudes (e.g., implicit Measurement through Games (iMG)),2 here towards the virtual hardware and software. Second, analysis of the user data should be done automatically and quickly so that the system can provide proper feedback and influence the attitudes. This feedback should be based on the difference between the attitudes and expectations obtained from the user and the range of possibilities that the system actually has. For instance, with transparent goggles you can augment the room with virtual entities, but they do not replace reading glasses. So if hopes are high that current systems support visual correction, the system should reply "Not right now, but maybe in the future." This way, we set the user's expectations about the current state of the system (Figure 2, point 2a1). Third, while the user interacts with the virtual world, the system should be capable of measuring the user's hopes and fears online. To achieve this, the system should be capable of detecting the temporary sub-goals the user is likely to set for task execution, estimating the user's expectations about the action possibilities and affordances of the available features of the system, and measuring the user's affective states. Task analysis should provide a database of sub-goals for each task that can possibly be performed with the system. Small choice experiments present the user with decisions that show the system whether s/he is still on the right track.
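The second feature, comparing captured expectations against what the system can actually do, could be sketched as follows. This is a hypothetical illustration: the capability names and the feedback dictionary are invented, not part of any existing virtual-world API.

```python
# Hypothetical sketch: compare the user's captured expectations against the
# system's actual capability range and generate expectation-setting feedback.
# Capability names are invented for illustration.

SYSTEM_CAPABILITIES = {"augment_room", "voice_chat", "avatar_customization"}

def expectation_feedback(user_expectations):
    """Return a feedback message for each expectation the system cannot meet."""
    feedback = {}
    for feature, hoped_for in user_expectations.items():
        if hoped_for and feature not in SYSTEM_CAPABILITIES:
            # downplay false hope about an unsupported feature
            feedback[feature] = "Not right now, but maybe in the future."
    return feedback

# A user hoping that transparent goggles also correct eyesight:
msgs = expectation_feedback({"augment_room": True, "visual_correction": True})
print(msgs)   # {'visual_correction': 'Not right now, but maybe in the future.'}
```

Expectations the system can meet produce no feedback, so only false hopes are addressed, in line with the idea of setting expectations about the current state of the system (point 2a1).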
For instance, the system can tell the user to get to a location in Second Life as fast as possible. If the user then starts to walk all the way to the destination, the system knows that the user is unaware of the affordance of flying there. Meanwhile, affective states may be measured online through brain-computer interfaces (e.g., arousal), respiration, or galvanic skin response. This is perhaps easier for measuring fear (serotonin level, pupil size, sweat) than for hope.

2 http://www.camera.vu.nl/research/intmethods/img.html

Fourth, virtual worlds should become adaptive interfaces that provide unobtrusive feedback to protect the user against undue fear and false hopes with regard to the system. In Van Vugt et al. (in press), we found that self-similarity of embodied agents enhanced the effects of helpful or unhelpful affordances of the agent. Self-similarity increased the willingness to use an agent, provided that the agent's advice and efficiency were good. If affordances were poor and obstructed successful task completion, users, particularly men, did not want the agent to be self-similar and preferred dissimilar agents. Using Your Virtual Future as a framework, then, avatars could be modeled after the user's physical appearance with (Web) cameras or archived photos. The user's performance is measured by the number of correctly executed computer tasks. A brain-computer interface could sample error-related negativities (ERNs): negative peaks in EEG activity within 100 ms after performing an action, indicating that the user realizes s/he has made a mistake. When the user makes mistakes above an empirical threshold value, the humanoid interface could morph back into dissimilarity. If the error rate decreases, the face of the user is once again morphed with the face of the embodied agent. This can be done in various gradations, modeling the agent after the user's appearance at the rate of successful use of the application. In other words, feedback is provided without the user consciously noticing it. If the user is doing fine, high hopes about his or her performance and that of the system are rewarded with self-similarity. Disappointment and fear of failure are mitigated by dissimilarity, because mistakes are not the user's 'fault' but attributed to the system, which promises to do better next time and provides the user with suggestions for better performance.
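The morphing rule could be sketched as follows. This is our own illustrative rendering, not the authors' implementation: the threshold value and morph step are invented, and a real system would derive the error rate from ERN detections rather than a simple task count.

```python
# Hypothetical sketch of the adaptive self-similarity idea: lower the avatar's
# similarity to the user when the error rate exceeds an empirical threshold,
# and gradually restore it as errors decrease. Threshold and step are invented.

ERROR_THRESHOLD = 0.3   # hypothetical empirical threshold (fraction of tasks)
MORPH_STEP = 0.2        # gradation per adjustment

def adjust_similarity(similarity, errors, tasks):
    """Return the new morph level in [0, 1]; 1.0 = fully self-similar avatar."""
    error_rate = errors / tasks if tasks else 0.0
    if error_rate > ERROR_THRESHOLD:
        # mistakes are attributed to the system: morph toward dissimilarity
        similarity -= MORPH_STEP
    else:
        # good performance is rewarded with self-similarity
        similarity += MORPH_STEP
    return round(min(1.0, max(0.0, similarity)), 2)

print(adjust_similarity(0.8, errors=4, tasks=10))  # 0.6 (too many errors)
print(adjust_similarity(0.6, errors=1, tasks=10))  # 0.8 (doing fine again)
```

Clamping to [0, 1] keeps the morph level within the range from fully dissimilar to fully self-similar, so feedback stays gradual and unobtrusive.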
Fifth, to measure the user's changes in hope and fear, we need robust ways to send and retrieve information between the real, physical world and the virtual worlds in which the user participates. The less intrusive the measurements, the better. The monitoring equipment, the software that controls these devices, the software that controls the interaction with the virtual world, and the virtual world itself all need to be able to communicate to make the needed flow of information possible.

Towards an experimental setup

To illustrate a way of testing the effects of technology on hope and fear, imagine an experimental set-up that consists of two apparently similar virtual worlds in which the user participates in buying and selling items through a virtual auction house. Both virtual worlds can be equipped with a varying set of helpful and obstructive affordances that help or hinder the user in accomplishing the predefined tasks. We expect that a virtual world with more helpful functionality will increase the user's hopes, whereas a virtual world with more obstructive functionality will increase the user's fear. By letting the user migrate between the two virtual worlds while alternating through the possible combinations of helpful and obstructive affordances (see Table 1), the user is confronted with different situations, and thus different problems, in achieving the predefined goals.



Table 1. Overview of the possible conditions between two virtual worlds with helpful and obstructive affordances. Virtual World 1 is kept constant while Virtual World 2 systematically varies as an experimental condition.
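A condition matrix of this kind could be enumerated as in the sketch below. The affordance names are invented for illustration (they echo the examples discussed in this section); the design simply keeps Virtual World 1 constant while Virtual World 2 varies.

```python
# Hypothetical sketch of the experimental design: VW1 is kept constant
# (helpful affordances only) while VW2 varies over all combinations of a
# small, invented set of helpful and obstructive affordances.

from itertools import product

HELPFUL = ["appraisal_system", "sales_history"]
OBSTRUCTIVE = ["bare_bones_transactions", "screen_panning"]

def experimental_conditions():
    """Return (VW1, VW2) pairs; VW1 is fixed, VW2 is the varied condition."""
    vw1 = {"helpful": tuple(HELPFUL), "obstructive": ()}
    conditions = []
    for n_help, n_obstr in product(range(len(HELPFUL) + 1),
                                   range(len(OBSTRUCTIVE) + 1)):
        vw2 = {"helpful": tuple(HELPFUL[:n_help]),
               "obstructive": tuple(OBSTRUCTIVE[:n_obstr])}
        conditions.append((vw1, vw2))
    return conditions

print(len(experimental_conditions()))   # 9 conditions (3 x 3)
```

With two helpful and two obstructive affordances, VW2 can carry zero, one, or two of each, yielding nine conditions against the constant VW1.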

The first moments after changing from one virtual world to the other will probably be crucial in determining hope and fear. Hope that was established in the helpful VW1 can be shattered within minutes of entering VW2. The differences between the two virtual worlds can be created in hardware and/or software. For example, whereas in one virtual world the user has to deal with a bare-bones transaction system, the other virtual world might offer an appraisal system that helps in setting the best (high-profit, high-success-rate) price. Another example is a tactile feedback system (rumble) that indicates an important change in the auctions, or an extensive history of sales that helps the user determine when to buy and when to sell. Taking it one step further, in one virtual world the user could wear a stereoscopic head-mounted display with head tracking to look around the virtual world and gain optimal access to all the relevant information displayed on multiple virtual screens, while in the other virtual world s/he has to settle for a standard wide-screen physical monitor that requires much panning to see the same information.

Conclusions

The hope of reaching a future goal or avoiding a fearful situation involves an iterative process in which many internal and external influences play a role. Once hope or fear is established by assessing the perceived feasibility of the goal in terms of benefits, costs, risks, and personal capacities, solutions must be found to overcome the discrepancy between the current and the desired, or the current and the feared, situation. The problem solving leads to the future-oriented behavior that is needed to achieve goals.


Problem solving leans heavily on past experiences to come up with known and trusted solutions. Currently, only a happy few have ample experience with virtual worlds, and the problems with which novices see themselves confronted may be too different from their past experiences to be easily solvable. The unfamiliarity of novices with the technology may lead to unwarranted trust in new technology or to unrealistic fear rooted in technophobic pessimism.

We have attempted to combine all these processes into one framework: Your Virtual Future. The framework shows the main process of hope and fear formation and the points where internal and external influences might affect it. These are also the points where we expect it is possible to influence users' perceptions of the future and regulate any undue fears or false hopes. To do so, behavioral protocols and technical standards should facilitate setting, capturing, and influencing users' expectations. For this, a virtual world system should facilitate: 1) capturing the users' expectations before they enter the virtual world, 2) analyzing these data and providing feedback on unrealistic expectancies, 3) continuously measuring the users' hope and fear by comparing the users' decisions to predefined ones, 4) giving unobtrusive feedback to prevent or mitigate false hopes and undue fears, and 5) handling the flow of diverse information between the physical and virtual worlds. With the current state of technology and knowledge, it should be no problem to capture, analyze, and respond to users' expectancies before they enter the virtual world. It will be interesting to see what unrealistic expectancies users have of virtual world technology and to determine what lies at their basis. The same goes for the continuous measuring of a user's actions and comparing these against a predefined set of actions.
More work needs to be done in combining current BCI work with an unobtrusive feedback system to guide the user away from unrealistic hopes and fears. What user reactions are usable? How high must the threshold be before feedback is rendered? How much feedback is needed to mitigate unrealistic hope and fear? What are the effects of too little or too much feedback, if any? Finally, to make such a system generally applicable, the underlying technologies must be able to communicate with each other, something that is not easily accomplished at the moment. Initiatives to alleviate this problem are currently underway in projects such as Metaverse1.3

3 http://www.metaverse1.org


Bibliography

Averill, J. R., Catlin, G., & Chon, K. K. (1990). Rules of hope. New York: Springer-Verlag.
Boehm, B. (2000). The art of expectations management. Computer, 33(1), 122-124.
Bushman, B. (2004). Effects of violent video games on aggressive behavior, helping behavior, aggressive thoughts, angry feelings, and physiological arousal. Lecture Notes in Computer Science.
Castronova, E. (2007). Exodus to the virtual world: How online fun is changing reality. New York: Palgrave Macmillan.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Orlando: Harcourt Brace Jovanovich College Publishers.
Egido, C. (1988). Video conferencing as a technology to support group work: A review of its failures. CSCW '88: Proceedings of the 1988 ACM Conference on Computer-Supported Cooperative Work.
Griffiths, M. (2003). The therapeutic use of videogames in childhood and adolescence. Clinical Child Psychology and Psychiatry, 8, 547-554.
Jeffrey, P. (2005). Videoteleconferencing: Why is it disadvantageous for group collaboration? Retrieved March 1, 2009 from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&dopt=AbstractPlus&list_uids=1657659321915648844related:TKtl4V4wARcJ
Jonassen, D. H. (2000). Towards a design theory of problem solving. Educational Technology Research and Development, 48(4), 63-85.
Klinger, E., Bouchard, S., Legeron, P., Roy, S., Lauer, F., Chemin, I., et al. (2005). Virtual reality therapy versus cognitive behavior therapy for social phobia: A preliminary controlled study. Cyberpsychology & Behavior, 8(1), 76-88.
Konijn, E. A., Bijvank, M. N., & Bushman, B. J. (2007). I wish I were a warrior: The role of wishful identification in the effects of violent video games on aggression in adolescent boys. Developmental Psychology, 43(4), 1038-1044.
Krijn, M., Emmelkamp, P. M. G., Olafsson, R. P., & Biemond, R. (2004). Virtual reality exposure therapy of anxiety disorders: A review. Clinical Psychology Review, 24(3), 259-281.
Lang, A. (2000). The limited capacity model of mediated message processing. Journal of Communication, 50(1), 46-70.
Lányi, C. S., Laky, V., Tilinger, A., Pataky, I., & Simon, L. (2004). Developing multimedia software and virtual reality worlds and their use in rehabilitation and psychology. In M. Duplaga, et al. (Eds.), Transformation of Healthcare with Information Technologies (pp. 273-284). IOS Press.
Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., & Combs, B. (1978). Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4(6), 551-578.
Marks, I. (2002). Innate and learned fears are at opposite ends of a continuum of associability. Behaviour Research and Therapy, 40(2), 165-167.
Mayer, R. E., & Wittrock, M. C. (2006). Problem solving. In P. A. Alexander & P. H. Winne (Eds.), Handbook of Educational Psychology (pp. 287-303). Routledge.


Nijstad, B. A., & Stroebe, W. (2006). How the group affects the mind: A cognitive model of idea generation in groups. Personality and Social Psychology Review, 10(3), 186-213.
Norman, K. L. (2008). Cyberpsychology: An introduction to human-computer interaction. New York: Cambridge University Press.
Poulton, R., & Menzies, R. G. (2002). Fears born and bred: Toward a more inclusive theory of fear acquisition. Behaviour Research and Therapy, 40(2), 197-208.
Rachman, S. (1977). The conditioning theory of fear-acquisition: A critical examination. Behaviour Research and Therapy, 15(5), 375-387.
Rachman, S. (1994). The overprediction of fear: A review. Behaviour Research and Therapy, 32(7), 683-690.
Reading, A. (2004). Hope and despair: How perceptions of the future shape human behaviour. JHU Press.
Reiss, S. (1991). Expectancy model of fear, anxiety, and panic. Clinical Psychology Review, 11(2), 141-153.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68-78.
Sjöberg, L. (2000). Factors in risk perception. Risk Analysis, 20(1), 1-11.
Snyder, C. R. (2002). Hope theory: Rainbows in the mind. Psychological Inquiry, 13(4), 249-275.
Snyder, C. R., Harris, C., Anderson, J. R., Holleran, S. A., Irving, L. M., Sigmon, S. T., et al. (1991). The will and the ways: Development and validation of an individual-differences measure of hope. Journal of Personality and Social Psychology, 60(4), 570-585.
Stotland, E. (1969). The psychology of hope. San Francisco: Jossey-Bass.
Van Overwalle, F., & Siebler, F. (2005). A connectionist model of attitude formation and change. Personality and Social Psychology Review, 9(3), 231-274.
Van Vugt, H. C., Bailenson, J. N., Hoorn, J. F., & Konijn, E. A. (in press). Facial similarity shapes user response to embodied agents. ACM Transactions on Computer-Human Interaction (TOCHI).
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Venkatesh, V., & Speier, C. (1999). Computer technology training in the workplace: A longitudinal investigation of the effect of mood. Organizational Behavior and Human Decision Processes, 79(1), 1-28.
Young, W. (1997). Fear of hope. Science, 277(5334), 1907.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Machine Ethics for Gambling in the Metaverse: An “EthiCasino”

By Anna Vartapetiance Salmasi and Lee Gillam, University of Surrey, UK

Abstract Online gambling of various kinds produces substantial financial returns but brings with it a range of challenging issues. Different countries variously allow or disallow gambling or online gambling depending on religious and legal considerations. There are then ethical considerations of risk aversion and loss aversion relating to addiction in the isolated online pursuit. Open Grid Protocols for virtual worlds, enabling interoperability amongst virtual worlds, could benefit implementers of virtual world gambling, reversing a substantial decline in turnover due to gambling being banned in one particular virtual world. In this paper, we consider the combined legal and ethical issues of gambling online and in virtual worlds, and discuss the construction and evaluation of a system with computational oversight: an ethical advisor.

Keywords: EthiCasino; machine ethics; virtual worlds; Second Life; online gambling; responsible gambling.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research - Machine Ethics for Gambling in the Metaverse 4

The Second Life Grid Open Grid Protocol (SLGOGP) provides a standard for allowing avatars to move between virtual worlds (Linden Research Inc, 2008), bringing with it the potential for interoperable virtual worlds and for hybrid considerations: a mixture of public and private virtual worlds. In principle, it becomes possible to run a virtual world in the same way in which one may run a web server, and to be able to provide for areas within a virtual world with access restricted to certain members. UK-based PKR is a virtual world specifically created as a private virtual world for gambling. The much-publicized prohibition of gambling in the core of Second Life suggests that potential exists for virtual world gambling "off grid" supported by such an interoperability standard that could enable residents of Second Life to step out into a world such as PKR, in what might be considered by some as a kind of virtual underworld. However, the reputations of providers of these public virtual worlds and the designers of the protocol might be negatively impacted if they are recognized as condoning such activity. Furthermore, companies offering such private worlds may have a professional responsibility to ensure that sufficient regulatory checks are in place and that activities can take place in a safe environment, necessitating the consideration of extensions to such a standard to assure others that their professional responsibility has been fulfilled. With the scale of turnover estimated for online gambling - revenues of over US$24 billion by 2010 (CCA, 2004) - there are likely to be organizations already considering how to leverage their share of this market. This could include, in particular, organizations that were previously providing for virtual world gambling in Second Life prior to the ban.
However, online gambling in general brings with it a range of challenging issues. Different countries variously allow or disallow gambling or online gambling depending on religious and legal considerations. Where it is allowed, different age restrictions may apply. There are then ethical considerations relating to harm, from knowledge of risk aversion and loss aversion to the increased risk of addiction in the isolated online pursuit. Where problems exist in the real world, virtual worlds may produce their own variations, yet are bound by the laws of the jurisdiction in which they are considered to be operating. One question for the creators and maintainers of public virtual worlds is whether gambling should take place at all. For the Second Life virtual world, with its servers residing in the US, Linden Lab's US-centric terms and conditions forced users to "comply with state and federal laws applicable to regulated online gambling" irrespective of the geographical location of the end user (Pasick, 2007; Wagner, 2007). For users of Second Life, this currently acts as a ban on gambling in that virtual world, enforced by the Federal Bureau of Investigation (FBI). This has had demonstrable impacts on the economy of that virtual world. We believe that it should be possible to construct a system with computational oversight—an ethical advisor—enabling support for different regulations and ethical viewpoints. This should provide assurance that the system not only complies with local laws, but also appreciates human values and social well-being. In this paper, we make a novel consideration of the application of machine ethics to gambling, with a focus on online gambling, where individuals may act largely in an isolated context that may promote addiction, and where assistance and advice may be less apparent or available (Comeau, 1997).
We discuss how to design a virtual world environment based on prior literature and systems in Machine Ethics, including Truth-Teller (Ashley and McLaren, 1995), SIROCCO (McLaren, 2003), MedEthEx (Anderson,
Anderson and Armen, 2005) and EthEl (Anderson & Anderson, 2008) to account for legal and ethical considerations in relation to gambling. Risk profiles are constructed based on end users' demonstrated knowledge of gambling, and these risk profiles are used as part of a monitoring mechanism: a "nagware". The aim is to inform both the less knowledgeable gamblers and those whose behaviors are becoming increasingly risky, leading to the potential for harm. Only where advice is ignored should it become necessary to consider computational intervention.

We expect that it would prove difficult generally to outlaw gambling in virtual worlds. An alternative would be to clarify how the ethical responsibilities are shared between gamblers and casinos, and what the expectations are on each. Responsible gambling, then, implies responsibilities on both the gamblers, in relation to their behaviors, and the casinos, in relation to identifying problematic behavior and acting or intervening accordingly. However, this will not be possible unless a system can harmonize the actions of both sides. We refer to this framework, as implemented, as an EthiCasino, and discuss outcomes of our research to date.

This paper is an extended, revised and improved version of our previous paper (Salmasi and Gillam, 2008) presented at the IEEE Conference in Games and Virtual Worlds for Serious Applications (VS-Games). In contrast to our previous paper, here we provide a detailed background, including substantial sections regarding the legal and ethical dimensions of gambling in general and online gambling in particular, as well as a comprehensive review of related literature in machine ethics which we use to justify our approach.
While the steps involved in our system remain largely similar between these two papers, additional supporting data is provided to demonstrate the variation in responses to questions, and therefore the inconsistency in understanding of risks and losses, across users. The closing discussion is also a substantial new contribution which relates strongly to the machine ethics literature and which verifies our approach. Additionally, we state the size of the market at $24bn by 2010 (CCA, 2004), fixing one of our own errors in interpretation.

Background

The Second Life (SL) virtual world was described by Linden Lab CEO Philip Rosedale as a land "owned, controlled and built by the people who are there" (Claburn, 2007). A currency, the Linden Dollar (L$), provides for the virtual economy by allowing limited rights to own, buy and sell digital artefacts (Linden, 2007). Rosedale's statement suggested that the "people who are there" would be bound only by the rules and social norms of the virtual world and freed from laws of real life. According to Benjamin Duranske, author of Virtual Law, "If this is real money, there is an argument that you need to follow real law" (Sidel, 2008). On 25 July 2007, the real-world laws encroached: due to "conflict within international laws regarding online gambling," Linden Lab announced that all gambling activities were banned. Some were happy that this would remove gambling from SL, since fewer users overall would reduce the network latency of the virtual world. However, organizations invested in virtual world gambling now had to unwind their virtual world positions and presences, and some suggested that if SL were still considered a microcosm of the world, it should also include gambling (Chang, 2009). The effect on the SL economy was dramatic, with a near 50% drop in money changing hands in-world (Yahia, 2007).
This led indirectly to the collapse of a virtual bank, Ginko Financial, rumoured to have been a Ponzi scheme that lost its investors upwards of $700,000. Following a series of complaints (Gardiner, 2007), Linden Lab announced:

5


Journal of Virtual Worlds Research - Machine Ethics for Gambling in the Metaverse

6

We're implementing this policy after reviewing Resident complaints, banking activities, and the law, and we're doing it to protect our Residents and the integrity of our economy. […] Since the collapse of Ginko Financial in August 2007, Linden Lab has received complaints about several in-world "banks" defaulting on their promises. […] As these activities grow, they become more likely to lead to destabilization of the virtual economy. At least as important, the legal and regulatory framework of these non-chartered, unregistered banks is unclear, i.e., what their duties are when they offer "interest" or "investments." […] Thus, as we did in the past with gambling, as of January 22, 2008 we will begin removing any virtual ATMs or other objects that facilitate the operation or facilitation of in-world "banking…"

It was anticipated that Linden Lab might be able to evolve adequate technical solutions to such problems, but the importance of real-world laws was now firmly established. It was clear, however, that the economy of this virtual world had changed substantially and suddenly. The banning of gambling related purely to the location of Linden Lab and its servers, and had nothing to do with local laws relating to the location of the gambler using the software client, or with taking an ethical or responsible approach to gambling.

It should be possible to construct a system that can robustly support legal enforcement in relation to gambling, hosted in an appropriate location and interoperable with various virtual worlds, and that provides support for wider considerations of ethical issues such as responsible gambling. Such considerations can present opportunities for the re-emergence of virtual world gambling and concomitant revenues, and could more generally provide for a less harmful approach to online gambling.
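One way such computational oversight might behave is the staged escalation of the "nagware" idea introduced earlier: inform by default, remind when behavior turns risky, and intervene only once advice has been repeatedly ignored. The following is a minimal sketch of that escalation; the class name, method, and the threshold of three ignored warnings are our own illustrative assumptions, not values from EthiCasino:

```python
from dataclasses import dataclass

@dataclass
class Nagware:
    """Illustrative monitor: inform -> remind -> intervene."""
    warnings_ignored: int = 0
    intervene_after: int = 3  # illustrative threshold, not from the paper

    def observe(self, risky: bool, heeded_advice: bool) -> str:
        """Return the oversight action for one observed behavior."""
        if not risky:
            return "inform"            # low-level information only
        if heeded_advice:
            self.warnings_ignored = 0  # advice was taken; reset escalation
            return "remind"
        self.warnings_ignored += 1
        if self.warnings_ignored >= self.intervene_after:
            return "intervene"         # computational intervention as last resort
        return "remind"

monitor = Nagware()
actions = [monitor.observe(risky=True, heeded_advice=False) for _ in range(3)]
print(actions)  # ['remind', 'remind', 'intervene']
```

The design keeps intervention strictly last: no observation of risky behavior triggers it until advice has demonstrably been ignored several times.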
Were one to be concerned about wider ethical considerations of virtual world economies, the notion of "camping" in Second Life, where users get their avatars to sit or dance on predefined paths for a specified period of time to earn L$1, would be one place to start. With an exchange rate around L$260 to US$1, this financial reward is highly unlikely to match the costs of the electricity used in supporting, largely, inactivity. Users are paying to support activities that are not particularly beneficial to the environment, in order that higher search ratings can be achieved by others. These users may be placing excitement about limited financial reward over and above their own financial or wider environmental concerns, or are simply lacking sufficient information to make robust decisions. The latter reason would provide particular concern in relation to gambling.

Online Gambling

Gambling can be defined as:

... betting or staking of something of value, with consciousness of risk and hope of gain, on the outcome of a game, a contest, or an uncertain event whose result may be determined by accident. Commercial establishments such as casinos ... may organize gambling when a portion of the money wagered by patrons can be easily acquired by participation as a favoured party in the game, by rental of space, or by withdrawing a portion of the betting pool (Gilmne, n.d.).

Given hope of gain, people are likely to play for money, not for fun, despite those who suggest gambling is for entertainment purposes only. By and large, the odds of losing are higher
than winning, and the providers will mostly benefit. Losing money in an environment where it appears possible to win money can lead to people making additional bets. The hope that further gambling will result in recouping existing expenditure is often referred to as chasing losses, and it is unlikely to succeed because of the odds involved. Most importantly, gambling is not necessarily considered a game of skill, so extensive knowledge about how to play is not always a necessary pre-condition for participation. These observations present risks of harm to gambling individuals and, by extension, to the gambling industry, with potential for addiction at minimum.

Gambling provides for a host of ethical questions when within a social environment in which others are present, but website-based online gambling changes the social dynamic by disassociating the action from both a location and a physical co-presence. As stated by Price (2006), "internet gambling, unlike many other types of gambling activity, is a solitary activity, which makes it even more dangerous: people can gamble uninterrupted and undetected for unlimited periods of time."

Different countries have legislated for and against the gambling industry to try to reduce the risks and possibilities of harm both to players and to society. The UK's Gambling Act 2005 discusses limiting the number of casinos, and forces the industry to demonstrate its plans for contributions to research, for raising public awareness about the problems gambling can cause, and for helping to treat those affected (Russell, 2006). The USA approached awareness issues by introducing the National Gambling Impact Study Commission Act 1996 (NGISCA; H.R.5474), which conducted a comprehensive legal and factual study of the social and economic impacts of gambling.
Some other steps for awareness have been taken by NGOs by introducing "responsible gambling": players should be aware of the time and money that they spend on gambling, plus the consequences and risks that are involved. When gambling websites are attempting to be responsible, they may produce documents containing the kinds of rhetoric presented below:

• We are there to help whenever you realize that you need control over the money that you spend
• We can decrease the amount of money you can put into your account if you ask.
• You can increase it again if you feel you are in control.
• If you think you need a break from gambling, you can use the self-exclusion tool
• If you suspect that you may have a gambling problem, you may seek professional help from the following links
• Make sure gambling does not become a problem in your life and you do not lose control of your play.
• Make sure that the decision to gamble is your personal choice.

For success, such statements rely on individuals who may be experiencing addiction to be aware of it, and to be in sufficient control to do something about it. The "problem" is then for the end user to deal with, and the organization has effectively absolved itself of responsibility. Gambling addiction is identified as one of the most destructive addictions, and one which is not physically apparent: an "invisible addiction" (Comeau, 1997). Psychologists believe that online gamblers are even more prone to addiction, mainly because users can play without interruption or detection. It is unlikely, then, that self-control could be exerted in the case of online gambling. Websites such as gambleaware.co.uk give potential players and gamblers knowledge about the odds of winning, the average return to players, the "house edge," a gambling fact and
fiction quiz and more, to make sure that players are aware of the results of their actions in this industry. Gambleaware (n.d.) defines a responsible gambler as a person who:

1. Gambles for fun, not to make money or to escape problems.
2. Knows that they are very unlikely to win in the long run.
3. Does not try to "chase" or win back losses.
4. Gambles with money set aside for entertainment, and never uses money intended for rent, bills or food.
5. Does not borrow money to gamble.
6. Does not let gambling affect their relationships with family and friends.

Defining measures to differentiate between healthy, responsible players and addicted gamblers offers the potential to control the actions of gamblers so as to prevent addiction; but without interrogating each individual, how would it be possible to evaluate against these criteria and distinguish a responsible gambler from an irresponsible one? It would appear, then, that there is an opportunity for the online gambling companies, and in particular those wishing to enhance their activities in virtual worlds, to account for legislative concerns and age constraints, and also to provide assistance in a responsible gambling environment. To become an "Ethical Corporation," there are three reasons the online gambling industry should take its responsibilities seriously (Saha, 2005):

1. To clear up the industry's traditional image
2. To attract potential customers who steer clear because of this image, and
3. To comply with regulations

Online Gambling Laws

Online activities generally present a challenge in enforcement, with Computer Law a growing area of challenge. While virtual world gambling returns some hint of a social dynamic lost from website-based gambling, with the appearance of virtual others, legal complexity remains. With US$24 billion predicted for the online gambling market by 2010, extracting such revenues suggested a need for laws applicable to online gambling; some countries tackled this by making specific laws, others amended old ones. A few considerations include:

• US: The Unlawful Internet Gambling Enforcement Act 2006 (UIGEA, H.R.4411): prohibits financial institutions from approving transactions between U.S.-based customer accounts and offshore gambling merchants (Carlson, 2007; Humphrey, 2006).
• US: Internet Gambling Regulation and Enforcement Act 2007 (IGREA, H.R.2046): "Providing a provision for licensing of internet gambling facilities by the Director of the Financial Crimes enforcement network"
• US: Skill Game Protection Act 2007 (SGPA, H.R.2610): "Legalize internet skilled games where players' skills are important in winning or losing games such as poker, bridge and chess"
• US: Internet Gambling Regulation and Tax Enforcement Act 2007 (IGRTEA, H.R.2607): "Legalize internet gambling tax collection requirements"
• Australia: Interactive Gambling Act 2001 (IGA): provides protection for Australian players from the harmful effects of gambling
• UK: Gambling Act 2005 (c. 19): "it is not illegal for British residents to gamble online and it is not illegal for overseas operators to offer online gambling to British residents (though there are restrictions on advertising)"

Approaches that countries take to online gambling can be divided into three main groups:

i. Those who do not allow gambling, e.g. Islamic countries (Lewis, 2003);
ii. Those who may allow gambling, potentially in some states, but not online, e.g. USA (GAO, 2002);
iii. Those who allow gambling, e.g. UK.
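This three-way grouping amounts to a jurisdiction lookup, which a system such as the one proposed here might sketch as follows. The dictionary below holds only a handful of example entries drawn from the discussion; a real deployment would need the full country table and authoritative legal data:

```python
# Illustrative status table for the three groups above (sample entries only).
ONLINE_GAMBLING_STATUS = {
    "UK": "allowed",          # group iii: gambling allowed, including online
    "Malta": "allowed",
    "USA": "offline only",    # group ii: gambling in some states, but not online
    "Iran": "not allowed",    # group i: gambling not allowed
}

def may_gamble_online(country: str) -> bool:
    """Only group (iii) jurisdictions permit an online gambling session;
    unknown jurisdictions are refused rather than guessed at."""
    return ONLINE_GAMBLING_STATUS.get(country) == "allowed"

print(may_gamble_online("UK"), may_gamble_online("USA"))  # True False
```

Defaulting to refusal for unknown jurisdictions reflects the precautionary stance argued for throughout this paper.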

A glimpse of considerations in 100 countries is shown in Table 1:

Table 1: Online Gambling in 100 countries

Countries and territories where online gambling is legal:

1. Aland Islands       19. Dominican Republic  37. Lithuania             55. Seychelles
2. Alderney            20. Estonia             38. Luxembourg            56. Singapore
3. Antigua             21. Finland **          39. Macau                 57. Slovenia
4. Argentina           22. France ***          40. Malta                 58. Solomon Islands
5. Aruba               23. Germany             41. Mauritius             59. South Africa
6. Australia *         24. Gibraltar           42. Monaco                60. South Korea
7. Austria             25. Grenada             43. Myanmar               61. Spain
8. Bahamas             26. Hungary             44. Nepal                 62. St. Kitts and Nevis
9. Belgium             27. Iceland             45. Netherlands Antilles  63. St. Vincent
10. Belize             28. India               46. Norfolk Island        64. Swaziland
11. Brazil             29. Ireland             47. North Korea           65. Sweden
12. Chile              30. Isle of Man         48. Norway                66. Switzerland
13. Colombia           31. Israel              49. Panama                67. Taiwan
14. Comoros            32. Italy               50. Philippines           68. Tanzania
15. Costa Rica         33. Jamaica             51. Poland                69. United Kingdom
16. Czech Republic     34. Jersey              52. Russia                70. US Virgin Islands
17. Denmark            35. Kalmykia            53. Sark                  71. Vanuatu
18. Dominica           36. Latvia              54. Serbia                72. Venezuela

Countries where online gambling is illegal:

1. Afghanistan   8. Greece       15. New Zealand   22. Taiwan
2. Algeria       9. Hong Kong    16. Nigeria       23. Thailand
3. Bahrain       10. Indonesia   17. Pakistan      24. The Bahamas
4. Brunei        11. Iran        18. Portugal      25. The Netherlands
5. China         12. Japan       19. Saudi Arabia  26. Turkey
6. Cyprus        13. Jordan      20. South Korea   27. United States
7. Dubai         14. Libya       21. Sudan         28. Vietnam

* For Australia, different regulations might apply to different states.
** Must be a Finnish resident with a Finnish bank account.
*** France does not allow online gambling companies within its borders, but its citizens can gamble.

There may be arguments that users should take responsibility for choosing whether or not to gamble, based on whether the laws of the country they are in at the time allow it. In the online world, one would hope that the online gambling website has been legitimately set up in the host country; however, this is not necessarily a given. This is further complicated by individuals being able to gamble in different ways at different ages in different countries: for example, at 18 in the UK, 20 in New Zealand, 21 in Nepal. In principle, then, an account registered by an 18-year-old in the UK for a UK-based online gambling site should prevent them from gambling if they travel to New Zealand or Nepal and log in. However, in the UK a 16-year-old is able to buy tickets for the National Lottery, although the website advises "players to assume that it is unlawful to purchase a ticket whilst abroad, and to only buy their tickets whilst located in the UK or Isle of Man," and the rules have been criticized for being unclear (BBC News, 2009). The burden, here, is primarily on the user, though the technologically-savvy user may be able to make use of a virtual private network (VPN) or web proxy to avoid restrictions placed on network addresses and shift the burden back to the company. The challenge of age verification has been identified for online retailers in general by UK-based trade group IMRG (2009).

Machine Ethics

Machine ethics, generally, is concerned with defining how machines should behave towards human users and other machines, with emphasis on avoiding harm and other negative consequences of autonomous machines, or unmonitored and unmanned computer programs. Researchers in machine ethics aim towards constructing machines whose decisions and actions will honour privacy, protect civil rights and individual liberty, and further the welfare of others (Allen, Wallach and Smit, 2005).
To produce ethical machines, it is necessary to understand how humans deal with ethics in decision making, and then try to construct appropriate behaviors within machines or autonomous avatars which, given continuous availability and unemotional responses, might start to replace human (ethical) advisors in the near future. Steps towards ethical
machines have been taken that focus on medical ethics, attempting to ensure human safety and social health. Such systems are intended towards understanding, and possibly reducing or avoiding, the potential for harm to an individual from, for example, unnecessary or incorrect medical intervention. In these systems, the final decision remains one of a human decision-maker, informed by ethical considerations. The mainstream literature largely discusses using Case-Based Reasoning and machine learning techniques to implement systems that can mimic the responses of the researchers (Anderson, Anderson and Armen, 2005b; McLaren and Ashley, 2000). A future machine-based ethical advisor has the following anticipated advantages, many of which are familiar arguments in the development of intelligent systems:

• Always available
• Unemotional
• Can employ a mixture of ethical theories
• Can explain its reasoning
• Capacity for simulations
• Capacity for a range of legal considerations
• No hypothetical limits on the number of situations assessed

A synthesized overview of many of the systems reported in the literature as ethical machines is shown in Table 2. Each of them has a specific "ethical approach" and "technique" to solve the ethical dilemmas, and is targeted at particular audiences and challenges for those audiences.

Table 2: Evaluation of existing applications

Name | Developed by | Ethical approach | Techniques | Suitable for | Ethical area
Ethos | Searing, D. | Moral DM | Not AI | Engineering students | Practical ethical problems
Dax Cowart | Multiple writers | Moral DM | Not AI; some ethical samples | Students, teachers | Biomedical ethics, right to die, killing or allowing to die
Metanet | Guarini, M. | Particularism, motive consequentialism | Pair case (SRN), case base, neural network (training), three layers | (not stated) | Killing or allowing to die; problems in flagging
(unnamed) | Robins, R. & Wallach, W. | Desire-intention | Not implemented; multi-agent | (not stated) | (not stated)
Truth-Teller | McLaren, B. M. | Casuistry | Pair case, Case-Based Reasoning | Ethical advice | Pragmatic or hypothetical cases
HYPO | Ashley, K. D. | Legal reasoning | Case base | Legal advice | Hypothetical cases
SIROCCO | McLaren, B. M. | Casuistry | Pair case, Case-Based Reasoning, simulating "moral imagination" | Ethical device | NSPE Code of Ethics
Jeremy | Anderson, M., Anderson, S., Armen, C. | Hedonistic act utilitarianism | "Moral arithmetic" | Rule generalization | Biomedical ethics
W.D. | Anderson, M., Anderson, S., Armen, C. | Prima facie duty, casuistry | Inductive logic programming, learning algorithm, reflective equilibrium | Rule generalization | Medical ethics
MedEthEx | Anderson, M., Anderson, S., Armen, C. | Prima facie duty, casuistry, W.D. | Inductive logic programming, machine learning, reflective equilibrium | Health care workers | Biomedical ethics
EthEl | Anderson, M., Anderson, S. | Prima facie duty, casuistry, W.D. | Inductive logic programming, learning algorithm, reflective equilibrium | Eldercare | Medical ethics

Machine Ethics for Online Gambling: EthiCasino

Machine ethics has not, until now, been applied for avoidance of harm in relation to online gambling. Because gambling, alongside a number of other pursuits, has potential for addiction, it could be claimed that a system for ethical gambling may be as effective for human and social health as one for medical ethics. Machine ethics may not cure addiction, but it may be able to act to reduce the likelihood of addiction. Our consideration here is how machine ethics may support responsible gambling and lead towards such an Ethical Corporation.

We base the design of EthiCasino on prior literature and systems in Machine Ethics as shown in Table 2, including Truth-Teller, SIROCCO, MedEthEx and EthEl. Truth-Teller and SIROCCO implement case-based reasoners, comparing structured descriptions of the current scenario with previously resolved cases to support decision-making. Since each user's session is likely to have some unique characteristics, case-bases may need to be populated with large numbers of variant cases comprising different outcomes. We have been inspired in particular by three of the systems above, W.D., MedEthEx and EthEl, which have used Ross' prima facie duties (1930), extended by Garrett (2004). Ross introduced seven "prima facie duties" as guidelines for solving ethical dilemmas, not as rules without exception. If an action does not satisfy a "duty", it is not necessarily violating a "rule"; however, if a person is not practising these duties then he or she is failing in their duties. Garrett (2004) believed there to be aspects of human ethical life not covered by Ross, and extended this list with three further duties. MedEthEx uses a series of questions with three responses, "Yes", "No" and "Don't know", to decide the outcome in relation to three of Ross' and Garrett's duties: non-injury, beneficence and freedom (autonomy).
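This style of duty weighting, where each answer contributes a score in [-2, +2] to each of the three duties and the per-duty totals inform the advice given, can be sketched as follows. The weights and the example answers below are our own illustrative assumptions, not the published MedEthEx model:

```python
# Illustrative sketch of prima facie duty weighting (not the MedEthEx model).
DUTIES = ("non-injury", "beneficence", "autonomy")

def advise(weighted_answers: list) -> dict:
    """Sum per-duty weights (each in -2..+2) across the answered questions;
    the totals indicate which duties the proposed action supports or violates."""
    totals = {duty: 0 for duty in DUTIES}
    for answer in weighted_answers:
        for duty, weight in answer.items():
            totals[duty] += weight
    return totals

# Hypothetical weights attached to two answers ("Yes", "Don't know"):
answers = [
    {"non-injury": -2, "beneficence": +1, "autonomy": +2},
    {"non-injury": -1, "beneficence": +2, "autonomy": 0},
]
print(advise(answers))  # {'non-injury': -3, 'beneficence': 3, 'autonomy': 2}
```

A strongly negative total on a duty such as non-injury would flag the action as ethically problematic even when other duties score well.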
By weighting outcomes between -2 and +2, the application explains the likely impact on the patient, clarifying the areas in which decisions will be made. EthEl takes two kinds of actions based on decisions made: (i) reminding users; (ii) notifying overseers. A system using Ross' and Garrett's duties for responsible gambling should consider the potential for the duties not being satisfied, and act accordingly. For EthiCasino, we have addressed five main, often interdependent, stages involving legal and ethical considerations:

Stage 1: Legal considerations

Consideration of legal issues involves variations in acceptability of online gambling and associated age restrictions in 100 countries, as presented above. Here, online gambling environments in general, and EthiCasino in particular, can attempt to capture the geographical location (DNS lookup) of the end user and act accordingly, but because of the capacity for technological circumvention the gambler needs to self-certify. Self-certification is required, also, for confirming the age of the end user. Should the location of the end user change over time from the original registration, the legal situation may change accordingly, and location information must be captured and verified for each session.

Stage 2: Knowledge of Risk

Decisions related to financial risks may be taken in a number of business environments, especially in relation to stock markets and world economies. Those involved in taking such decisions are usually considered well-informed, and have a number of checks and balances against which to validate their decisions or off-set their risks and/or losses. The person's knowledge is the effective tool in making the final decision. Unfortunately, because of the purported
"entertainment" aspect of gambling, it is less important for users to have such knowledge or to consider how to off-set risks and losses, and more favorable to revenues if users are less well-informed. To evaluate the risk behaviors of end users, we designed a questionnaire comprising 12 questions related to gambling fact and fiction and 8 related to risk and loss aversion. We offered L$10 to participants, equivalent to around 2½ hours of camping, and obtained 61 responses to this questionnaire from Second Life users within a week. On average, 12.22 questions were correctly answered, with a minimum of 7 and a maximum of 17. We a priori weighted questions based on our own perceptions of associated risk or negative impact on users in the absence of knowledge, leading to a division of questions into four categories:

1. Low risk: users should be able to learn quickly, or lack of knowledge will not have much negative impact, e.g. Q3: "Some people are luckier than others" (fact or fiction).
2. Medium risk: users may believe in luck, e.g. Q6: "My lucky number will increase my chance of winning the lottery" (fact or fiction).
3. Medium-high risk: questions relate to calculations and predictability of results, e.g. Q14: "Assume you bet $1 on the toss of a coin; the chances of heads or tails are 50/50. If you win and the 'house edge' is 10%, how much will you be paid? (10c, 50c, 90c, $1)".
4. High risk: questions regard perceptions of earning money and the realistic facts of gambling, e.g. Q1: "Gambling is an easy way to make money" (fact or fiction).

User answers and weightings led to three distinct classes of users (Figure 1). Broadly identifying these classes of user allows our system to vary its responses to gambling behaviors depending on how informed the user appears to be:

• • •

Group one: Those who may only need additional information about the games (low and medium risk questions) Group two: Those who need to be reminded about the facts (medium-high risk questions), and Group three: Those who need full monitoring and potential intervention because they are less informed and might be more prone to addiction (high risk questions)
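The grouping above can be sketched in code. This is a hypothetical scoring function, not the paper's implementation: the question-to-category mapping shows only an illustrative subset, and the rule "a user falls into the group matching the riskiest category they got wrong" is our assumption about how the weighted answers translate into groups.

```python
# Hypothetical sketch: assign a user to risk group 1, 2 or 3 from their
# questionnaire answers. CATEGORY maps question numbers to the four risk
# categories described in the text (illustrative subset only).
CATEGORY = {
    1: "high",          # "Gambling is an easy way to make money"
    3: "low",           # "Some people are luckier than others"
    6: "medium",        # "My lucky number will increase my chance..."
    14: "medium-high",  # house-edge payout calculation
}

def classify(correct):
    """correct: dict mapping question number -> True if answered correctly.
    Returns the user's risk group (1, 2 or 3)."""
    missed = [CATEGORY[q] for q, ok in correct.items()
              if not ok and q in CATEGORY]
    if "high" in missed:
        return 3   # full monitoring and potential intervention
    if "medium-high" in missed:
        return 2   # needs reminders about the facts
    return 1       # may only need additional game information
```

A user who misses a high-risk question lands in group three regardless of their other answers, mirroring the idea that lack of knowledge on high-risk items is the strongest signal of vulnerability.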


Figure 1: Risk groups based on responses to questions on gambling

To evaluate these behavior profiles, we analyzed the correlations between the 20 questions for 50 users (Table 3), expecting diversification to exist across the various responses. In the resulting correlation matrix, the maximum correlation between 18 of the 20 questions was less than 0.5 (on a scale of -1 to +1), suggesting that the questions themselves had a reasonable degree of independence. On this basis, the risk classification becomes the important factor, since the individual questions do not act as reliable predictors for others in the same class.
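The independence check can be sketched as follows. The data here is synthetic (random yes/no answers standing in for the real 50-user × 20-question responses, which we do not have); the point is the computation: build the question-by-question correlation matrix and inspect the largest off-diagonal value, the quantity compared against 0.5 in the text.

```python
import numpy as np

# Synthetic stand-in for the survey data: 50 users x 20 binary answers.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(50, 20))

# Question-by-question correlation matrix (rowvar=False: columns are variables).
corr = np.corrcoef(responses, rowvar=False)        # shape (20, 20), diagonal = 1

# Largest off-diagonal correlation: the independence criterion from the text.
off_diag = corr - np.eye(corr.shape[0])            # zero out the unit diagonal
max_abs = np.abs(off_diag).max()
```

If `max_abs` stays well below 0.5, the questions carry largely independent information, which is what justifies treating the risk categories, rather than individual questions, as the unit of classification.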


Table 3: Correlation matrix of collected data

       1     2     3     4     5     6     7     8     9    10    11    12    13    14    15    16    17    18    19    20
 1   1.00 -0.11  0.22  0.09  0.81  0.12  0.15  0.43  0.04 -0.05  0.43 -0.11  0.22  0.29  0.17 -0.25 -0.06  0.01  0.20 -0.01
 2  -0.11  1.00  0.08  0.21 -0.11  0.29  0.26  0.16  0.04 -0.14  0.11 -0.19 -0.01 -0.14  0.19 -0.18  0.29  0.07  0.24 -0.01
 3   0.22  0.08  1.00  0.47  0.22  0.27  0.22  0.09  0.25  0.20  0.26  0.12  0.00  0.25 -0.22 -0.32  0.18 -0.08 -0.20  0.15
 4   0.09  0.21  0.47  1.00  0.09 -0.08  0.28 -0.16  0.39  0.08  0.12  0.07  0.09  0.30  0.15 -0.22  0.37  0.07  0.02 -0.04
 5   0.81 -0.11  0.22  0.09  1.00  0.12  0.31  0.43 -0.08  0.10  0.43  0.12  0.22  0.29  0.17 -0.38 -0.06 -0.11  0.20  0.12
 6   0.12  0.29  0.27 -0.08  0.12  1.00  0.05  0.34  0.18  0.38  0.22 -0.09 -0.16  0.04 -0.25 -0.26  0.17 -0.28  0.16  0.10
 7   0.15  0.26  0.22  0.28  0.31  0.05  1.00  0.15 -0.10  0.13  0.03  0.05  0.28  0.21  0.08 -0.21  0.16  0.18  0.12  0.10
 8   0.43  0.16  0.09 -0.16  0.43  0.34  0.15  1.00  0.17  0.10  0.43 -0.11 -0.06  0.04  0.01 -0.38 -0.06  0.01  0.05 -0.01
 9   0.04  0.04  0.25  0.39 -0.08  0.18 -0.10  0.17  1.00  0.21  0.16  0.18 -0.12  0.11 -0.21 -0.13  0.25  0.04 -0.21 -0.13
10  -0.05 -0.14  0.20  0.08  0.10  0.38  0.13  0.10  0.21  1.00  0.10  0.02 -0.02  0.11 -0.25 -0.03 -0.02 -0.03 -0.07  0.13
11   0.43  0.11  0.26  0.12  0.43  0.22  0.03  0.43  0.16  0.10  1.00  0.22 -0.05  0.16  0.10 -0.49 -0.16  0.02  0.02 -0.04
12  -0.11 -0.19  0.12  0.07  0.12 -0.09  0.05 -0.11  0.18  0.02  0.22  1.00  0.01  0.18 -0.05 -0.26  0.01 -0.14 -0.20  0.10
13   0.22 -0.01  0.00  0.09  0.22 -0.16  0.28 -0.06 -0.12 -0.02 -0.05  0.01  1.00  0.34  0.08 -0.04 -0.04  0.11 -0.20 -0.25
14   0.29 -0.14  0.25  0.30  0.29  0.04  0.21  0.04  0.11  0.11  0.16  0.18  0.34  1.00  0.10 -0.21  0.16  0.28  0.08 -0.04
15   0.17  0.19 -0.22  0.15  0.17 -0.25  0.08  0.01 -0.21 -0.25  0.10 -0.05  0.08  0.10  1.00 -0.01 -0.04  0.03  0.25  0.01
16  -0.25 -0.18 -0.32 -0.22 -0.38 -0.26 -0.21 -0.38 -0.13 -0.03 -0.49 -0.26 -0.04 -0.21 -0.01  1.00 -0.14  0.18  0.03 -0.02
17  -0.06  0.29  0.18  0.37 -0.06  0.17  0.16 -0.06  0.25 -0.02 -0.16  0.01 -0.04  0.16 -0.04 -0.14  1.00  0.02 -0.09 -0.15
18   0.01  0.07 -0.08  0.07 -0.11 -0.28  0.18  0.01  0.04 -0.03  0.02 -0.14  0.11  0.28  0.03  0.18  0.02  1.00  0.12 -0.01
19   0.20  0.24 -0.20  0.02  0.20  0.16  0.12  0.05 -0.21 -0.07  0.02 -0.20 -0.20  0.08  0.25  0.03 -0.90  0.12  1.00  0.28
20  -0.01 -0.01  0.15 -0.04  0.12  0.10  0.10 -0.01 -0.13  0.13 -0.04  0.10 -0.25 -0.04  0.10 -0.25 -0.04  0.01  0.28  1.00

Stage 3: Boundaries for time and money

For a user to stay in control - part of the main challenge of gambling - the system should allow them to opt for boundaries. Considering that each user's background and experience is different, and that there is such variation across responses to 20 questions about gambling, it could be unethical to enforce boundaries without end-user permission. Users are asked to define their own boundaries for both the amount of time and the amount of money they plan to spend: these two elements are core to addiction and harm. The user's choice of boundaries is checked against their apparent riskiness. Users with profiles in Groups 1 and 2 will be allowed to participate with limited interference; users in Group 3 will receive a moderated limit as their maximum boundary (Figure 2).

Figure 2: Maximum boundaries for each category
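The Stage 3 check can be sketched as follows. The cap values below are hypothetical placeholders (the actual maxima per category are those shown in Figure 2); only the structure — self-chosen limits for Groups 1 and 2, a moderated ceiling for Group 3 — comes from the text.

```python
# Hypothetical moderated maximum for Group 3 users (placeholder values;
# the paper's actual maxima are given in Figure 2).
MODERATED_CAP = {"minutes": 120, "linden_dollars": 500}

def accept_boundaries(group, minutes, linden_dollars):
    """Return the time/money boundaries the system will actually enforce
    for a user in the given risk group (1, 2 or 3)."""
    if group in (1, 2):
        # Groups 1 and 2: the user's own limits stand, with limited interference.
        return {"minutes": minutes, "linden_dollars": linden_dollars}
    # Group 3: requested limits may not exceed the moderated maximum.
    return {
        "minutes": min(minutes, MODERATED_CAP["minutes"]),
        "linden_dollars": min(linden_dollars, MODERATED_CAP["linden_dollars"]),
    }
```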

Stage 4: Appropriate reminders: “nagware”

In EthiCasino, to minimize the potential for destructive behaviors, we adopt the idea of “nagware” [A], as used by a number of software providers to remind users of specific actions, e.g. that they should pay for the software they have been using. In EthiCasino, this nagware is called VIKI [B] and undertakes specific responsibilities:

• Artificial ethical conscience: suggestions allied to risk taking and the user's circumstances, e.g. “high risk of losses, do you still want to bet?”
• Educational: providing access to information about each game and the risks and odds associated with it, e.g. “roulette, your odds are 35 to 1”
• Nagging: regularly reminding users, depending on their risk profiles, about the time and money spent, as both diminish.


Users receive reminders as they approach their own specified limits. Those identified as having riskier behaviors will receive more reminders than other users. Those who have spent their money more quickly may be tempted to spend more, sometimes chasing losses. Those who manage to avoid losses within the initial time period may be encouraged to continue and to make assumptions about the likelihood of larger future wins. Of course, user profiles may change over time depending on the increased or decreased risky behavior of the end user (Fig. 3).

Figure 3: Possible users' behavior
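VIKI's escalation from reminders to enforcement can be sketched as follows. The reminder thresholds are illustrative assumptions (the paper does not specify them); what comes from the text is the shape: riskier groups are nagged earlier and more often, and once the final boundary is reached further gambling is stopped (Stage 5).

```python
# Hypothetical reminder thresholds, as fractions of the user's own limit.
# Riskier groups get more, earlier reminders (assumed values, not from the paper).
REMIND_AT = {1: [0.9], 2: [0.75, 0.9], 3: [0.5, 0.75, 0.9]}

def viki_action(group, spent, limit):
    """Decide VIKI's action for money (or time) consumed so far:
    'ok', 'remind', or 'stop'."""
    used = spent / limit
    if used >= 1.0:
        return "stop"      # boundary reached: prevent further gambling (Stage 5)
    if any(used >= t for t in REMIND_AT[group]):
        return "remind"    # nag the user about time/money remaining
    return "ok"
```

A Group 3 user halfway through their budget already gets a reminder, while a Group 1 user at the same point does not — the per-group thresholds implement "more reminders for riskier behaviors".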

Stage 5: Boundary conditions

After users receive their final reminder from VIKI, they will be prevented from further gambling. The purpose here is to ensure that the user's own boundaries are enforced and that risky behaviors do not lead to harm. In other words, EthiCasino acts to prevent behaviors that might lead to addiction. Those continuing beyond their own time and financial limits may also be going beyond their own limits of rational behavior. A virtual doorman who ejects non-conforming end users is a possible future consideration.

Discussion

In this paper we have discussed the legal and ethical issues relating to online gambling of various kinds, and how the construction of Open Grid Protocols for virtual worlds enables interoperability amongst virtual worlds and between public and private systems, which could benefit those implementing, or in some cases returning, online gambling in virtual worlds. In particular, such considerations could reverse the substantial decline in in-world turnover caused by gambling being banned in one particular virtual world. We have demonstrated the legal and ethical issues of gambling online and in virtual worlds, and discussed the construction and evaluation of a system with computational oversight: the EthiCasino. The EthiCasino is grounded in recent research into machine ethics, which offers insights into other legal and ethical matters, and provides a framework for responsible gambling in our prototype in Second Life. EthiCasino's goal is to prevent ethical and legal issues, not to resolve them. EthiCasino is a prototype system [C] that implements specific ethical theories and learns about the


risky behavior and (lack of) knowledge of its users. It is an attempt to prevent the harm that arises through increased risk taking. The majority of existing machine ethics systems provide advice to help users, often medical practitioners, to make decisions that are ethically acceptable. EthiCasino takes a step forward with a testable implementation of its framework in Second Life, which tries to improve not only the users' decisions but also its own ethicality through different stages. While most of the ethical systems considered in this paper are either conceptual or prototype conceptual models, which have never been tested with actual users, the ethical principles behind EthiCasino have been implemented and tested to a certain extent. Excluding MedEthEx and SIROCCO, other ethical systems are unavailable, and in some instances the data and the code have been discarded. Systems such as Metanet and SIROCCO rely on subject-specific knowledge, whereas EthiCasino tests the knowledge of the participants. Most systems in machine ethics are based on the application of absolute rules; a few consider prima facie duties, e.g. W.D., MedEthEx and EthEl. EthiCasino is comparable with W.D. and MedEthEx because of its adoption of Ross's duties, and with EthEl because of its reminders and actions. Where MedEthEx creates a simple expert system to give ethical advice, EthiCasino combines technologies and techniques to assure ethics throughout. While MedEthEx and EthEl concentrate on the three main duties of non-injury, beneficence and freedom, EthiCasino considers a wider range of duties; in particular, EthiCasino employs 6 of Ross's 7 duties and all 3 duties defined by Garrett in different stages (Table 4). Using these prima facie duties enables the system to learn from users' behavior even if it might not match exactly the original definition of the duties.

Table 4: Duties of Ross and Garrett in each stage

Stage        | Name             | Ross's duties involved
Stage one    | Legal issues     | Justice, harm prevention, non-injury, beneficence, self-improvement
Stage two    | Ethical issues   | Justice, harm prevention, non-injury
Stage three  | Boundaries       | Justice, harm prevention, respect of freedom, fidelity, gratitude
Stage four   | VIKI's reminders | Non-injury, beneficence, self-improvement, care
Stage five   | VIKI's alert     | Justice, harm prevention, non-injury, beneficence

EthiCasino takes certain actions to assure users' safety and wellbeing by minimizing the possibility of problematic and addictive behavior, providing ethically acceptable support, and mimicking the actions of human ethical advisors. This aims at ensuring fair actions for both virtual gambler and virtual casino:


1. Gambler:
   a. Clarify the possible risks of gambling online
   b. Choose the playing hours and the amount of money they wish to gamble
   c. Remind the users of their playing hours and the amount of money they are losing
2. Casino:
   a. Take decisions about whether or not to let specific persons play, based on their answers
   b. Notify the company if a user is going over their own limits
   c. Log the user off if they do not take action after being reminded by the system

With its substantial estimated revenues, a system such as EthiCasino may help to ensure that the ethical side of gambling remains to the fore by addressing issues relating to the impulse to gamble (Cutter and Smith, 2008). Reactive and non-intervening systems will not deal effectively with these issues, because problem gamblers deny the problem. EthiCasino requires users to define their knowledge and limitations before they start, and takes action if their self-imposed limits are being exceeded; it may not allow users who demonstrate limited knowledge of risks and losses to increase their limits. We claim that EthiCasino could create a situation in which users need not worry about addiction and gambling problems and can treat their interaction as entertainment. The prototype framework of EthiCasino is relatively well developed, and EthiCasino has been evaluated by a number of machine ethicists and experts in philosophy, computer science and business. However, a large-scale user-based evaluation is needed in order to fully explore the effectiveness of this framework. Such an evaluation currently presents a Catch-22: it is difficult to conceive of such an evaluation, since the testing would entail gambling being allowable in Second Life. A move to a different virtual world, such as OpenSim, or the creation of a private virtual world, may allow for such an evaluation. Successful outcomes could lead to wider considerations for business ethics and decision making.


Bibliography

Allen, C., Wallach, W., and Smit, I. (2005). Why machine ethics? IEEE Intelligent Systems, 21(4), pp. 12-17.

Anderson, M., Anderson, S., and Armen, C. (2005). Towards machine ethics: Implementing two action-based ethical theories. Proc. AAAI 2005 Fall Symposium on Machine Ethics, AAAI Press, pp. 1-7.

Anderson, M., Anderson, S.L., and Armen, C. (2005b). MedEthEx: Toward a medical ethics advisor. Proc. AAAI 2005 Fall Symp. Caring Machines: AI in Elder Care, AAAI Press, pp. 9-16.

Anderson, M., and Anderson, S. (2008). EthEl: Toward a principled ethical eldercare robot. Robotic Helpers: User Interaction, Interfaces and Companions in Assistive and Therapy Robotics Workshop at the Third ACM/IEEE Human-Robot Interaction Conference, ACM/IEEE, Amsterdam, NL, pp. 33-39.

Ashley, K.D., and McLaren, B.M. (1995). Reasoning with reasons in case-based comparisons. First International Conference on Case-Based Reasoning (ICCBR-95), Sesimbra, Portugal, pp. 133-144.

BBC News. (2009). Holiday lottery rules 'not clear'. Retrieved September 2009 from: http://news.bbc.co.uk/1/hi/8190956.stm

Carlson, D. (2007). Internet gambling law a success, but faces scrutiny. Retrieved April 2008 from The Ethics and Religious Library Commission: http://erlc.com/article/internetgambling-law-a-success-but-faces-scrutiny

Chang, A. (2009). Revisiting Second Life's gambling ban. Retrieved July 2009 from: http://blog.media-freaks.com/revisiting-second-life%e2%80%99s-gambling-ban/

Christiansen Capital Advisors, LLC (CCA). (2004). Internet gambling estimates. Retrieved August 2008 from: http://www.cca-i.com/Primary Navigation/Online%20Data%20Store/internet_gambling_data.htm

Claburn, T. (2007). Second Life gambling ban gets mixed reactions. Retrieved May 2008 from InformationWeek: http://www.informationweek.com/news/internet/showArticle.jhtml;jsessionid=1BW0YIPOS4ECGQSNDLRSKHSCJUNN2JVN?articleID=201201441&pgno=1&queryText=&isPrev=

Comeau, S. (1997). Getting high on gambling. Retrieved July 2008 from: http://reporterarchive.mcgill.ca/Rep/r3004/gambling.html

Cutter, C. and Smith, M. (2008). Gambling addiction and problem gambling: signs, symptoms and treatment. Retrieved July 2008 from helpguide.org: http://www.helpguide.org/mental/gambling_addiction.htm

Gambleaware. (n.d.). Responsible gambling. Retrieved May 2008 from Gambleaware: http://gambleaware.co.uk/responsible-gambling


Gambling Act 2005. Retrieved May 2008 from UK Gambling Law: http://www.ukgamblinglaw.co.uk/

Gardiner, B. (2007). Bank failure in Second Life leads to call for regulation. Retrieved August 2008 from: http://www.wired.com/gaming/virtualworlds/news/2007/08/virtual_bank

GAO. (2002). Internet gambling: An overview of the issues. Retrieved March 2008 from: http://www.gao.gov/new.items/d0389.pdf

Garrett, J. (2004). A simple and usable (although incomplete) ethical theory based on the ethics of W. D. Ross. Western Kentucky University. Retrieved September 2009 from: http://www.wku.edu/~jan.garrett/ethics/rossethc.htm

Glimne, D. (n.d.). Gambling. Retrieved July 2008 from Encyclopaedia Britannica web site: http://www.britannica.com

Humphrey, C. (2006). New online gambling funding prohibition law. Retrieved April from Gambling Law US: http://www.gambling-law-us.com/Articles-Notes/specific-pointsUIEGA.htm

IMRG. (2009). Age verification online, IMRG statement and action plan. Retrieved July 2009 from: http://www.imrg.org/8025741F0065E9B8/(httpPressReleases)/935B0A809436FE27802575FD004E3019?OpenDocument

Interactive Gambling Act 2001. Retrieved March 2008 from ComLaw: http://www.comlaw.gov.au/comlaw/Legislation/ActCompilation1.nsf/0/A9ADDC00214ECF6BCA25702600029EEB/$file/InteractGamb2001WD02.pdf

Internet Gambling Regulation and Enforcement Act 2007. Retrieved April 2008 from GovTrack.us: http://www.govtrack.us/congress/bill.xpd?bill=h110-2046

Internet Gambling Regulation and Tax Enforcement Act 2007. Retrieved April 2008 from GovTrack.us: http://www.govtrack.us/congress/bill.xpd?bill=h110-2607

Lewis, E. (2003). “Gambling and Islam: Clashing and co-existing.” Retrieved May 2008 from: http://www.math.byu.edu/~jarvis/gambling/student-papers/eric-lewis.pdf

Linden, Z. (2007). The Second Life economy. Retrieved April 2009 from Linden Lab official blog: https://blogs.secondlife.com/community/features/blog/2007/08/14/the-second-lifeeconomy

Linden Research, Inc. (2008). Second Life Grid Open Grid Protocol. Retrieved March 2008 from: http://wiki.secondlife.com/wiki/SLGOGP_Draft_1

McLaren, B.M. (2003). Extensionally defining principles and cases in ethics: an AI model. Artificial Intelligence Journal, 150, pp. 145-181.

McLaren, B.M. and Ashley, K.D. (2000). Assessing relevance with extensionally defined principles and cases. Proc. of the 17th National Conference on Artificial Intelligence, AAAI Press, Austin, Texas, pp. 316-322.

Pasick, A. (2007). FBI checks gambling in Second Life virtual world. Retrieved March 2008 from: http://www.reuters.com/article/technolog News/idUSHUN43981820070405


Price, J. (2006). Gambling - Internet gambling. Retrieved May 2008 from The Ethics and Religious Library Commission: http://erlc.com/article/gambling-internet-gambling

Ross, W.D. (1930). The right and the good. Oxford: Oxford University Press.

Russell, J. (2006). Europe: Responsible gambling: A safer bet? Retrieved May 2008 from Ethical Corporation: http://www.ethicalcorp.com/content.asp?ContentID=4291

Salmasi, A.V. and Gillam, L. (2008). Machine ethics for metaverse gambling: No stake in a $24m market? Proc. IEEE International Conference in Games and Virtual Worlds for Serious Applications (VS-Games).

Saha, P. (2005). “Gambling with responsibilities.” Retrieved May 2008 from: http://www.ethicalcorp.com/content.asp?ContentID=3774

Sidel, R. (2008). Cheer up, Ben: Your economy is not as bad as this one. Retrieved May 2009 from The Wall Street Journal: http://online.wsj.com/article/SB120104351064608025.html?mod=todays_us_page_one

Skill Game Protection Act 2007. Retrieved April 2008 from Law of the Game: http://lawofthegame.blogspot.com/2007/06/hr-2610-skill-game-protection-act.html

Wagner, M. (2007). Second Life casino owner left scrambling after gambling ban. Retrieved August 2008 from: http://www.informationweek.com/news/management/showArticle.jhtml?articleID=201201449

Yahia, M. (2007). No bets for gambling in Second Life. Retrieved July 2009 from: http://www.islamonline.net/servlet/Satellite?c=Article_C&pagename=Zone-EnglishHealthScience%2FHSELayout&cid=1184649504609

[A] The idea to describe this as “nagware” was introduced by Prof. Allen, Indiana University (private correspondence, 16/6/2008).
[B] Virtual Interactive Kinetic Intelligence (VIKI) is a fictional computer introduced by Isaac Asimov. She serves as a central computer for robots, providing them with a form of "consciousness" recognizable to humans.
[C] The prototype has been built on Surrey Island: http://slurl.com/secondlife/Surrey%20Island/144/149/25



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Virtual Chironomia: Developing Non-verbal Communication Standards in Virtual Worlds
By Gustav Verhulsdonck, New Mexico State University, and Jacquelyn Ford Morie, University of Southern California

Abstract

Online virtual worlds offer new ways to explore evolving forms of social interaction, including the use of non-verbal elements in conjunction with the other communication modalities of text and voice. The ancient rhetorician Cicero coined the term “chironomia” for non-verbal communication elements used in a persuasive manner. Non-verbal communication is an inherently human trait and, while virtual worlds provide an immersive space for interaction, they also introduce new questions regarding standards and best communication practices within them. Because virtual worlds present a richer environment with multiple semiotic modes of interaction, they add channels for communication beyond previous text-based online modalities. In such worlds, users can select and execute non-verbal behavior in a rhetorical manner by animating their avatar, thus performing in a virtual context. Therefore, communication in virtual worlds presents an intentional “speech act” in which a speaker purposefully seeks to evoke a particular response or transmit specific semantic content. As people's behavior in virtual worlds evolves and codifies, virtual worlds as a communication platform will need to develop standards based on successful user practices. In this paper we propose the need for a virtual chironomia – a standard for non-verbal elements in virtual worlds.

Keywords: virtual worlds; standards; non-verbal communication; computer-mediated communication (CMC); rhetoric; avatars; embodied conversational agents; deictics; proxemics; symbolic interaction; gestures. This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research - Virtual Chironomia

Virtual Chironomia: Developing Non-verbal Communication Standards in Virtual Worlds
By Gustav Verhulsdonck, New Mexico State University, and Jacquelyn Ford Morie, University of Southern California

Virtual worlds represent a burgeoning area for exploring new forms of social interaction, work, leisure, and play. Myriad virtual worlds are currently being implemented on various computing and mobile devices. Such worlds can be compared to early, pre-industrial societies in which artisans, scientists, and various other strata of civilization met and connected in ways that encouraged cross-fertilization (Ikegami & Hut, 2008). As principles of social commerce and creativity emerge in these environments, and as their social collateral increases, virtual worlds may increasingly be used for various business, education and entertainment purposes (Churchill et al., 2001). This new virtual public sphere presents opportunities for enhanced social interaction. In so doing, virtual worlds may “remediate” our communication practices through transformed social interaction via avatars that permit us to augment certain elements of self-presentation through avatar-to-avatar communication and interaction (Bailenson, 2006; Bailenson & Beall, 2006; Bolter & Grusin, 2000; Meadows, 2008; Taylor, 2006; Yee et al., 2007a; Yee et al., 2007b; Yee et al., 2007c). For instance, realizing the global potential of these shared environments, IBM is currently focusing on developing standards designed to help effectively mediate business meetings and provide ways of facilitating group communication and decision-making in virtual worlds. Because virtual worlds are already used as intercultural work environments, it is important to study the use of non-verbal elements in online interactions. We argue there is a need for developing standards for online communication so that understanding is enhanced within the virtual world group social dynamics afforded by avatar interaction.
In conjunction with the other communication modalities of text and voice, virtual worlds such as Second Life provide inhabitants with several default gestures that may be used as non-verbal communication elements. Yet the provided gestures are useful more for their novelty or entertainment value than as specific communication tools. A common example is a popular song phrase coupled with animations, or a wild and crazy dance step. For virtual worlds to be used in business or professional contexts, the use of more normative and expressive gestures will become increasingly important to functional interactions as these environments evolve and more people adapt their communicative behavior to virtual worlds.

The Function of Non-verbal Elements in Human Communication and Interaction

Non-verbal communication is an inherent trait utilized in subtle manners during human-to-human communication and interaction. We shrug our shoulders, we raise our hand to signal that we want to ask a question, we turn our eyes to someone we want to address or to show we are paying attention to them. These types of non-verbal communication are second nature and so ingrained in our communicative behavior that we do not even think about them. Non-verbal communication complements verbal speech elements, modifies speech elements, or at times forms its own semantic unit (when, for instance, a “thumbs up” is given by someone outside of listening distance). Researchers have remarked that non-verbal communication plays an intrinsic role in human communication and interaction by mediating understanding and feedback through a variety of “back channels” such as facial expressions, eye gaze, hand and
arm gestures, and body language. In real-world contexts we emit various social cues naturally through our body language, eye gaze, facial expression, and hand and arm gestures (Sproull & Kiesler, 1992). In conversations, “facial displays and gesture add redundancy when the speech situation is noisy, give the listener cues about where in the conversation one is, and add information that is not conveyed by accompanying speech,” and so provide important information about the communication context (Cassell et al., 2001, p. 6). Beyond social cues, non-verbal communication may help to avoid ambiguity and provide feedback to those communicating. For example, nodding one's head and saying “uh-huh” signals understanding on behalf of the listener. The use of non-verbal communication can also facilitate “common ground” by allowing speakers and listeners to monitor and signal the extent to which understanding of a communication context is being shared (Clark & Brennan, 1991). This ability to emit non-verbal elements together with speech is so embedded in our communicative abilities that people are sometimes seen gesturing while talking to someone on the phone (Cassell et al., 2001; McNeill, 1992; Kendon, 1980; Manusov & Patterson, 2006). Indeed, research on non-verbal communication indicates that only 7% of a message is understood by verbal means, whereas 93% is conveyed through non-verbal means such as voice intonation and facial expression (Mehrabian, 1972). This is because, while communicating, people focus more on the context of the communication and less on the semantic content, using visual cues to make inferences about the context of the communication.
A distinction can be drawn between the formal properties of non-verbal language (sign language); less formalized non-verbal gestures (hand and arm gestures, interpersonal or proximal distance, body language, and facial expressions); and the more instinctual, subconscious displays of non-verbal communication (such as someone crossing their arms when they feel vulnerable). As such, while we may utilize non-verbal communication in real life to form impressions, at times we do not realize we are emitting such information and are unwittingly providing others with information about our emotional state, our attitude or our understanding of a particular context. A large percentage of our understanding in face-to-face contexts is based upon non-verbal communication. If virtual worlds are to develop into global workplaces and spaces for socializing or interacting, it will be necessary to develop greater functionality and standards for non-verbal communication in these environments. Increasingly realistic avatars can be animated in a lively manner, conveying meta-information about the communication process, emotion, behavior and attitude in various contexts (for a good overview, see Seif El-Nasr et al., 2009). The development of new media and its affordances may also encourage novel communicative behavior in humans as they adapt and evolve their communicative abilities to such environments.

Virtual Worlds as Communication Environments

While virtual worlds are promising as communication environments, non-verbal elements are currently in their infancy and largely depend upon: a) the constraints and design choices of various virtual world platforms, and b) the familiarity of users with the use of non-verbals in these virtual worlds. Because of this, it is important to explore avatar-based non-verbal communication functions in virtual worlds.


Given the importance of non-verbal communication in face-to-face communication, we see a need for developing better mechanisms for non-verbal communication in virtual worlds. In contrast to face-to-face communication, virtual worlds ask us to consciously perform these interactions through our avatar, though current means to do this are neither sophisticated nor particularly effective. In virtual worlds, a broad distinction can be made between rhetorical (intentional) and non-rhetorical (unintentional) non-verbal communication behavior. While the rhetorical use of non-verbal communication involves consciously selecting non-verbal communication to achieve an effect in one's audience, the non-rhetorical (unintentional) performance of an avatar sometimes results from a less evolved understanding of the use of an avatar, a lack of understanding of a context (e.g., failing to give feedback in a timely manner), or simply from responding with one's avatar in a way that is confusing to the other person. At times what we do not do with our avatar may cause confusion (for instance, not coming closer while talking to someone), as may being too close to someone (in which case the laws of proximity dictate that the other person may feel “crowded” and will move their avatar backwards). The use of avatars, in other words, requires a better understanding of how we use non-verbal communication in such contexts. Interactions which are clear in physical, face-to-face environments require extra effort in virtual worlds, where one must “perform” one's avatar, creating a different context in which virtual embodiment has consequences for human communication and interaction (Verhulsdonck, 2007; Morie & Verhulsdonck, 2008). Using a rhetorical understanding of virtual world interactions, we propose the need for developing non-verbal communication standards (i.e., eye gaze, facial expressions, proximal distance, hand and arm gestures, and so forth) in virtual contexts.
We believe non-verbal communication standards may become necessary as time spent in such worlds increases and their use expands from education to business and recreation. A standard framework for non-verbal behaviors can mitigate misinterpretations due to the idiosyncratic nature of diverse virtual worlds, platforms and affordances, and provide a shared structure for understanding.

Rhetoric and Non-verbal Communication

The study of rhetoric dates back to ancient Greco-Roman civilization, when rhetoricians like Aristotle and Cicero used rhetoric to teach orators how to address the assemblies in the Greek polis. The ancient discipline of rhetoric has long sought to include non-verbal communication in a system for effectively addressing groups of people through oratory. Cicero coined the term chironomia in his De Oratore (55 B.C.) for the study of non-verbal communication through the hand and arm gestures that accompany speech. Besides its communicative purposes and visual immediacy, non-verbal communication also plays a social role in human-to-human interaction. In his analysis of social interaction, the sociologist Erving Goffman coined the term symbolic interactionism – the way we use language and symbols to negotiate our identity – to describe how our interactions are largely dependent upon performances of the self (Goffman, 1958, 1963). Goffman uses the term “facework” to indicate how our identity – the perception others have of us as well as our perception of ourselves – is negotiated through a “pattern of verbal and non-verbal acts” while interacting with others or in groups (1967, p. 5). The negotiation of one's face rests upon assumptions about the tone of the conversation, impressions of the self, and the way we think others perceive us, to determine whether or not we have maintained “face” to others.
Likewise, we see an important function for the non-verbal performance of avatars in virtual contexts, as non-verbal communication lets us negotiate our identity through embodiment.


Journal of Virtual Worlds Research - Virtual Chironomia 7

Research in embodied conversational agents (ECAs) tends to support the idea that humans strongly invest their identity in the way they perform their avatar. Research on avatars and their usage has yielded some interesting results regarding how people behave through them, as well as the effects that existing as an avatar in a virtual world has on the person behind that avatar. For example, avatars that were more responsive in mimicking their human partner's behavior were rated more highly, an effect researchers have called the “Chameleon Effect” (Bailenson & Yee, 2005; Gratch et al., 2007). Further, inhabiting an avatar with highly regarded characteristics (such as attractiveness or height) has positive effects on the behaviors the inhabitant tends to exhibit in-world (termed the “Proteus Effect”), and such effects may also be carried back into the real world (Yee et al., 2007b). These studies point to the importance of avatar usage in virtual worlds: avatars are powerful social constructs that affect us both psychologically and physically. As virtual worlds mature, our avatars will play an increasingly important role in representing our identity to others.

Non-verbal Communication: Challenges and Opportunities for Virtual Worlds

In everyday interactions, non-verbal communication obviously plays an important role by providing groups of people with back-channel mechanisms for turn-taking, asking questions, or referring to objects. Non-verbal gestures such as raising a hand or turning to face someone are second nature to us in physical contexts and play an important role in grounding communication and establishing contexts. Researchers argue that speech and hand and arm gestures are intertwined and that gesturing, far from being ancillary or separate from verbal language, is actually an intrinsic part of face-to-face communicative processes that helps to decrease cognitive load by allowing speakers to replace elements of speech with gestures (Cassell et al., 2001; Goldin-Meadow, 2003; Kendon, 1980; McNeill, 1992). We transmit various (conscious or subconscious) signals regarding the context of our communication through embodied cues that are interpreted by our communication partner. Researchers distinguish kinds of non-verbal communication by their relation to our sensorimotor capabilities, with a distinction made between vocalic (intonation) and non-vocalic (body language) non-verbal communication. Mehrabian (1972) lists the following non-vocalic cues in common use:

• Oculesics: eye gaze, eye contact
• Deictics: pointing
• Gesticulation: hand and arm gesturing
• Proxemics: body distance
• Chronemics: time between interactions

In face-to-face contexts, many instances of unintended non-verbal communication take place, such as a subconscious display of emotion on our face or an unwanted movement of a leg or arm due to nervousness while communicating with others. In virtual worlds, there is less unintended non-verbal communication, as people must consciously animate their avatar. While Second Life provides a variety of looped “wait state” animations for avatars (so that they shift their body weight, look around, and appear to be breathing), other motions or actions must be executed through a menu choice, a typed command, or a pose selected from one's inventory. The available actions may not always be a good match for the desired effect.




So, in contrast to face-to-face contexts, virtual worlds feature intentional non-verbal communication: users must purposively select and execute non-verbal behavior in a rhetorical manner when animating their avatar. The intentional/unintentional distinction is important because the use of chosen gestures affects the encoding and decoding processes that take place between speaker and listener. The speaker encodes a message at the point of transmission, whereas the listener decodes it on reception. While a speaker may encode their speech in a particular manner, a listener may fail to decode the message in the same manner. Based on this distinction, we think non-verbal communication in virtual worlds will develop as an intentional “speech act” in which a speaker seeks to evoke a particular response or transmit specific semantic content (Austin, 1962). A common framework for non-verbal behaviors in virtual worlds must include both rhetorical acts (actions of choice) and those that are procedurally driven by the utterances or the psychological state of the avatar. Such a system should exhibit real-time responsiveness and a wide range of available attitudes and movements for the full complement of body and facial elements, yet it should also allow for evolutionary development. We argue that any developing standards should be open enough to allow for such evolution. They should also provide some overlap with real-world non-verbals but should not strictly emulate or mimic face-to-face interactions. An evolutionary perspective suggests that a medium affects and is affected by users adapting to its affordances and creating novel ways to communicate through them. This also means that users will bring their prior experiences with other media, such as text chat, to virtual worlds. As practices shape interaction, so do users shape the medium itself and the interactions that take place.
Developing standards requires understanding why and in what contexts people would use gestures. This calls for a rhetorical understanding of why people use gestures to perform communication and interaction with others through an avatar, and it challenges virtual world developers to pay closer attention to how these gestures are used. As virtual worlds emerge as important communication environments, convincing non-verbal communication is key to their effective use.

Conclusion

Virtual worlds present us with a dilemma. As a medium of communication, virtual worlds sit somewhere between text chat and face-to-face communication. While there are opportunities for embodied interaction and the feeling of sharing the same space, confusion may arise between users of virtual worlds due to widely varying communication affordances. Because people's adoption of virtual worlds as a communication platform will depend on their behavior in them, we argue that it will be important for non-verbal communication standards to evolve along with virtual world technology. While non-verbal elements such as proximity, eye gaze, and affect displays are usually unintentional (but very necessary) in face-to-face contexts, these elements, if they are to be used, must be performed in a rhetorical manner in virtual worlds. Designers of non-verbal communication in virtual worlds are therefore given the hard task of making the unintentional elements of communication intentional. Mechanisms for this are not easily designed, but as we argue, drawing on people's rhetorical understanding of communication may offer one way to start developing such standards.




Bibliography

Austin, J. L. (1962). How to Do Things with Words. Oxford: Clarendon.
Bailenson, J. N. (2006). Transformed social interaction in collaborative virtual environments. Digital Media: Transformations in Human Communication (P. Messaris & L. Humphreys, eds.). New York: Peter Lang, 255–264.
Bailenson, J. N. and Beall, A. C. (2006). Transformed social interaction: Exploring the digital plasticity of avatars. Avatars at Work and Play: Collaboration and Interaction in Shared Virtual Environments (R. Schroeder & A. Axelsson, eds.). London: Springer-Verlag, 1-16.
Bailenson, J. N. and Yee, N. (2005). Digital chameleons: Automatic assimilation of nonverbal gestures in immersive virtual environments. Psychological Science 16 (10): 814-819.
Bolter, J. D. and Grusin, R. (2000). Remediation: Understanding New Media. Cambridge: MIT Press.
Cassell, J., Sullivan, J., Prevost, S., and Churchill, E. (2001). Embodied Conversational Agents. Cambridge: MIT Press.
Cicero. (1948). De Oratore. Cambridge: Harvard University Press.
Churchill, E., Snowdon, D., and Munro, J. (2001). Collaborative virtual environments: Digital spaces and places for CSCW: An introduction. In Collaborative Virtual Environments: Digital Places and Spaces for Interaction. London: Springer, 3-17.
Clark, H., & Brennan, S. (1991). Grounding in communication. In L. Resnick, J. Levine and S. Teasley (Eds.), Perspectives on Socially Shared Cognition. Washington, DC: American Psychological Association, 127-149.
Goffman, E. (1958). The Presentation of Self in Everyday Life. Edinburgh: University of Edinburgh.
Goffman, E. (1963). Behavior in Public Places: Notes on the Social Organization of Gatherings. New York: The Free Press.
Goffman, E. (1967). Interaction Ritual. New York: Pantheon.
Goldin-Meadow, S. (2003). Hearing Gesture: How Our Hands Help Us Think. Cambridge: Harvard University Press.
Gratch, J., Wang, N., Okhmatovskaia, A., Lamothe, F., Morales, M., van der Werf, R. J., and Morency, L. (2007). Can virtual humans be more engaging than real ones? Human-Computer Interaction (J. Jacko, ed.). Berlin: Springer-Verlag, 286–297.
Ikegami, E. and Hut, P. (2008). Avatars are for real: Virtual communities and public spheres. Journal of Virtual Worlds Research 1(1). Available from: http://journals.tdl.org/jvwr/article/view/288.
Kendon, A. (1980). Gesticulation and speech: Two aspects of the process. The Relation Between Verbal and Non-verbal Communication (M. R. Key, ed.). The Hague: Mouton.
Knapp, M. (1980). Essentials of Nonverbal Communication. London: Harcourt School.




Manusov, V. and Patterson, M. L. (2006). The SAGE Handbook of Nonverbal Communication. Thousand Oaks: Sage.
Meadows, M. S. (2008). I, Avatar: The Culture and Consequences of Having a Second Life. Berkeley: New Riders Press.
Mehrabian, A. (1972). Nonverbal Communication. Chicago: Aldine-Atherton.
McNeill, D. (1992). Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
Morie, J. F. and Verhulsdonck, G. (2008). Body/persona/action!: Emerging non-anthropomorphic communication and interaction in virtual worlds. Proceedings of the International Conference on Advances in Computer Entertainment Technology ACE 2008 (M. Inakage & A. D. Cheok, eds.). New York: ACM Press, 365-372. Available from: http://doi.acm.org/10.1145/1501750.1501837.
Sproull, L. and Kiesler, S. (1992). Connections: New Ways of Working in the Networked Organization. Cambridge: MIT Press.
Seif El-Nasr, M., Bishko, L., Zammitto, V., Nixon, M., Vasilakos, T., and Wei, H. (2009). Believable characters. Handbook of Digital Media in Entertainment and Arts (B. Furht, ed.). London: Springer.
Taylor, T. L. (2006). Play Between Worlds: Exploring Online Game Culture. Cambridge: MIT Press.
Verhulsdonck, G. (2007). Issues of designing gestures into online interactions: Implications for communicating in virtual environments. Proceedings of SIGDOC 2007: Design of Communication. New York: ACM Press, 26-33. Available from: http://doi.acm.org/10.1145/1297144.1297151.
Yee, N., Bailenson, J., and Rickertsen, K. (2007a). A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. Proceedings of the 2007 SIGCHI Conference on Human Factors in Computing Systems. New York: ACM Press, 1-10.
Yee, N. and Bailenson, J. (2007b). The Proteus effect: The effect of transformed self-representation on behavior. Human Communication Research 33(3): 271-290.
Yee, N., Bailenson, J., Urbanek, M., Chang, F., and Merget, D. (2007c). The unbearable likeness of being digital: The persistence of nonverbal social norms in online virtual environments. CyberPsychology and Behavior 10(1): 115-121.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Another Endless November: AOL, WoW, and the Corporatization of a Niche Market Ray op'tLand, University of Calgary

Abstract

The entrance of World of Warcraft (WoW) into the massively multiplayer online role-playing game (MMO) market has drastically altered conceptions of how popular a virtual world can be. Currently servicing over 12 million monthly subscribers (Woodcock, 2008), it has vastly exceeded expectations and has brought more new users to persistent virtual worlds than any other product before it. However, while there has been much academic work exploring developments within the game itself (Bainbridge, 2007; Duchenault, et al., 2006; Castronova, 2007), the processes by which this explosive growth occurred have been under-explored. The growth of World of Warcraft relative to the MMO market can only be explained via the extrinsic characteristics of the game and how these characteristics interact with processes of standardization and diversification relative to the market as a whole. In this paper, I propose that the process that enabled WoW to rise to its current position as market leader amongst MMOs is remarkably similar to that employed by America Online (AOL) in the early 1990s, and that the growth of both firms is evidence of the standardizing influence that a globalizing process such as McDonaldization has when it enters a niche market. The parallels that may be drawn between these cases may be instructive in understanding the future growth of MMOs and other virtual environments. I will examine the history of the two firms to find evidence of commonalities between them. I will also outline the parallel corporatist models of McDonaldization and Disneyization as proposed by Ritzer (2000) and Bryman (2004). The process by which these firms grew to dominate their spheres will be examined in this context. I will conclude with an examination of what this growth may mean for the future of the MMO industry.

Keywords: MMORPG; AOL; World of Warcraft; WoW; Disneyization; McDonaldization. This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- AOL, WoW, and the Corporatization of a Niche Market 4

Another Endless November: AOL, WoW, and the Corporatization of a Niche Market Ray op'tLand, University of Calgary

The market for virtual worlds has never been larger than it is right now. Driving this growth has been the proliferation of a particular kind of virtual world, the massively multiplayer online role-playing game (MMO), and the large base of subscribers such games command. Prior to 2004, MMOs were a popular gaming product, yet they still occupied a relatively small niche within the larger digital games market. What changed in 2004 was the entrance of Blizzard Entertainment's World of Warcraft (WoW) into the MMO market, and the subsequent paradigm shift that occurred within the industry. WoW currently services over 12 million monthly subscribers (Woodcock, 2008) and has brought more new users to persistent virtual worlds than any other product before it. While there has been much academic work exploring developments intrinsic to WoW itself (i.e., Duchenault, et al., 2006; Bainbridge, 2007; Castronova, 2007), the processes by which this explosive growth occurred have been under-explored. As WoW does not differ significantly from other MMOs with respect to the content provided, it appears that intrinsic characteristics may be insufficient to explain its rise. The growth of World of Warcraft into the primary choice of consumers in the MMO market has parallels in the rise of other internet-enabled firms, and the growth curve of WoW appears remarkably similar to that experienced by America Online in the early 1990s. An examination of the historical situation surrounding each product will be conducted, followed by an examination of each service as a consumer product, in order to understand how the extrinsic characteristics of a service can contribute to its becoming a market leader. The parallel models of consumption, McDonaldization and Disneyization, as proposed by Ritzer (1993) and Bryman (2004) respectively, will form the basis for this comparison.
The stages by which each firm grew to dominate its sphere will be proposed, showing how in each case the rise of the new model represented a paradigm shift for the old industry. What this paradigm shift may mean for the future of the MMO industry and virtual worlds will then be explored. The question is how the degree to which World of Warcraft embodied the processes of a consumerist model such as McDonaldization differentiated it from its competitors, and whether this was sufficient to create the kind of competitive advantage that could achieve such explosive growth.

World of Warcraft: exodus to the virtual

With the announcement of the development of World of Warcraft in 2001, Blizzard Entertainment entered a market dominated by a number of established firms, including corporate behemoths Sony Online Entertainment (SOE, whose titles included Everquest and Everquest 2) and Electronic Arts (EA, Ultima Online), as well as industry veterans Mythic Entertainment (Dark Age of Camelot) and Turbine (Asheron's Call) (Woodcock, 2008). The above titles all had a fantasy theme, which was the most competitive and highly contested segment of the MMO market. The majority were 3D-visualized virtual worlds, providing the user with a first- or third-person perspective. Non-fantasy alternatives did exist on the market at that time, including several still active to date, such as Cryptic Studios' City of Heroes, SOE's Star Wars Galaxies, and Eve Online, produced by Iceland's CCP hf. (Woodcock, 2008). While each of these virtual worlds was able to carve out a respectable niche, none of them came close to the market share commanded by the fantasy games (Bartle, 2003).


Journal of Virtual Worlds Research- AOL, WoW, and the Corporatization of a Niche Market 5

At the time of WoW's launch, the top titles in the North American MMO market were:
• Everquest (EQ): still the largest single MMO in the North American market, with a paid subscriber base slightly below 500,000 users a month (Woodcock, 2008). EQ was a fantasy game that added the graphical features of a 3D game to a multi-user dungeon (MUD) design inspired by the DikuMUD code-base (Bartle, 2003). It had been the de facto standard in the MMO market since shortly after its launch, and was the benchmark for the industry.
• Final Fantasy XI (FFXI): from Japanese developer Square-Enix, FFXI also held a sizable userbase, at or around the half-million mark, though some of this was located within the Asian market, which was problematic for reporting purposes. FFXI had been running since 2002, and utilized the traditional 3D perspective of most games in the genre.
• Ultima Online (UO): a slightly older game that used a top-down, isometric perspective. Its sandbox style of play was quite different from other MMOs available on the market, and it maintained a steady subscriber base of 175,000 users.
• Dark Age of Camelot (DAoC): shared the 3D perspective of EQ, but provided a realm-vs-realm structure for player-vs-player battles. It maintained 250,000 subscribers at the time.
• Everquest 2 (EQ2): SOE's sequel to their flagship game, launched just two weeks prior to the release of WoW and growing rapidly, peaking at over 300,000 users within two months. While it did differentiate itself from its predecessor with new innovations, it also split the user base between two versions of the same product, often a recipe for disaster in the computer industry (Chapman, 2006).
• Star Wars Galaxies (SWG): the only non-fantasy MMO with over 300,000 users, SWG was another SOE property, and at the time it was wrestling with post-launch gameplay changes that drastically altered how players interacted with the virtual world.
When looking at subscriber figures, the Korean games Lineage and Lineage II enjoyed higher numbers than their North American counterparts (Woodcock, 2008). However, there may be alternative explanations for their rise (van Rijswijk, 2008). As Raph Koster (2006) details, direct comparisons between the North American and Asian markets are problematic due to the differing metrics used for reporting subscriber numbers. For this reason, the focus will remain on the subscriber figures for North America. Each new entrant to the MMO market brought a number of incremental innovations with it. The features one company promoted to distinguish itself from the competition tended to be adopted by its competitors (van Rijswijk, 2008). As a result, a fair degree of homogeneity existed between the various MMOs, as they asymptotically approached an idealized mean. In this regard, MMOs were little different from other software industries (Chapman, 2006). Blizzard did not release WoW to the public until November 2004, after a long and well-publicized beta-testing period during which some details were leaked to the public. The launch also happened to coincide with Black Friday - the post-Thanksgiving shopping rush in the United States - and Blizzard was able to capitalize on the eager shoppers. At this time the domestic North American market was dominated by the offerings of Sony Online Entertainment (SOE), which held three of the top six games (Woodcock, 2008). Within four months of the launch date, Blizzard had weathered World of Warcraft's release and the flood of new users during the




holiday season. Despite a few technical difficulties, such as the loot-table lock-up (client programs freezing due to massive numbers of players accessing certain files from the server at the same time) and an item duplication bug (slashdot.org, 2005), both of which were handled quickly, WoW now found itself with a user base equal to that of Everquest (Woodcock, 2008). But while EQ had peaked at the 500,000-user plateau several years earlier, WoW's growth continued unchecked, and this growth did not initially come at the expense of existing titles in the market: EQ's subscriber numbers did not begin to decline until the following year (Woodcock, 2008). WoW's growth represented Blizzard Entertainment bringing substantial numbers of new users into the MMO market who would not otherwise have participated in it. Other firms have accomplished a similar feat, though not within the MMO sphere: other internet-enabled companies showed similar growth patterns during the dot.com era of the previous decade.

America Online: dot.com poster child

While the recent history of America Online (AOL) has been well documented by authors such as Swisher (1998) and Stauffer (2000), much of the early history of the corporation has disappeared from the public consciousness. Briefly, the company began as Control Video Corporation in 1982, developing a system that would deliver games over telephone lines for the Atari 2600 videogame console. Following the collapse of the console market in 1984 (Wolf, 2008), the company spent the rest of the decade as Quantum Link Corporation, providing online services for various computer companies including Commodore, Apple, and Microsoft (Swisher, 1998; Stauffer, 2000). Despite turmoil in the market, the company managed to survive by being flexible enough to adapt to any platform, allowing it to thrive where larger competitors, reliant on the fortunes of a particular vendor, failed (Chapman, 2006). The beginning of the 1990s saw the company re-brand itself as America Online and finish its transition to the personal computer market dominated by Microsoft's DOS operating system. It was in this landscape that the modern history of the company began, a landscape already dominated by university campuses, private corporate services, and established firms such as CompuServe and Prodigy (Abbate, 1999). AOL needed to adapt quickly, and soon found a niche within which it could thrive by beginning an aggressive marketing campaign consisting of the mass mail-out of floppy disks containing the software that would connect users to AOL's services (Stauffer, 2000). Unable to compete on technical merits alone, AOL innovated in a social dimension by flooding the market with its floppy disks, using novel distribution channels to reach non-traditional computer users and create a ubiquitous brand presence.
Shortly after the launch of the floppy campaign, AOL began to overtake its competitors, largely due to the double-digit growth the company maintained throughout the 1990s, ranging anywhere from 36% to as high as 197% year-over-year (Stauffer, 2000). By 1997 AOL was buying out its competitors outright, and the market for dial-up internet service was all but locked up (Swisher, 2003). This growth continued until the end of the dot.com boom in 2001, just months after AOL's purchase of Time-Warner, Inc.




America Online and World of Warcraft: comparative characteristics

Based on the previous historical accounts of their rapid growth, a number of common characteristics appear to be shared by the two firms. These include the obvious - such as the online nature of the service they provided, the firms' size relative to their respective markets, and the short time it took for market dominance to be achieved - and those not so readily apparent, such as their business models and the marketing and distribution practices that allowed them to be innovators in their markets. Both firms had a very similar business model, which can essentially be reduced to providing access to content for a low, recurring monthly charge. The nature of that content is irrelevant to the comparison; rather, it was the delivery mechanism they had in common, creating value from the provision of a service on the internet. Secondly, both AOL and WoW were an order of magnitude larger than their nearest competitor in number of monthly subscribers (Stauffer, 2000; Woodcock, 2008). This is the most obvious indicator of their market dominance. Other metrics, such as percentage share of the market, number of concurrent users, or revenue per user, are more problematic when comparing different MMOs (Koster, 2006). Finally, each of the firms went from new entrant to dominant leader in its market within less than five years. A more thorough survey of explosive firm growth may indicate whether this short period is typical, but at least one other example - that of RCA Victor in the 1920s (Cassidy, 2000) - suggests that this may be the case. Of the more subtle mechanisms, two similarities stand out. The first is the common price point of under $20 per month.
At this price, the firms were able to position themselves as offering the consumer a high return for the cost, with their unlimited plans comparing favourably against other leisure activities, such as concerts, movie-going or dining out. The second is that it was the non-technical innovations each company employed, such as the use of mass mail-outs or the leveraging of existing social ties and network effects, that contributed to their rapid growth, and not any technological advantage relative to their competitors. The processes by which the firms grew to prominence must therefore be explored.

Processes

The majority of the mechanisms which AOL and WoW share are not intrinsic to a virtual world, but rather are extrinsic properties of the service as a consumer product. Firms above a certain size with respect to their market have more in common with each other than with other firms in their market. Evaluating the products of these firms as integrated services in the modern economy requires a framework that can identify these extrinsic characteristics and the extent to which they have been realized. To this end, George Ritzer's 1993 work The McDonaldization of Society provides such a framework. Ritzer's McDonaldization uses a Weberian framework of control to break down the dimensions along which McDonald's rose to prominence in the mid- to late 20th century. While his original work concentrated on aspects of the 'brick-and-mortar' business, little attention was paid to online industries, as the market for them had not yet developed at the original time of publication. Ritzer did revisit his framework to look at online industries in a subsequent work (Ritzer, 2001), and he noted the unique changes that McDonaldization undergoes when 'simulated' or virtual sites are brought into the equation (p. 150).
However, Ritzer was focused on e-commerce sites such as Amazon.com, 'virtual malls' in cyberspace that 'dematerialized' the process of consumption - most games and MMOs were not recognized as consumption sites, so



the particular characteristics of virtual worlds that are displayed by MMOs went unremarked. Alan Bryman has extended Ritzer's framework with his “Disneyization” hypothesis, which addresses some of these issues (Bryman, 2004), proposing a process that works in parallel to Ritzer's McDonaldization. The existence of one does not necessarily exclude the other, as both processes may work in concert within the same corporation at the same time, achieving similar goals of control through different means (p. 161). While there is some overlap in which sites of consumption are studied, particularly with respect to Ritzer's later work, the two processes attempt to explain different things: McDonaldization concerns the production of goods for consumption via processes of homogenization, while Disneyization concerns the staging of those goods via heterogeneous processes (p. 158). These twin dynamics work in concert to provide a complete picture of the dynamics of consumption in modern society. Bryman also explains his desire to leave the door open for other large-scale processes, and it is in this spirit that Disneyization is utilized here.

McDonaldization

'McDonaldization' is the term George Ritzer uses to refer to the corporatist globalizing process “by which the principles of the fast-food restaurant are coming to dominate more and more sectors of American society as well as of the rest of the world” (Ritzer, 2000, p. 1). Since the first publication of the book in 1993, McDonaldization has at times been treated as synonymous with the process of globalization, and has often become reified and used as a term of derision without the root explanations behind it necessarily being understood. McDonaldization is fundamentally a Weberian approach to the social aspects of globalization, adapting and extending Weber's core concepts to account for the changes of the post-modern age.
For Ritzer, it is the process by which McDonaldization takes place that is important, and this occurs along four main dimensions: efficiency, calculability, predictability, and control (through non-human technology). The extent to which any given firm may be said to be “McDonaldized” is determined by how far it is optimized along these dimensions (p. 19), but McDonald's is by no means the end-point on any of them. While McDonald's is still king of the fast-food marketplace despite competitors such as Subway, modern exemplars have taken the process to new extremes in other markets. Wal-Mart, the number one retailer in the world (Wiley, 2009), is the most obvious example of a modern McDonaldized corporation (Ritzer, 2001, p. 131). At its core, McDonaldization is a process of rationalization. The four dimensions of this process are applied in order to homogenize a firm's business methods in a standardized manner. These dimensions are contextual: the values that one may attach to a concept like predictability may differ depending on whether one is a customer, an employee, or a manager. But there are also trade-offs that occur during the process: when one optimizes for any one dimension (such as efficiency), one often must forgo other concerns. Thus Herbert Simon's concept of "satisficing" is often used in conjunction with elements of McDonaldization. The dimension of efficiency is the optimum way to achieve a goal. As Ritzer outlines, this is a pathing solution, the way to best get from point A to point B (Ritzer, 2000, p. 12). But this is state-based pathing, and not necessarily geographically related: the customer going from the state "hungry" to the state "full" in the optimum way is an example Ritzer gives of where McDonald's excels (p. 12).
Journal of Virtual Worlds Research - AOL, WoW, and the Corporatization of a Niche Market

One of the chief ways streamlining occurs is via a Taylorist model of scientific management, whereby processes are broken down into their component parts in order to "choose the optimum means to a given end" for each part of the process (p. 42). Simplification of the product occurs by cutting down the number of options available, or by limiting the choices available with which to interact with the product (p. 55). Efficiency can also be gained by offloading labour onto the customers where the potential exists; Ritzer cites self-service of all manner of varieties as an example (p. 58).

Calculability is the "emphasis on the quantitative aspects of the products sold and services offered" (p. 12). It is the implementation of the Stalinist aphorism "quantity has a quality all its own", and can be evident not just in the amount of product that a consumer receives, but also in the time and costs associated with the good (p. 13). Calculability occurs in two dimensions, the speed of the process and the quantity of the output (p. 62), and Ritzer notes that increasing either of these tends to inversely affect quality. Aesthetic concerns become a secondary factor when calculability is optimized (p. 71).

Predictability is the assurance that a good will be the same irrespective of time and place. A customer ordering a Big Mac has a very good idea of what they're going to get (p. 13). By creating a predictable setting via systemic standardization (p. 84), predictability allows an aspect of trust to develop between the consumer and the firm. By scripting the interactions that occur with customers, the responses that employees are allowed to have, and the products that should result from the process, the customer, firm, and employee can all mitigate risk by dissociating themselves from the "process". The 'system' becomes externalized.

Control is the last dimension of Ritzer's model, and the one where his Weberian influences show through most. Control is achieved via the implementation of non-human technology (p. 14). This is a return to the Taylorist ideals mentioned above, and it works part and parcel with all three previous dimensions.
Technological means are used to control the customers, the process, and the product, and those things that can't directly be controlled are minimized as much as possible (p. 110). The main process that can't be controlled is speech, and in his later work Ritzer covers some of the technologies released since 1993 that may be able to mitigate even that (Ritzer, 2001). Ultimately, control is a rationalizing process that allows those in management to claim helplessness in the face of the situation, and both workers and customers to submit to that control in order to obtain the benefits they may enjoy from the other three dimensions. This rationalizing process is the goal of any McDonaldized system. Rationalization is a homogenizing force (Bryman, 2004), leveling distinctions within and between the various dimensions.

Disneyization

Alan Bryman's Disneyization (2004) is a derivative of Ritzer's model, outlining a systemic process that works in parallel to Ritzer's own. The parallel model was needed to explain some of the "changes that are occurring in the service and consumption spheres of modern society" (p. 158). Disneyization becomes necessary because McDonaldization "sits uneasily in an increasingly post-Fordist era of choice" (p. 159). Disney stakes its claim in the service sphere of modern society that McDonald's doesn't address.

Much like Ritzer's, Bryman's process also occurs across four dimensions: theming, hybrid consumption, merchandising, and performative labour. Unlike Ritzer's, these dimensions are not categorically distinct, with some overlap between them. Bryman also abandons Ritzer's concern with ensuring the dimensions of the process apply to all actors in the system equally, and instead focuses on a narrow set of participants involved in the process.
The first dimension, theming, "consists of the application of a narrative to institutions or locations" that is typically sourced "external to the institution or object to which it is applied" (Bryman, 2004, p. 15). As Bryman notes, the evidence that theming occurs within Disneyland is quite obvious; how theming actually occurs requires a little more detail. Bryman builds his evidence for the sources of theming from the typologies of two works, those of Gottdiener (2001) and Schmitt and Simonson (1997). Gottdiener offered a list of nine "themes" divided amongst geographical locations and social constructions: the Wild West, Arabian fantasy, and tropical paradises coexist with nostalgia, class, and modernism. Bryman is critical of Gottdiener for a number of exclusions from the list (p. 17), and he contrasts this typology with Schmitt and Simonson's list of 'cultural domains' that provide inspiration for themes. Bryman uses these domains to synthesize a list of twelve thematic sources, including Place, Time, Sport, Music, Cinema, Literature, and the Natural World (p. 18). Bryman also includes the company brand and logo as a theme, and while I will not deny these can be prominent in a given Disneyized product, they are more relevant to the dimensions of merchandising and hybrid consumption that follow.

Hybrid consumption is the blending of different modes of consumption by tying them to the environment within which these consumption activities take place (p. 57). It is built on a destination model of retail design, wherein the customer is enticed into an environment and allowed to linger on the presumption that the longer they stay, the more likely they are to break down and begin purchasing. As such, it is strongly linked to Bryman's third dimension of merchandising (p. 81), as firms attempt to capitalize on synergistic effects of the marketplace by horizontally extending their potential revenue streams.
This dimension of merchandising is 'the promotion of goods in the form of or bearing copyright images and logos' (p. 79), and is a specialized form of the generalized franchising process, a distinction that would appear to be more relevant to Ritzer's McDonaldization than here. For Bryman, merchandising is a means by which a firm may maximize the revenue from any one particular piece of intellectual property.

Performative labour is Bryman's final category, and its label may cause some confusion, particularly to those familiar with the works of J. L. Austin or Judith Butler. What Bryman means by 'performative labour' is really Hochschild's concept of 'emotional labour' (1983), in which the employee of an organization "needs to convey emotions and preferably appear as though those associations are deeply held" during the general course of their work for the firm (Bryman, 2004, p. 104). Bryman spends some time detailing the various ways in which this emotional labour may be conducted in modern consumer settings, including the hospitality industry, modern retailers, airlines, and even McDonald's itself. Bryman notes Hochschild's belief that emotional labour is a net negative for both the workers and the customers, but he considers this irrelevant to the larger process at hand.
Critique of Disneyization

While at first glance the model of Disneyization that Bryman proposes appears to provide a solid framework by which to analyze a prospective corporation or brand, it lacks several of the qualities that Ritzer's original formulation provided: depth, generalizability, and completeness.

With respect to depth, where Ritzer's work looked at the effect of his process on all actors involved in the system - including the perspectives of the consumer, the worker, and management - this multiplicity of viewpoints is largely absent from Bryman's work. While Bryman provides examples of where each dimension may take place, these are indicative of the breadth of the industries across which the process of Disneyization is deployed, rather than of how deeply any single given industry may be Disneyized.

The second missing quality is generalizability. The dimensions given by Bryman are narrowly defined and tightly constrained, to the degree that the utility of his model is limited to explaining the particular thing (Disney) that Bryman wants to explain, and little else. Additional work is required to employ Disneyization as a model outside of its original context.

Disneyization's final missing quality is completeness, and this is most noticeable in the categories of hybrid consumption and performative labour, where the dimension as defined by Bryman is quite different from what the label would suggest. In the first case, what is presented as 'hybrid' appears to be simply 'geographically bound' and 'horizontally integrated', neither of which is particularly novel. In the case of performativity, a switch occurs as the reader is presented instead with 'emotional labour', which, while important, is only a small component of the larger concept of performativity and not representative of the theory as a whole. The strength of Bryman's model is that he still managed to capture what was occurring within Disneyland.
Both consumption and performativity are dimensions that should be part of the process of Disneyization; it is just that the particular definitions used need to be more inclusive and generalized, and this may be accomplished by engaging the relevant literature underlying the assumptions made for both these dimensions.

Extending Disneyization

The best way to reconcile Bryman's process and bring depth and generalizability to Disneyization is to reject the narrowly defined notions of 'hybrid consumption' and 'performative labour' and fully embrace the larger literatures of both consumption and performativity. For the first of these, Holt's typology of consumption practices (1995) provides a useful model. Dividing consumption practices by both the purpose of the consumption and the structure in which it occurs, Holt allows various consumptive activities to be situated relative to one another in a typology, wherein Bryman's 'hybrid consumption' is an instance of Holt's 'consuming as experience'. Metaphoric and symbolic associations are utilized during the consumptive act to allow sense-making to be conducted by the consumer. Holt's other forms of consumption - integration, play, and classification - can extend 'hybrid consumption' beyond the narrow definition that Bryman provided.

With respect to performativity, Loxley (2007) builds on the work of Austin, Butler, and Goffman, and notes that what is considered "performative [...] appears to focus a valuable but not too difficult idea, detachable from the circumstances of its formulation without significant loss and usefully applicable to a wide range of differing intellectual challenges or problems" (p. 2). It is for this reason that the broad definition of performative is appended to Bryman's restricted one. Citing Austin, Loxley notes that performative means that "words do something in the world", where "linguistic acts don't simply reflect a world but that speech actually has the power to make a world" (p. 2, emphasis in original). Loxley also notes the ties performativity has to Goffman's work, in which roles "form the basis for the ways in which we navigate round our social world" (p. 151).

Evaluating AOL

An analysis of AOL reveals that it best exemplifies Ritzer's concept of efficiency in the relative simplicity of the interface provided: a limited selection of choices available to the user upon launching the service, presented with large, easily delineated 'buttons'. The use of experienced users as 'guides' (Stauffer, 2000) provided the new user with a point of contact when faced with the unfamiliar software.

The AOL software was also very predictable from the standpoint of the customer. Outside of major version changes, the user was presented with the same layout and experience each time the program was started. The only unpredictable component of using AOL was the connectivity issues that plagued the company early in its history, when demand exceeded capacity and the firm was struggling to keep up with explosive growth (Stauffer, 2000). Though the user may find something 'unique' online during any given session, this does not count against the predictability of the software and service itself.

The calculability of AOL can be seen in the flat-rate monthly fee for the service, adopted in 1996 due to market pressure from competitors. The customer was able to know ahead of time how much the service would cost, and the earlier per-minute and per-hour charges that made budgeting an issue were no longer required. Moreover, apart from localized versions, the service was the same regardless of where geographically it was used.
Aspects of control via technology that manifested in AOL include controls usable by the customer, such as privacy settings, parental block lists to safeguard children's surfing, and e-mail filtering programs. Control was also a tool available to AOL as a corporation, including IP address logging, usage monitoring, and restrictions on the availability of certain newsgroups (Stauffer, 2000); indeed, the information that AOL was able to gather by data-mining its customers' usage statistics was one of the factors leading to its high stock valuation (Swisher, 2003).

AOL minimized the number of options available to its customers, but they still existed to some degree. Theming was relatively minimal, aside from look-and-feel distinctions between different areas of the software. The most prevalent aspect of theme was that of the AOL brand itself, which Bryman considers its own separate category. Given how inextricably linked this is with the consumption and merchandising efforts, it is best to deal with all three at once. Hybrid consumption existed primarily through treating AOL as a virtual 'site' or destination (Benedikt, 1991) through which other consumptive activities could take place, such as shopping and accessing media content. Merchandising was available, but these branded products tended towards 'logoware' and other ancillary products. The most common AOL merchandise was the installation floppies and CDs that the company used for promotional purposes (Stauffer, 2000).

Performative labour occurred in the customer service area, but it was the expanded notion of performativity that was available in abundance: the AOL customer could exercise a wide variety of options to construct an online identity (Turkle, 1995), from the choice of username, to participation in official online virtual realms such as Neverwinter Nights, to AOL's unlimited access to Usenet and its own internal forums (Raymond, 1996).
Evaluating WoW

Applying Ritzer's and Bryman's processes to World of Warcraft, we see a quite different pattern emerge. This is due in part to the differences of MMOs relative to the e-commerce sites that Ritzer (2001, p. 150) and Bryman (2004) focused on, and in part to the differences between WoW and other titles in the MMO market.

Beginning with Ritzer's dimension of efficiency, there are a number of intentional design choices which contributed to the streamlined feel of the game. These include the use of a single dual-use disc for distribution, servicing both the PC and Mac user-base; by 2004, the Macintosh represented a sizable fraction of the computer market, especially amongst home users and college students. The low system requirements relative to its direct competitors - where the trend was towards larger, more graphics-intensive games, as seen with its contemporary EQ2 - allowed Blizzard to target the largest potential market. Ritzer's notion of limiting choices comes up again in the race and class selection, as the 8 races and 9 classes available at launch were significantly fewer than those of their nearest competitors (EQ2 had launched two weeks prior with 14 races and 24 classes). Ritzer's notion of "putting customers to work" (Ritzer, 2000, p. 44) doesn't involve busing tables here, but can be seen in two ways. Foremost among these is the guide program, in which volunteers provide low-level service jobs, such as tier 1 tech support, for the MMO company. Additional forms of labour can be seen in the guild system itself (Ducheneaut et al., 2006; Silverman and Simon, 2009): Blizzard offloads a large amount of human resource management as guilds self-organize toward the aim of completing the content that is provided by the developers. Any attempt to quantify the real-world value of this virtual labour can be problematic at best, for a variety of reasons (Castronova, 2007).

WoW is also McDonaldized in Ritzer's dimension of calculability.
Chief among these is the extrinsic variable of the fixed monthly cost. A player of WoW is aware up-front of exactly how much usage of the service they have available; in this respect, WoW is no different from the other major MMO titles at the time of its release. Intrinsically, the game is quite explicit about the information available to the user. In-game items provide DPS (damage-per-second) numbers for every weapon in the game, in-game quests are undertaken with full knowledge of the reward available for completion, and integrated quest tools created by third-party publishers can trivialize their undertaking. WoW also diverged from other 3D games by providing a real-time in-game mini-map, previously seen in isometric games such as UO, and floating icons representing the status of NPC agents relative to the user. This in-game knowledge reporting represented a significant shift from the trend in the industry since its inception, whereby much of the detail of the game was obfuscated from the user, whether for reasons of verisimilitude or competitive balance. The net effect of this hidden information was to increase the learning curve required to play a game, and this time investment made such games unattractive to large segments of the digital game market.

There is one manner in which WoW directly contradicts Ritzer's calculability: aesthetics are not taken for granted within the world of Azeroth. There is a very strong and consistent style that carries over into all aspects of the game, from the user interface to the in-game artwork. Aesthetics are such a strong component of the WoW experience that the degree to which WoW is McDonaldized in this dimension is minimal.

The dimension of predictability is also manifest within WoW. The extent to which a player is able to optimize for certain occurrences within the game, making allowances for small degrees of random variation, speaks to the high degree of predictability in-game.
One of Ritzer's key components of predictability is the presence of scripted interactions, and while these are used heavily within WoW, this is true of the entirety of the MMO market. Automated agents within the game are the primary way story, plot, and quest information is distributed to the customers. These non-player characters (NPCs) give out quests; provide information, clues, and rewards; and provide essential in-game services such as travel, food, item repair, and skill advancement. All are entirely automated, all act without the need for human oversight (beyond the original programming), and none can be made by the players to deviate from the script. (Blizzard allowing NPCs to be targeted and slain within the world PvP that exists in WoW is one of the few ways that one can go outside the script, and even this is achievable only under a narrowly constrained set of conditions.) This automation allows the MMO genre to achieve in the virtual a level of control over their agents' scripted interactions that far exceeds anything achievable in reality by a McDonaldized company. As Ritzer notes: "In fact, such sites [cyberspaces] represent something approaching the ultimate in McDonaldization" (2001, p. 149).

This 'ultimate McDonaldization' is realized in the degree of control that manifests in the scripted interactions that players have within the game itself. Foremost of these is the reporting function within the game, by which a player can report objectionable language, behaviour, or conduct. Blizzard again offloads labour to the customer, this time the labour of social control, turning the entirety of WoW into a surveillance society where non-normative actions may be rationalized, or excised completely (Silverman & Simon, 2009). Here again, WoW does not diverge significantly from the rest of the MMO market.

Turning now to the modified dimensions of Bryman's Disneyization, the dimension of theming is evident everywhere within WoW.
While WoW followed the lead of Asheron's Call in moving to a seamless virtual world uninterrupted by zoning or loading screens (save during specialized fast-travel or personalized pocket dungeons, or 'instances'), the visual and auditory distinctiveness of each particular in-game area contributes to the strongly-themed feel that exists throughout the game. As with the control dimension above, WoW is not being singled out from the MMO market for special attention here, as a similar list could be provided for every major MMO. There is a high degree of similarity in the thematic elements implemented across various MMOs: themes such as the tropical paradise, the Wild West, the encroachment of technology, deserts or Arabian Nights, and elemental realms of fire, ice, and water are obvious in most of these games.

Why the prevalence of themes in MMOs? Bryman notes that "theming provides a veneer of meaning and symbolism to the objects to which it is applied." In a world where all interaction is symbolic, theming provides a method of interaction that is easily knowable to the consumer (Minsky, 1984). It also allows the designers of a virtual world a shortcut to providing the massive amounts of content required to occupy and retain their customers. In effect, theming becomes 'content for free'. This is not to suggest that no work is done to create it; it simply implies that much of the heavy lifting can be diffused by relying on a well-known theme.

Examining WoW through the expanded dimension of consumption reveals how multifaceted this dynamic within WoW really is. There is an intrinsic/extrinsic split in the location where the consumption occurs. Intrinsic consumption is that which occurs within the game, and this is further split between virtual goods and the game 'content' itself. Extrinsic consumption is that of goods that are external to the virtual world of the game.
One of the ways this occurs is merchandising, which Bryman treats as a separate category, discussed below. The intrinsic act of consumption is a heavily-integrated component of WoW: NPC scripted-agent sales-avatars and the mercantile goods they represent are everywhere. A further part of this consumption is the heavily-stylized animations that accompany an avatar's use of consumable items within the game, including the use of virtual food and drink, and the drinking of potions to achieve various in-game benefits. Within WoW, consumption is more than desirable; it is necessary.
One of the game's major innovations in the consumption dimension relative to the MMO market was the auction house. This was a point of divergence from the 'bazaar' model that was prevalent in MMOs at the time, as typified by the vendors of Ultima Online and the Bazaar zone of the original EverQuest. The formal integration of in-game tools to facilitate trade arose from the social innovations that the players of those games undertook to enable trade, providing an example of user-led innovation (Von Hippel, 1987), including marked runes in UO and the 'neutral ground' of the Commonlands tunnel in EQ. WoW changed the dynamic of how players interacted with virtual goods and with each other within the game (Bartle, 2004), by incorporating an eBay-like dynamic with short auction lengths and buyout pricing, which served to entice participants to check back regularly.

Bryman's dimension of merchandising is closely linked with consumption, but prevalent nonetheless. It is fairly obvious: there exists a wide variety of branded goods, and Blizzard has been Lucas-like in the range of WoW-branded products available. Goods and services exist either to facilitate game-play (i.e., the subscription to Blizzard to provide the service) or to facilitate interaction with other players (e.g., tickets to BlizzCon, or additional consumer products). Hardcover atlases and art books, licensed fiction and comic books, computer peripherals, action figures, and more all exist for the WoW devotee. There has also been horizontal integration of the WoW brand within the gaming marketplace, with a WoW-themed board game, an adaptation of WoW using the Open Gaming License (OGL) (Wizards of the Coast, 2004) of the 3rd edition of Dungeons and Dragons, and a collectible card game (CCG) that featured cards that could be redeemed for in-game tokens and rewards, such as mounts and cosmetic goods, showing a level of synergy between the virtual and material worlds that had not existed prior.
Unlicensed, third-party products can benefit from an association with the WoW brand as well. Websites discussing WoW can see a benefit from increased traffic, and other third-party services include commissioned character portrait work, game guides and atlases, and customized plastic figurines. While SOE had also pursued extrinsic licensed products and hosted the Fan Faire convention, the scale at which Blizzard leveraged WoW as a brand was unseen in the MMO market.

The final dimension is that of performative labour. With the expanded definition of this category, performativity occurs in two ways. Bryman would likely concentrate on the work that is conducted by the customer service employees and guides within the game itself. Indeed, this kind of labour has been present within the MMO field since its inception, from the appearance of the operators of early MUDs as 'Wizards' within the game, to the guide programs, volunteer or paid, conducted by most modern MMOs (Bartle, 2003). However, WoW is hardly the exemplar within the field in this area. That distinction would likely go to the recently closed Matrix Online (MxO), which had paid actors using avatars in-game to lead events and story-lines. This practice was discontinued relatively early on, and by the closing of MxO the role was filled entirely by volunteers.

MMOs expose how shallow performative labour was as a concept in Bryman's original proposal: he made no allowance for the performativity that is conducted by the customers themselves (Turkle, 1995). As an RPG, WoW provides customers wide latitude in constructing an identity for themselves.
The roles that a given player may engage in vary depending on context, whether they are the roles dictated by game logic within a small group or raid; the roles that may be imposed socially, by guild, faction, group, or extrinsic factors (friends, family, gender, etc.); or the roles that one may undertake of one's own volition in both intrinsic and extrinsic situations (Williams et al., 2006).
Parallel Processes: Homogenization v. Heterogeneity

Picture a 2x2 matrix: one axis is marked with the processes of McDonaldization and Disneyization, operating in parallel in the different market segments which are marked on the other axis. McDonaldization represents a process of increasing homogenization of a given firm with respect to its market; Disneyization represents increasing heterogeneity of the product. In the real-world sector, the namesakes of each process stand as representative of their category. As Bryman (2004) has noted, the boundary between the two processes is porous: both McDonald's and Disney have elements which can best be explained by processes in the other stream. However, the overall trend for each firm is to remain in the stream named after it.

Given this, we can look to some modern exemplars of the processes: those firms that have taken the original processes to new heights, surpassing the namesakes in some (or all) of the dimensions. Wal-Mart has taken the rationalization processes of McDonald's as far as allowable in the retail sector (Fishman, 2006; Hicks, 2007). Similarly, Las Vegas is the modern expression of Disney: a playground for adults who grew up with Disneyland as a vacation destination. Bryman (2004) notes the ways in which the modern casinos are emblematic of the Disneyization process: theming occurs in each different casino complex; hybrid consumption occurs everywhere (indeed, the entirety of the Strip is one giant zone of consumption); merchandise is also available everywhere; and all of the 'front-facing' labour in the entertainment complex is performative. But Ritzer (2001) notes how casinos operate in a McDonaldized framework at the same time, suggesting that Las Vegas lies closer to that porous boundary between the two streams. On the horizon, Dubai could also stake a claim to being a modern exemplar of the Disneyization process.
And while both Wal-Mart and Las Vegas have taken the processes to new heights in their respective spheres, this by no means suggests that either McDonald's or Disney is no longer relevant.

Extending both streams into the online realm, AOL and WoW become the exemplars of the cybered versions of the processes. While both Ritzer and Bryman look to cybered analogues of malls and casinos, such as Amazon.com, as emblematic of dematerialized consumption, they neglect games as one of these potential sites. Within the online realm, firms such as Google may be singularly dominant within their respective spheres (and indeed, several authors have commented on the Google-ization of various spheres (Mckay, 2009)), but AOL and WoW both represent the two sides of the Ritzer-Bryman dynamic by virtue of the similarity of how they extract revenue from consumers: the monthly billing of customers for unlimited access to their service.

Market Dominance

Both AOL and World of Warcraft came to dominate their respective markets within a remarkably short amount of time. Both utilized elements of the homogenization and heterogeneity found in Ritzer's and Bryman's parallel processes. Yet despite the differences in both the companies and their markets, the steps that were undertaken by each firm were remarkably similar. Recalling the histories of the rise of each product, four stages in which these steps took place are apparent. Each successive stage leveraged the results of the one before to produce a mutually-reinforcing system that developed and sustained growth, leading to the firms' eventual market dominance. These stages are not meant to emulate Ritzer's and Bryman's models - which outline the dimensions over which a process occurs - but rather to provide an overview of the steps that occurred during the firms' respective rises. The stages of the process are simplification, saturation, engulfment, and establishment, and they occur as follows:
Simplification is most evident in the technical requirements of a given product. It evokes McDonaldization's notions of efficiency and predictability at every turn, providing a carefully constructed user experience to the largest extent possible. This is evidence of a strong intentionality in the design process (Feng & Feenberg, 2008). AOL provided a simple and accessible interface that obscured the technical details of connecting to the internet, and provided a consistent graphical interface. WoW maximized its market by lowering the technical requirements relative to its competitors, and allowed Mac users to play alongside those playing on Windows systems. Blizzard created a highly stylized, artistically rendered gamespace, and also simplified the user interface, hiding much from novice users to facilitate easy play wherever possible. For both AOL and WoW, this stage was illustrative of Borgmann's "device paradigm" of technological interaction (1984).

Saturation is the next stage, and it is where the largest innovations by both companies took place. Both saturated their target markets with tactics that were novel in their respective spheres; Disneyization's elements of consumption and merchandising begin to come into play here. The leveraging of social ties and cheap distribution allowed a mass of potential new customers to easily access and try the product for the first time. WoW leveraged Blizzard's existing customer base to bring new participants to the MMO market. These new users found the simplified system created in the first stage, and this enabled a high level of retention of new users.

Engulfment is the stage wherein the high levels of retention from the first two stages combine to produce explosive growth, driven by factors from Bryman's performativity and Ritzer's calculability. The extrinsic factors of the first two stages fall away and factors intrinsic to the product itself start to dominate.
While no one begins playing WoW with the goal of slaying Onyxia, this may become an intrinsic motivation once the virtual world is engaged with. As consumers decide they are getting good value for the money, and as in-game social ties become stronger, the engulfing firms can mitigate the customer turnover or 'churn' that most firms in their markets experience. By continuing the saturation practices of the second stage, player retention allows new growth of the customer base to become additive, not merely a replacement of departing users.

Finally, we come to establishment, where the firm comes to represent not just the majority of users within its market but also the first point of reference for many of those users. The product becomes the measure by which all competition within that market is judged; even historical antecedents come to be seen through the new lens. A Kuhnian paradigm shift has occurred within the respective spheres, and the brand name of the market leader comes to be assumed as representative of its market or class of goods. Just as the iPod is symbolic of the portable mp3-player market, for a time in the mid-1990s AOL represented the dominant paradigm with respect to internet usage (Swisher, 2003); for MMOs, World of Warcraft is the dominant paradigm.

Virtual Worlds under the WoW Paradigm

The implications of the paradigm shift in virtual worlds brought about by WoW's growth model can be seen in three key areas: the niche market that MMOs occupied prior to WoW's launch; the trend towards standardization and interoperability amongst MMOs; and the change in alternative business models and revenue streams.


In 2004, Everquest was king of the MMO market with 500,000 subscribers. Even with the large number of titles available (Bartle, 2003; Woodcock, 2008), the total user-base for MMOs was only a fraction of that of other video games. With WoW's release, the market expanded outside its traditional niche. New players were introduced to MMOs and virtual worlds from Blizzard's other titles. Macintosh users, with their large presence within the college community, finally had a viable MMO option that was native to the Mac OS and interoperated with the rest of the user-base. A parallel may be seen with AOL's opening of Usenet to its users in 1993, the infamous 'September that Never Ended' (Raymond, 1996). While a number of Usenet groups were able to weather the storm of an influx of new users and maintain their niche within the larger internet, many were lost. It remains to be seen whether the users that WoW brought to the MMO market will remain consumers of other MMOs and virtual worlds, or whether WoW's entrance into the market represents a November that never ended for the MMO market, with the bulk of new users drifting off to different pursuits over time.

The second implication is one of standardization and interoperability. Standardization within the MMO market appears to be occurring not through an explicit process with a governing body, but via emulation and iteration. New MMO releases since 2004 have tended to mirror WoW in user-interface, on-screen layout and other design aspects, in order to attract WoW's current customers and to prevent those customers from rejecting their offering for diverging too far from the genre conventions they are used to. As noted above, WoW represents the first experience with MMOs for many of its users, and so defines the expectations those users have about virtual worlds products. The gravity of WoW's mass within the marketplace becomes a lens through which all customers' expectations are filtered.
Similarly, cross-MMO interoperability is still some way off. One of the side-effects of AOL allowing its users to cross its borders to the internet at large was the necessity for AOL to deal seamlessly with the technological systems that were already in place. There are already signs of this occurring around the margins of the MMO market, with innovations such as Sony's Station Pass providing access to any game within its stable for a single monthly fee. Other interoperability initiatives such as the OpenSim project (Linden Lab, 2008) or the MPEG-V standard may provide a key alternative, but as capital is always a standardizing influence, new business models and revenue streams in the MMO market may better indicate where the future really lies.

While WoW may be the dominant MMO player today, it could be the last of its kind, the largest of those charging a monthly fee for access to its servers. Already in 2009, free-to-play MMOs supported by alternative revenue streams such as advertising, retail, or micro-payment based models are on the rise (Free Realms, DDOnline). Other models, such as lifetime subscriptions (as seen with Lord of the Rings Online) or community-supported play on free servers, may also find uptake (Lees, 2006). Micro-payments by necessity require a means to handle the funds being transferred; once this is in place, what is being transferred is less relevant. The effective market size of the MMO is restricted not by the user-base of one title, but by that of all titles using that model (Krugman, 1991). If multiple virtual realms all use the same system, then the micro-payment will become a predictable, calculable and efficient resource, a homogenizing force that influences the shape and growth of those MMOs that utilize it.


This is not to suggest that such a change would spell doom for World of Warcraft. Blizzard Entertainment's stewardship of its title has been remarkably sure-footed, and any issues have been dealt with quickly and professionally. If a sea change does occur within the MMO industry, there is no reason WoW could not change with it. It may prove resilient to any underlying currents in the water, unlike AOL, which misread the signals of its industry and foundered (Swisher, 2003).

Conclusion

The aim has not been to suggest that World of Warcraft is the McMMO, but rather that it was the MMO that was most heavily McDonaldized relative to the market at the time of its launch in 2004. The aspects of McDonaldization which WoW evoked, promoting efficiency via a simplified and intuitive UI and communicating more information about the world to the player to create a more calculable and predictable experience, all contributed to WoW's success. Using these methods, Blizzard was able to provide a relatively consistent and homogeneous experience across the WoW user-base, and this consistency contributed to WoW's explosive growth. Arguments that the user experience of WoW is radically different depending on the stage of the game the user is at are not without merit; however, regardless of how the user is playing the game, be it as a new user, a player heavily involved in PvP, or a member of an end-game raiding guild, there is an internal degree of consistency to the user experience within these phases. Nor am I suggesting that WoW is CyberDisneyland incarnate: all MMOs are Disney-ized versions of Benedikt's (1991) cyberspaces to some extent. WoW simply evoked the key characteristics of this Disney-ized model better than its competitors at the time, whether through distinct thematic ties throughout various locales in the game, performative aspects of play on display via PvP, or hybrid consumption and competition via the auction house mechanics.
Several newer MMOs, such as Free Realms, Disney's ToonTown Online, Hello Kitty Online, and Lego Universe, emphasize the Disney model of controlled heterogeneity to a greater degree than WoW attempted. Whether they will be successful in this endeavor remains to be seen. Ultimately, it is these two parallel processes that led WoW, as with AOL before it, to achieve dominance in its market via four stages that could be grouped as "AOLification" or "Wowification". Intrinsic features may retain a customer, but it is for extrinsic, and often social, reasons that these customers are engaged in the first place. As WoW is the first point of exposure for the vast majority of players of MMOs and virtual worlds, it is now the de facto reference point for any and all discussions involving the subject. The features and play-style of this MMO have become the lingua franca of the field, and future development is predicated on its models, until another homogenizing force provides a new paradigm to the field.


Bibliography

Abbate, J. (1999). Inventing the Internet. Cambridge, MA: MIT Press.
Bainbridge, S., et al. (2007). "The Scientific Research Potential of Virtual Worlds". Science, 317, 472. DOI: 10.1126/science.1146930
Bartle, R. (2003). Designing Virtual Worlds. New Riders Press.
Bartle, R. (2004). "The Pitfalls of Virtual Property". Retrieved from http://www.mud.co.uk/richard/povp.pdf
Bedbury, S. (2002). A New Brand World. New York: Penguin.
Benedikt, M. (Ed.) (1991). Cyberspace: First Steps. Cambridge, MA: MIT Press.
Borgmann, A. (1984). Technology and the Character of Contemporary Life: A Philosophical Inquiry. Chicago: University of Chicago Press.
Bryman, A. (2004). The Disneyization of Society. Thousand Oaks, CA: SAGE.
Butler, J. (1990). Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge.
Cassidy, J. (2000). "Is A.O.L.'s bubble about to burst?" The New Yorker, January 24, 2000, p. 25.
Castronova, E. (2007). Exodus to the Virtual World: How Online Fun Is Changing Reality. New York: Palgrave Macmillan.
Chapman, M. R. (2006). In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters, 2nd ed. New York: Apress.
Davis, S. (2000). Brand Asset Management. San Francisco: Jossey-Bass.
Douglas, M., & Isherwood, B. (1979). The World of Goods. New York: Basic Books.
Ducheneaut, N., Yee, N., Nickell, E., & Moore, R. J. (2006). "Building an MMO with mass appeal: A look at gameplay in World of Warcraft". Games and Culture, 1(4), 281.
Feng, P., & Feenberg, A. (2008). Thinking about design: Critical theory of technology and the design process. In P. Vermaas, P. Kroes, A. Light, & S. Moore (Eds.), Philosophy and Design: From Engineering to Architecture (pp. 105-118). New York: Springer.
Fishman, C. (2006). The Wal-Mart Effect: How the World's Most Powerful Company Really Works and How It's Transforming the American Economy. New York: Penguin Press.
Gottdiener, M. (1997). The Theming of America: Dreams, Visions, and Commercial Spaces. Boulder, CO: Westview.
Hicks, M. J. (2007). The Local Economic Impact of Wal-Mart. Youngstown, NY: Cambria Press.
Hochschild, A. R. (1983). The Managed Heart. Berkeley, CA: University of California Press.
Holt, D. B. (1995). "How Consumers Consume: A Typology of Consumption Practices". The Journal of Consumer Research, 22(1), 1-16.


Koster, R. (2006). "Measuring MMOs". Retrieved from http://www.raphkoster.com/2006/06/01/measuring-mmos/ on 10/16/2009.
Krugman, P. (1991). "Increasing Returns and Economic Geography". Journal of Political Economy, 99, 483-499.
Lees, J. (2006). "Develop: Everything you know about MMOs is wrong - apparently". Retrieved from http://www.joystiq.com/2006/07/14/develop-everything-you-know-about-mmos-is-wrong-apparently-/ on 10/17/2009.
Linden Lab (2008). "Linden Lab and IBM Achieve Major Virtual World Interoperability Milestone". Retrieved from http://lindenlab.com/pressroom/releases/07_08_08
Loxley, J. (2007). Performativity. New York: Routledge.
McKay, L. (2009). "The google-ization of CRM". CRM Magazine, 13(11), 23-26.
Minsky, M. (1984). "Afterword" in Vinge, V., "True Names". Retrieved from http://web.media.mit.edu/~minsky/papers/TrueNames.Afterword.html
Nelson, R., & Winter, S. (1982). An Evolutionary Theory of Economic Change. Cambridge, MA: Belknap Press.
Raymond, E. (Ed.) (1996). The New Hacker's Dictionary, 3rd ed. Cambridge: MIT Press.
Ritzer, G. (1993). The McDonaldization of Society. Thousand Oaks, CA: Pine Forge Press.
Ritzer, G. (2000). The McDonaldization of Society, New Century ed. Thousand Oaks, CA: Pine Forge Press.
Ritzer, G. (2001). Explorations in the Sociology of Consumption: Fast Food, Credit Cards, and Casinos. Thousand Oaks, CA: SAGE.
Schmitt, B., & Simonson, A. (1997). Marketing Aesthetics: The Strategic Management of Brands, Identity, and Image. New York: Free Press.
Seay, A. F., Jerome, W. J., Lee, K. S., & Kraut, R. (2004). "Project Massive: A Study of Online Gaming Communities". Conference on Human Factors in Computing Systems, April 24-29, 2004, Vienna, Austria.
Silverman, M., & Simon, B. (2009). "Discipline and Dragon Kill Points in the Online Power Game". Games and Culture, 4(4), 353-378.
Slashdot (2005). "World of Warcraft Duping Bug Found". Retrieved from http://games.slashdot.org/article.pl?sid=05/07/19/1644250 on 10/17/2009.
Stauffer, D. (2000). It's a Wired Wired World: Business the AOL Way. Milford, CT: Capstone Publishing.
Swisher, K. (1998). AOL.com: How Steve Case Beat Bill Gates, Nailed the Netheads, and Made Millions in the War for the Web. New York: Random House.
Swisher, K. (2003). There Must Be a Pony in Here Somewhere: The AOL Time Warner Debacle and the Quest for a Digital Future. New York: Crown Business.
Turkle, S. (1995). Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster.


van Rijswijk, J. (2008). "The origin of the online games industry, its characteristics and trends for the future". Journal of Telecommunications Management, 1(4), 374-380.
Von Hippel, E. (1987). The Sources of Innovation. New York: Oxford University Press.
Wiley, H. (2009). "Welcome to the 2009 Fortune 500". Fortune, May 4, 2009, 159(9), p. 14. Retrieved from http://money.cnn.com/magazines/fortune/fortune500/2009/snapshots/2255.html
Williams, D., et al. (2006). "From Tree House to Barracks: The Social Life of Guilds in World of Warcraft". Games and Culture, 1, 338-361.
Wizards of the Coast (2004). Open Game License v1.0a. Retrieved from http://www.wizards.com/d20/files/OGLv1.0a.rtf
Wolf, M. J. P. (2008). The video game industry crash. In M. J. P. Wolf (Ed.), The Video Game Explosion: A History from Pong to PlayStation® and Beyond (pp. 103-106). Westport, CT: Greenwood Press.
Woodcock, B. S. "An Analysis of MMOG Subscription Growth", version 23.0. Retrieved April 8, 2008 from http://www.mmogchart.com/charts/



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Barriers to Efficient Virtual Business Transactions By ArminasX Saiman, Virtual Business Owner/Operator

Abstract

With the availability of business transaction capability within virtual worlds like Second Life, enterprising individuals and teams have established businesses that operate entirely within the realm of virtual reality. These wholly-virtual business operations act much like real-life businesses: they must develop and manufacture products or services, advertise, sell, and fulfill deliveries. A complete lifecycle of business events takes place within the virtual world. The virtual business owner is presented with a seemingly complete set of tools to perform all actions required by each stage of the business lifecycle. However, over the past several years virtual business owners have begun to discover limitations and missing elements in these business transaction protocols. This paper identifies the more notable limitations facing today's virtual business owners. The author has owned and operated such a virtual business for over two years, beginning with sales of a single virtual product on a web-based sales service in 2006 and growing to a large in-world operation covering ¼ of a region and selling over 200 unique products today. In real life, the author is a senior Information Technology manager.

Keywords: Second Life; economics; business; transactions; standards.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Barriers to Efficient Virtual Business Transactions 4

Barriers to Efficient Virtual Business Transactions By ArminasX Saiman, Virtual Business Owner/Operator

One of the most distinctive features of the Second Life virtual world is its economy. Based on a virtual currency, the Linden Dollar, the Second Life monetary platform provides the means to reliably purchase and receive virtual items. While interesting on its own, the economy became much more important when an easy-to-use currency exchange emerged, permitting avatars to convert Linden Dollars (L$) into US dollars and vice versa. The ability to transform virtual currency into real-world money spurred the development of a wide variety of virtual businesses catering to the needs of Second Life residents. The continued expansion of some of these businesses demonstrated successful approaches to virtual business operation, and intense competition within virtual market segments caused virtual business owners to adapt and increase their effectiveness in various ways. Successful virtual business owners quickly realized that the process of doing business in virtual reality is in fact very similar to its real-world counterpart: they must perform a repeated series of steps that gradually improve their product offerings, and ultimately their return on investment. In simplified form, the author's diagram illustrates the basic steps involved in operating a virtual (or any) business:

Figure 1: The Business Cycle


Second Life’s economic features would appear to permit these activities to take place, and indeed a significant amount of business activity does. The key features of the Second Life platform that permit business transactions include:

• Ability to mark items for sale at a specified price
• Ability for avatars to receive goods after their Linden Dollar account is debited during “Buy” transactions
• Ability to “Gift” (or transfer) funds directly to another avatar
• The right to retain ownership of items created
• A permissions regime capable of protecting the intellectual property of the maker
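The interplay of these features can be illustrated with a minimal toy model. The sketch below is purely illustrative (Python is used for exposition only; Second Life exposes no such API, and all class and function names are the author's inventions): a "Buy" transaction debits the buyer's L$ balance, credits the seller, and delivers an item whose creator and permission flags travel with it.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    price: int                  # price in Linden Dollars (L$)
    creator: str                # ownership of created items is retained
    can_copy: bool = True       # "Copy" permission
    can_transfer: bool = False  # "Transfer" permission

@dataclass
class Avatar:
    name: str
    balance: int = 0                               # L$ account
    inventory: list = field(default_factory=list)

def buy(buyer: Avatar, seller: Avatar, item: Item) -> bool:
    """Model of a "Buy": debit the buyer's L$ account, credit the
    seller, and deliver the item to the buyer's inventory."""
    if buyer.balance < item.price:
        return False
    buyer.balance -= item.price
    seller.balance += item.price
    buyer.inventory.append(item)
    return True
```

Most of the problems discussed below arise because this simple one-buyer, one-seller, in-proximity transaction is essentially the only shape the platform supports.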

Nevertheless, experienced virtual business operators encounter issues and inefficiencies during the business cycle, some of which are due to the primitive state of these features and the protocols they are built upon. It is the view of the author that efficiencies in virtual business could be obtained by instituting several additional standards. In the absence of the required features and standards, virtual business owners have resorted to alternative solutions, sometimes involving complex custom-made scripts and unusual manual procedures. Often these “by-passes” are unique to the store and thus unfamiliar to visiting shoppers. Unfamiliarity is an impediment to business, as a percentage of shoppers will be baffled by even slight differences in procedure, resulting in fewer sales. The Design, Testing and Manufacturing stages of the business process generally work well, but enhancements to protocols may be required to improve the efficiency of the other stages.

Advertising

Problem: Identification of Originating Store
Advertising in a virtual world is notoriously difficult, but a complete analysis of that subject is beyond the scope of this document. Nevertheless, one of the most effective advertising mechanisms is “word of mouth,” in which great products are mentioned between friends, or, more specifically, the objects in question are inspected to determine where they can be purchased. In practice, the inspector must execute a tortuous sequence to determine the location of the originating store. A typical word-of-mouth sequence proceeds as follows:

• An object like a shirt or a car is identified as desirable
• The object is edited
• The maker’s identity is located within the object’s properties
• The maker’s profile is opened
• The maker’s profile’s “Picks” and/or “Classified Ads” are inspected manually in an effort to determine the originating store. This is not always possible, as Profile and Classified information is manually entered by the maker and may be out of date or even missing entirely.


Eventually, if avatars are sufficiently desperate for a purchase, a teleport to the identified store takes place, where hopefully a purchase is actually made. However, in larger stores determining the location of a specific product can be quite difficult. Clearly, this process is not only laborious but also inaccurate, with no guarantee of a successful result. A standard could evolve to overcome this scenario, where objects could more directly enable a purchase. Potential solution approaches could include:

• Associate the object with a “store” or business identity rather than a personal avatar identity
• The object contains location information for the originating store, perhaps with a linked teleport button displayed in the object’s properties. The location could be either the store’s default location, or the current location of the product on the store shelves
• The object itself could remain in a “for sale” state, or quickly link to an external service that presents the object for sale

Sales

Problem: Custom Invoicing
Makers typically sell their wares in multiple ways, beyond the simple “click to buy” method provided in Second Life. Frequently makers are requested to create one-of-a-kind items by interested customers. Custom-made products are quite profitable, but there are no standard methods of invoicing customers. Today makers use a variety of techniques, including:

• Boxing up the custom-made products and setting the box for sale at the agreed price. This requires extra effort by the maker and also requires the buyer to virtually travel to the location where the custom box is “rezzed”
• Transferring a copy of the custom box to the buyer’s inventory and awaiting a gift of sufficient funds in return from the buyer. However, this requires one party to move first, and sufficient trust might not be available between parties for this to succeed

Potential solutions could include:

• A standard in-world escrow service that accepts items from makers and releases them to buyers when purchase conditions are met
• Linkage to an offline purchasing service that provides private and exclusive transactions
• An ability to purchase items without being in proximity to the virtual object. For example, a custom object could be offered for sale directly by an instant message purchase invitation
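The escrow idea in particular lends itself to a short protocol sketch. The fragment below is hypothetical (no such in-world service exists; names and Python are the author's choices for illustration): the service holds the item and the funds, and releases both only once each party has fulfilled its side, which removes the "who moves first" trust problem.

```python
class Escrow:
    """Hypothetical in-world escrow: holds a maker's item and a buyer's
    funds, releasing both only when the purchase conditions are met."""

    def __init__(self, item: str, price: int):
        self.item = item
        self.price = price
        self.item_held = False   # maker has deposited the item
        self.funds_held = 0      # L$ deposited so far by the buyer
        self.settled = False

    def deposit_item(self) -> None:
        self.item_held = True

    def deposit_funds(self, amount: int) -> None:
        self.funds_held += amount

    def settle(self):
        """Release the item to the buyer and the funds to the maker,
        or report why settlement cannot yet occur."""
        if self.settled:
            return ("error", "already settled")
        if not self.item_held:
            return ("pending", "waiting for item")
        if self.funds_held < self.price:
            return ("pending", "waiting for funds")
        self.settled = True
        return ("released", self.item, self.funds_held)
```

Because deposits may arrive in either order and settlement only fires when both sides are complete, neither party has to trust the other, only the escrow operator.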

Problem: Shared Ownership of Salable Products
Many in-world businesses are owned and operated by partnerships, and this poses a problem for revenue sharing among the partners. The existing sales protocol assumes that there is one and only one avatar selling a given object. But in many cases partnerships exist in which a complex business is composed of avatars with differing skills. How can revenue be shared when the objects being sold are owned by only one avatar?


Ad-hoc solutions do currently exist, but in general they all take the form of a vending script that splits incoming payments into shares for collaborating makers. However, the script itself must be owned by one avatar, and therein lies the problem: significant trust must exist between the collaborators, lest the script or its configuration be unethically modified. Worse, there are many types of such vendor scripts that vary significantly in usage and configuration, and many product types do not show well if sold from vendor boxes. Potential solutions could include:

• A standard revenue-splitting vendor issued by a trustworthy source, such as the Grid’s owner
• Ability to mark for-sale objects as being owned by a group, with a method of specifying payment splits
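The core of any revenue-splitting vendor is a small piece of arithmetic, complicated only by the fact that L$ amounts are whole numbers. A sketch (Python for illustration; an actual in-world vendor would be an LSL script, and the function name here is invented):

```python
def split_payment(amount: int, shares: dict) -> dict:
    """Divide an incoming L$ payment among collaborators according to
    fixed percentage shares. Since L$ amounts are integral, any
    rounding remainder is assigned to the first-listed partner."""
    if sum(shares.values()) != 100:
        raise ValueError("shares must total 100 percent")
    payouts = {name: amount * pct // 100 for name, pct in shares.items()}
    remainder = amount - sum(payouts.values())
    first = next(iter(payouts))
    payouts[first] += remainder
    return payouts
```

A standardized vendor would need exactly this logic plus one thing no private script can provide: a trustworthy, tamper-evident owner for the configuration.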

Problem: Gifts from One Avatar to Another
While shopping may be a very popular activity in Second Life, there is an underlying assumption that is incorrect and leads to inconvenience for many: avatars always shop for themselves. In fact, avatars frequently shop for others; they buy gifts. Sometimes this is not an issue, because purchased objects might be marked with “Transfer” permissions, and in that case they can be freely given to a gift recipient. However, many products are marked with “Copy” permissions only for various reasons. This means that once the gifter purchases the object, it cannot be given away, even though the gifter has no intention of using the item. In practice this issue is very problematic, as gifters must teleport the recipient to a store location and manually indicate which item to purchase. Another approach is to negotiate directly with the product’s maker; the maker is paid directly and asked to transfer a copy of the item to the recipient. The gift typically manifests itself for the recipient as an object offer from an unknown avatar (the maker). Sometimes these offers are refused, because the recipient has no expectation of an incoming object from an unknown avatar. Surprise “Copy” presents are thus a rarity. Potential solutions could include:

• The ability to transfer “Copy” items, perhaps by transferring all copies of the item at once, thus preserving the original intent of “Copy” permission
• The ability to transfer “Copy” items once after purchase
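The second option, "transfer once after purchase," can be modeled as a one-shot flag carried by the item itself. The following sketch is hypothetical (Python for illustration; names are invented, and the real permission system would have to enforce this server-side):

```python
from dataclasses import dataclass

@dataclass
class CopyOnlyItem:
    name: str
    gift_used: bool = False  # one post-purchase transfer allowed

def gift(item: CopyOnlyItem, giver: list, recipient: list) -> bool:
    """Move a "Copy"-only item from the giver's inventory to the
    recipient's. Permitted exactly once; afterwards the item behaves
    as an ordinary no-transfer Copy item."""
    if item.gift_used or item not in giver:
        return False
    item.gift_used = True
    giver.remove(item)
    recipient.append(item)
    return True
```

Because the flag is consumed on first use, the maker's intent behind "Copy"-only permissions (no unlimited redistribution) is preserved while ordinary gifting becomes possible.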

Problem: Gift Cards
An alternative to purchasing a gift is to provide a gift card, with which the recipient can theoretically purchase any item within the voucher’s limit at the designated store. This practice is well understood in real-world stores, but within Second Life there are no such standards. De facto solutions are quite varied, as a variety of approaches are used to implement gift cards. The simplest is a notecard containing a textual explanation of the gift and redemption process. This notecard-voucher is given to the recipient. Meanwhile, the maker records a credit for the recipient and manually decrements it as the recipient selects merchandise. This labor-intensive approach does work, but it is entirely non-scalable and cannot be used in larger stores.



More complex solutions exist, involving sophisticated vendor scripts and gift tokens as objects. Gift card solutions are unpredictable from store to store, and as a result shoppers can become confused, overwhelmed or insufficiently interested to learn how to acquire and use a gift card from an unfamiliar device. Potential solutions could include:

• A standard scripted object representing a gift card that is associated with a specific store. Makers can provide them to gifters, who fill them with the desired amount of Linden dollars
• Stores could sell currency units usable only at that store. The currency would be at par with the Linden dollar
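Both proposals reduce to the same primitive: store-bound stored value at par with the L$. A minimal behavioral sketch (illustrative Python; the class and the store name are hypothetical):

```python
class GiftCard:
    """Sketch of a store-bound gift card: value is at par with the
    Linden Dollar but redeemable only at the issuing store."""

    def __init__(self, store: str, balance: int):
        self.store = store      # issuing store's identity
        self.balance = balance  # remaining L$ value

    def redeem(self, store: str, price: int) -> bool:
        """Deduct a purchase; refuse other stores and overdrafts."""
        if store != self.store or price > self.balance:
            return False
        self.balance -= price
        return True
```

The standardization benefit is that a shopper who has used one such card has used them all; the confusion of per-store redemption rituals disappears.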

Problem: Searching for Products
Shopping is a popular pastime, and is often done on a whim. But there are many times when very specific items are required. A typical scenario is the need for themed attachments for an event. Faced with such unusual needs, the shopping avatar must embark on a potentially endless search for the desired items. Some may enjoy a shopping challenge of this type, but many do not and become quite frustrated by the experience.

Solutions employed are quite varied, ranging from brute-force store-by-store search to using the external web-based XStreetSL search facilities. If a candidate item is identified through XStreetSL, it is not necessarily purchased there, as the buyer may wish to visually inspect the item in-world. To do so, the buyer must translate the XStreetSL listing to an in-world location; sometimes this is problematic if the XStreetSL seller has incorrect listing information. The in-world search may also be used, but it is notoriously difficult to use due to the limited number of results displayed, the lack of filters, rankings that seem to be based on the wrong criteria, and extraneous listings produced by vendors trying to “game” the search system.

Even if a buyer has identified the store selling the desired item, the search does not end. A large store may occupy an entire sim, with shelves, walls, levels and buildings full of hundreds or even thousands of products, any one of which might be the item. In-store searching can be extremely difficult, as there are no standard scanning mechanisms for locating a specific item. If the search persists beyond the tolerance of the buyer, they may give up and go to another store. In the real world, physics tends to geographically trap the buyer in the store until the item is located, whereas in a virtual world, the next store is merely a teleport away.
Some stores overcome this difficulty by hiring onsite “guides” or “shopping assistants” to receive visitors and direct them to the appropriate item. However, this solution is manual, expensive and can be detrimental to future sales if inappropriately skilled individuals are hired into the role.


Potential solutions could include:

• A vastly improved in-world search facility, perhaps aided by an ability to tag for-sale objects with useful and standard meta-data
• An ability to link off-world sales systems such as XStreetSL with the for-sale object’s current in-world location
• A standard capability to detect specified for-sale items within a scanning radius
• Exposure of for-sale object properties via XML, such that third parties may develop increasingly sophisticated or specialized search services
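The meta-data proposal implies search over standardized key-value tags. A toy filter (purely illustrative Python; the tag vocabulary shown here is invented, not an existing standard) shows how even simple shared metadata would enable the queries that in-world search cannot express today, such as "pirate-themed attachments under L$200":

```python
def search(listings, max_price=None, **tags):
    """Filter for-sale listings by exact-match metadata tags and an
    optional price ceiling. Each listing is a dict of standard tags."""
    results = []
    for obj in listings:
        if max_price is not None and obj.get("price", 0) > max_price:
            continue  # respect the price ceiling
        if all(obj.get(k) == v for k, v in tags.items()):
            results.append(obj)
    return results
```

Exposing the same tags via XML, as the last bullet suggests, would let third parties build far more sophisticated services on top of this primitive.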

Problem: Reselling

Wal-Mart and similar large distribution operations do not really exist within Second Life, even though the presence of a large centralized shopping station would vastly simplify the search for desired items. Large distribution operations do not manufacture the goods themselves; they simply acquire them from an array of suppliers and place them on their shelves, the value for shoppers being the known central location and near-guarantee of finding the goods. But such operations would have great difficulty implementing this approach, as a specific avatar must own every object. While a real-life supplier is paid per item by Wal-Mart, this is not possible in Second Life. Typically the maker is paid a one-time flat fee for the reseller’s right to sell an infinite number of copies. The maker does not necessarily receive compensation proportionate to the eventual value of the item, since makers frequently lack the skills to negotiate an appropriate payment. As a result of these difficulties, few mega-stores have emerged.

Potential solutions could include:
• An ability to sell bulk quantities of items at set rates to a reseller
• Automated royalty payments for resold items, based on rates established when the reseller purchases the master item from the maker

Fulfillment

Problem: Returns

Invariably, products may be found deficient, incorrect or simply purchased in error, resulting in a request for a product return. It is good business practice to accept returned items, as the goodwill generated usually far exceeds the value of the return. However, there are few standards in place to aid this process. Typically the maker must receive and acknowledge a manual request for a return via instant message. The maker must then verify that the product was actually purchased via a manual inspection of the transaction log, sometimes based on an inadequate description from the requestor. This process is especially difficult if the buyer and maker do not speak the same language. If the transaction is verified, then the buyer must actually return the item. However, if it is a “Copy” item, a return does not make sense: the buyer cannot transfer items back to the maker for verification, and the maker must trust the buyer to delete all copies from their inventory. In many cases the maker simply permits the buyer to keep the undesired items. Payments are returned by a manual gift payment from maker to buyer, which of course may be entered incorrectly.

Potential solutions could include:
• Just as there is a protocol for purchasing an item, there could be a protocol for returning one, perhaps involving a pie-menu “request return” command for items that had previously been purchased
• Standards regarding product naming in the transaction log, which could make it easier to validate the purchase
• A more sophisticated permission regime that could permit the return of “Copy” items
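To suggest how the first two ideas might combine, here is a minimal sketch of a return request validated automatically against the transaction log. The log format, names and state labels are invented for illustration; no such protocol exists in Second Life today:

```python
# Hypothetical transaction log keyed by (buyer, product). A "request return"
# command could check it and compute the refund, replacing the manual
# instant-message negotiation and the error-prone gift payment.
TRANSACTION_LOG = {("Ayla Resident", "Clockwork Wings"): {"paid": 300}}

def request_return(buyer, product, log=TRANSACTION_LOG):
    """Validate the return against the log; refund exactly what was paid."""
    sale = log.get((buyer, product))
    if sale is None:
        # No purchase record: the protocol rejects the request outright.
        return {"state": "REJECTED", "refund": 0}
    return {"state": "VERIFIED", "refund": sale["paid"]}

print(request_return("Ayla Resident", "Clockwork Wings"))  # {'state': 'VERIFIED', 'refund': 300}
```

Standard product naming matters here: the lookup only works if the product name in the log matches the name on the item being returned.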

Problem: Upgrades

When products are enhanced and new versions are created, the maker has two possible actions: they can attempt to upgrade all previously sold versions, or they can simply place the new version on sale and let existing customers upgrade by a new purchase. In some cases, the nature of the product requires all deployed instances to share a common version, and the maker has no choice but to push out the upgrade.

The solutions used by makers today are again quite varied, built by many ingenious scripters. Some products will self-upgrade by enquiring against a server, which may give a new copy to the owner of the item; others may notify the buyer that a new version may be obtained by presenting themselves at a proximity sensor. Yet another approach is for the maker to run a script that transfers new versions en masse to all known customers. These inconsistent approaches are, of course, confusing to customers, who may not understand what to do. Customers may end up with multiple versions of the same object and, if they are not careful, may use them incorrectly.

Potential solutions could include:
• A built-in ability for items to upgrade themselves that could be enabled by makers
• Standard product naming conventions, which would simplify any upgrade approach
• An ability to easily determine the current owners of a given product
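The "enquire against a server" pattern described above can be sketched as follows. The version registry and product name are hypothetical, and a real self-upgrading item would do this from an LSL script against an external web service:

```python
# Hypothetical registry of latest published versions, as a server might hold.
VERSION_SERVER = {"Clockwork Wings": "2.1.0"}

def parse_version(v: str):
    """Compare dotted version strings numerically, not lexically."""
    return tuple(int(part) for part in v.split("."))

def upgrade_available(product: str, installed: str, server=VERSION_SERVER) -> bool:
    """True when the server knows a strictly newer version than the installed one."""
    latest = server.get(product)
    return latest is not None and parse_version(latest) > parse_version(installed)

print(upgrade_available("Clockwork Wings", "2.0.3"))  # True
print(upgrade_available("Clockwork Wings", "2.1.0"))  # False
```

Note that numeric comparison is what a standard built-in upgrade check would need: a lexical comparison would wrongly rank "2.10" below "2.9".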

Problem: Discounts

Occasionally the maker may wish to offer a discounted price to existing customers of a product. For example, a new version of a product might be sold to existing customers at half price. But there is no reliable way to restrict who buys a given object: if two identical products were displayed, one at half price for upgraders and the other at full price for new customers, new customers would simply purchase the less expensive upgrade object and gain full benefit. Other solutions become very complex, particularly if the maker wishes the customer to have a straightforward upgrade experience.




Potential solutions could include:
• A standard means of detecting the presence or ownership of a copy of a given object
• A product registration service in which scripts could easily determine whether an avatar has previously purchased (or received) the item
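A product registration service of the kind proposed might behave like the following sketch. All names are invented, and the half-price policy is an illustrative assumption, not a platform rule:

```python
# A registry that sales scripts could report to and discount vendors could query.
class ProductRegistry:
    def __init__(self):
        self._owners = {}  # product name -> set of avatar names

    def register_sale(self, product, avatar):
        self._owners.setdefault(product, set()).add(avatar)

    def owns(self, product, avatar):
        return avatar in self._owners.get(product, set())

def upgrade_price(registry, product, avatar, full_price):
    """Half price for existing customers, full price otherwise (example policy)."""
    return full_price // 2 if registry.owns(product, avatar) else full_price

reg = ProductRegistry()
reg.register_sale("Clockwork Wings", "Ayla Resident")
print(upgrade_price(reg, "Clockwork Wings", "Ayla Resident", 300))  # 150
print(upgrade_price(reg, "Clockwork Wings", "New Customer", 300))   # 300
```

With such a lookup, a single display object could quote each shopper the correct price, removing the two-object loophole described above.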

Problem: Delivery confirmation

For the most part, the buying and delivery sequence works very well. However, occasionally it does not, as the grid or client software may be having difficulties. During these periods advisory messages sometimes warn buyers not to purchase items because they might pay but not receive the goods. Nevertheless, such risky purchases are still attempted, and some do indeed fail. Typically the maker is contacted by the customer, who often accuses the maker of deliberately taking their money and not delivering the item. After lengthy and awkward explanations of database problems, either money is returned or another copy of the item is transferred to the customer. However, the maker never knows for certain whether the item’s purchase actually failed. The customer could simply claim so in an effort to obtain a second copy. In practice the maker must assume the customer is honest and follow through.

Potential solutions could include:
• A delivery confirmation sequence that records the successful delivery of each sold item
• More robust sales transaction handling
• A method of redelivering or re-running suspect transactions
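A delivery-confirmation ledger along the lines of the first and third bullets could work roughly as follows. The record format is invented for illustration; the point is that unconfirmed sales become mechanically identifiable instead of being argued over in instant messages:

```python
# Each sale is recorded as paid-but-undelivered until a confirmation event
# (hypothetically fired when the item lands in the buyer's inventory) arrives.
sales = []

def record_sale(sale_id, buyer, item):
    sales.append({"id": sale_id, "buyer": buyer, "item": item, "delivered": False})

def confirm_delivery(sale_id):
    for s in sales:
        if s["id"] == sale_id:
            s["delivered"] = True

def suspect_transactions():
    """Paid-but-unconfirmed sales: candidates for automatic redelivery."""
    return [s["id"] for s in sales if not s["delivered"]]

record_sale(1, "Ayla Resident", "Clockwork Wings")
record_sale(2, "Ben Resident", "Top Hat")
confirm_delivery(1)
print(suspect_transactions())  # [2]
```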

Problem: Link to Offline sales

Most makers have both an in-world and a web-based storefront, typically at XStreetSL. Like in-world purchasing, the XStreetSL purchasing experience is reasonably well done and easy to use. However, the maker must maintain two separate inventories of products: one in-world and one on XStreetSL. If a new version is produced, then two storefronts must be updated. While this may seem a minor matter, it becomes larger when multiple objects are involved. For example, a maker might produce a line of clothes with separately purchasable colored versions; there might be two dozen such items. The same scale of work required to place them on sale in-world must be repeated to place all of the same objects for sale on XStreetSL. Because the linkage between XStreetSL and the Second Life grid is tenuous, twice as much work is required of the maker.

Potential solutions could include:
• Direct mapping of properties between in-world for-sale items and the external service
• Exposure of in-world for-sale items’ properties via XML, which could be consumed by multiple external web-based sales services
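The double-maintenance problem can be made concrete with a small sketch: given two product-to-version maps (an assumed representation, not an actual XStreetSL API), compute which listings on the mirror storefront are stale or missing relative to the primary one:

```python
# Assumed snapshots of the two storefronts: product name -> listed version.
inworld = {"Red Dress": "1.2", "Blue Dress": "1.2", "Green Dress": "1.0"}
xstreet = {"Red Dress": "1.2", "Blue Dress": "1.1"}

def stale_or_missing(primary, mirror):
    """Names of listings the mirror storefront must update to match the primary."""
    return sorted(name for name, ver in primary.items() if mirror.get(name) != ver)

print(stale_or_missing(inworld, xstreet))  # ['Blue Dress', 'Green Dress']
```

With a direct property mapping between the two systems, this reconciliation would happen automatically instead of being the maker's manual chore.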




Tracking & Feedback

Problem: Accounting & Transactions

Any business owner should monitor the progress of sales in an effort to weed out poorly performing products and produce more of the best-selling items. To do so, the maker must inspect the transaction log, which is available in HTML, XLS or XML formats. Unfortunately, the transaction log is actually a mix of store-sold items and personal activities. While the data is easily obtained, it must be manually inspected to remove non-store transactions that would obscure true store activities. Some makers solve this issue by creating a separate avatar that owns all items, thus isolating the accounting information. In some cases that solution may not be viable, because the avatar’s reputation may be required for marketing reasons and must appear directly associated with the products.

Potential solutions could include:
• If items were sold by a store entity and not an avatar, the store’s transaction log would immediately be more useful for analysis
• Tags applied to objects could be echoed in the transaction log, providing a means not only to identify sales items but to categorize them as well
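To illustrate the tag-echo idea, the sketch below assumes a hypothetical transaction-log export that gained a tag column; store sales could then be separated from personal activity and totaled per category mechanically:

```python
import csv
import io

# Invented log export: amount, description, and an echoed object tag.
LOG_CSV = """amount,description,tag
250,Steampunk Top Hat,store:hats
-100,gift to a friend,
300,Clockwork Wings,store:wings
"""

def store_totals_by_tag(csv_text):
    """Sum only tagged store sales, ignoring untagged personal transactions."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        tag = row["tag"]
        if tag.startswith("store:"):
            totals[tag] = totals.get(tag, 0) + int(row["amount"])
    return totals

print(store_totals_by_tag(LOG_CSV))  # {'store:hats': 250, 'store:wings': 300}
```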

Problem: Product Version Tracking

Products are often improved over time, resulting in multiple versions of a salable object. Responsible makers attempt to track the many versions by appending a version code to the object’s name or description, but this process is entirely manual and achievable only by the most disciplined makers. Novice makers typically have no notion of the need for versions and run into complications later.

Potential solutions could include:
• A “version” property for objects, separate from the name or description fields
• A simplified versioning service similar to those used for managing software components
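A versioning service along these lines might look like the following sketch; the auto-assigned numbering scheme is an illustrative assumption:

```python
# A service that records releases per product, so the version lives in a
# dedicated property rather than being appended to the object's name by hand.
class VersionService:
    def __init__(self):
        self._versions = {}  # product -> list of released version strings

    def release(self, product):
        """Record a new release and return its auto-assigned version string."""
        history = self._versions.setdefault(product, [])
        history.append(f"1.{len(history)}")
        return history[-1]

    def latest(self, product):
        return self._versions.get(product, ["<unreleased>"])[-1]

svc = VersionService()
svc.release("Clockwork Wings")        # '1.0'
svc.release("Clockwork Wings")        # '1.1'
print(svc.latest("Clockwork Wings"))  # 1.1
```

Because the service, not the maker, assigns version strings, even a novice maker gets consistent version tracking for free.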

Beyond Daily Business Operation

Problem: Selling a business

Just as products can be sold, so can an entire business. But two problems exist. First, the objects and property comprising the business are mixed with other items in the inventory of a specific avatar. Selling the business means these items must be specifically identified and transferred to the purchasing avatar. In some cases there can be a great many items, including those not currently for sale, and the transfer exercise becomes burdensome. Second, many businesses are owned in partnership by two or more avatars. In these situations, the sale of a business becomes a complex matter of inventory searching, multiple object transfers and several manual money transfers, all of which offer plenty of opportunity for error.



This issue increases the friction encountered in business sales, and is perhaps one of the factors preventing larger virtual businesses from emerging: a virtual business cannot easily grow by acquisition.

Potential solutions could include:
• Establishment of a business as an entity that can own objects; if this were so, the business and all of its property and objects could more easily be sold
• Tagging of inventory items, which could enable quick identification of business-related objects

Vendors as Symptom

Vendors in a virtual world are scripted objects that present multiple objects for sale. An avatar may load a vendor with a series of salable items, which may then be viewed successively by shoppers. Should the shopper wish to purchase, scripted buttons on the vendor can complete the transaction.

Virtual businesses often use vendors to overcome prim limits on their parcel. Should a business owner wish to sell 500 products but have only 200 free prims available, there are few options other than resorting to vendors. Vendors enable sales to occur without consuming scarce parcel prims. However, there are issues with vendors from a customer’s point of view, some of which are a serious detriment to effective sales:
• Vendor user controls are quite varied, and unfamiliar controls mean a lower probability of sales
• Vendors often require shoppers to repeatedly click through long sequences of products in order to (possibly) find their desired item. Each click sequence is typically quite slow to complete, most often due to the time required to load the product’s image texture. While some vendors intelligently pre-load upcoming textures, the process is still awkward, and many impatient customers simply give up and move on
• Products are often shown only as an image, whereas rezzed objects can be viewed fully in three dimensions, perhaps with animated actions. Certain types of products cannot be properly inspected by image only, and thus sales are less likely
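The texture pre-loading trick mentioned in the second bullet can be sketched as follows. Here `load_texture` is a stand-in for the slow asset fetch, the class itself is illustrative (a real vendor would be an LSL script), and the sketch assumes at least two products:

```python
from collections import deque

loaded = set()

def load_texture(texture_id):
    """Placeholder for the slow, asynchronous texture fetch."""
    loaded.add(texture_id)

class Vendor:
    def __init__(self, products):
        # Rotate through (name, texture_id) pairs; needs at least two entries.
        self.products = deque(products)
        load_texture(self.products[0][1])  # texture for the current product
        load_texture(self.products[1][1])  # pre-load the next one

    def next_product(self):
        self.products.rotate(-1)
        load_texture(self.products[1][1])  # stay one texture ahead of the shopper
        return self.products[0][0]

v = Vendor([("Hat", "tex-hat"), ("Wings", "tex-wings"), ("Boots", "tex-boots")])
print(v.next_product())  # Wings
```

By the time the shopper clicks "next," the upcoming texture has already been requested, which is exactly the latency the bullet above complains about.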

It would appear that vendors should only be used as a last resort, but in spite of the issues above there is another reason business owners often choose vendors. The vendor object’s script provides a layer in which missing business functions can be provided. The most sophisticated vendors provide advanced functions, well beyond the standard click-to-buy virtual business protocol. Advanced features might include an ability to rapidly deploy new products across multiple sales locations or tracking product sales in a manner independent from the confusion of the avatar transaction log.




Some virtual business owners have come to depend on such advanced sales systems, and the existence of such vendor systems is therefore a symptom of the deficiency of standard protocols within the Second Life economic platform. It is likely that additional ideas for protocol extensions may be found by inspecting the most popular features provided by sophisticated vendor systems.

Conclusion

It is clear that the development of large-scale virtual businesses is discouraged by these factors. If large-scale virtual business activity is desired, then these issues and others must be addressed. Several common themes recur within the potential solutions above, suggesting that these may be priority items:
• Enhancing object permissions to be more flexible
• Establishing a business entity capable of holding salable objects and property, and of being jointly owned by several avatars
• Tagging of objects
• Exposure of for-sale object properties via XML

While a significant amount of virtual business activity currently takes place, it appears that business efficiency could be raised through the introduction of additional standards and features for the virtual world. Further research in this area should take place, ultimately resulting in an implementation within Second Life or similar virtual worlds that enables virtual business to thrive.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Synthetic Excellence: Standards, Play, and Unintended Outcomes. D. Linda Garcia and Garrison LeMasters, Georgetown University

Abstract

Given the growing complexity and interdependence of global networks, efforts are being devoted to promoting interoperability and an open network environment. While supporting the overall goal of interoperability, this paper sounds a cautionary note. It argues that the value of standards is contextually based. The paper contends that, as standards efforts become increasingly focused on the upper layers of the internet, care should be taken to ensure that appropriate metrics are adopted to determine the costs and benefits of these standards with respect to other realms of life. Focusing on the highest-level applications in particular, this paper examines current efforts to create standards across virtual worlds, using material from the MPEG-V working group as a case study. Advocates for these standards foresee clear economic benefits for producers and maintainers of virtual worlds, as well as for their inhabitants. We argue that such faith in the predictable outcomes of standards betrays a tendency to think of virtual worlds as the intentional outcome of rational design, as well as to misapprehend the roles of diversity and play in discrete environments. We question this narrow economic perspective. We contend that virtual world standards can only beget unpredictable outcomes, which will affect not only relationships between worlds but inevitably those within communities. To identify the costs and benefits of standards in these complex environments, all of these relationships must be considered. As importantly, we argue that virtual diversity, like biological variety, is inherently beneficial to users of synthetic worlds. To realize the benefits of what Sutton-Smith calls “the potentiation of adaptive variability,” we contend that what is needed is not standards across virtual worlds but rather a broad diversity of synthetic, discrete ecosystems.

Keywords: standards; evolution; play; interoperability; diversity.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Synthetic Excellence 4

Synthetic Excellence: Standards, Play, and Unintended Outcomes. D. Linda Garcia and Garrison LeMasters, Georgetown University

In today’s increasingly networked society, more efforts are being devoted to promoting interoperability and an open network environment (Libicki, 2000). This growing enthusiasm for interoperability is understandable. One need only consider the economics of networks. Given the interdependencies within a network, components must work together in order for the network to function effectively. As importantly, interdependencies give rise to positive network externalities, insofar as the value of a network increases along with the number of users and applications (Shapiro and Varian, 1998; Varian, Shapiro, and Farrell, 2005). Interoperable standards are also increasingly called for, given the growing complexity and interdependence of the globally networked economy (Axelrod and Cohen, 1999). In the future economy, standards will not only serve their traditional functions of achieving efficiency, facilitating coordination, and executing control; as importantly, they will determine the structure of the ‘playing field’ on which networked transactions take place (Garcia, 2004).

Many of those advocating for interoperability have placed their hopes on the internet, conceiving it as an open, end-to-end network that could seamlessly transmit information regardless of its source, the nature of the information, its means of transmission, and the user. To this end, the internet’s architects designed the network so that most of its intelligence and control functions extended outward to the users and peripherals at the edges of the network (Computer Science and Telecommunications Board, 1994). The internet’s end-to-end architecture did not solve the problem of interoperability for long, however. Given the commercialization of the internet, and the growing diversity of its users, the consensus on behalf of the end-to-end architecture soon began to unravel.
To make the most of interconnection, business users needed to enhance their services with a variety of additional functions (Blumenthal and Clark, 2000). Thus, if the internet is to serve effectively as a commercial platform, additional, higher-level standards in support of middleware and software applications will be required. Not surprisingly, therefore, recent decades have witnessed a sharp rise in the number and types of standards forums being established (Libicki, 2000; Werle, 2001; Garcia, 2004; Garcia, Leickly, and Willey, 2005).

While supporting the overall goal of interoperability, this paper provides a cautionary note. It argues that the value of standards is contextually based. Thus, for example, while interoperability may be highly valuable in a purely economic/commercial context, it might in fact engender some unintended, negative consequences in the political and cultural realms. On this basis, the paper contends that, as standards efforts become increasingly focused on the upper layers of the internet, care should be taken to ensure that appropriate metrics are adopted to determine the costs and benefits of these standards with respect to other realms of life.

Employing an interdisciplinary approach, this paper takes a first step in exploring these issues. Focusing on the highest-level applications in particular, it examines current efforts to create standards across virtual worlds, using material from the MPEG-V working group as a case study. Advocates for these standards foresee clear economic benefits for producers and maintainers of virtual worlds, as well as for their inhabitants (Sivan, 2008). We argue that such faith in the predictable outcomes of standards betrays a tendency both to think of virtual worlds as the intentional outcome of rational design and to misapprehend the roles of diversity and play in discrete environments. We question this narrow economic perspective. Arguing that a metaverse, like all worlds, is highly complex, we contend that virtual world standards, ranging from EULAs to the software code itself, can only beget unpredictable outcomes, which will affect not only relationships between worlds but inevitably those within communities. To identify the costs and benefits of standards in these complex environments, all of these relationships must be considered (Steinkuehler, 2004). As importantly, we argue that virtual diversity, like biological variety, is inherently beneficial to users of synthetic worlds. To realize the benefits of what Sutton-Smith (1997) calls “the potentiation of adaptive variability,” we contend that what is needed is not standards across virtual worlds but rather a broad diversity of synthetic, discrete ecosystems.

To make our case, we proceed as follows. First, we characterize standards and describe their role in society from the perspective of complex adaptive systems. Second, we look at how, from an historical perspective, formal standards and standard setting have evolved, emphasizing their link to the ascent of technological artifacts, with the consequence that standards development concerns have generally been skewed towards relatively narrow economic criteria such as cost, competitiveness, and efficiency. Next, focusing on the case of MPEG-V, we show how this trend is being replicated today with respect to the development of standards for virtual worlds. This, we conclude, is an alarming trend that could give rise to a number of unfortunate and unforeseen consequences.
To make this point, we look at the unique (some might say sacred) role of games in the realm of culture, which allow mankind both to generate and to adapt to a changing environment. We conclude that designing play environments based solely on economic criteria might seriously undermine the innovative and adaptive role of play, as well as the evolution of diverse cultures.

Standards and Their Role in Complex Adaptive Systems

To fully appreciate the long-term development of standards for virtual worlds, it is necessary first to define standards and second to characterize their general role in society. In this paper, we focus on the role that standards play as interfaces between actors at all layers of a complex adaptive system, facilitating interconnection and interaction, and thereby fostering the generation of emergent properties and the evolutionary adaptation of the system itself.

What Do We Mean by Standards

Standards are specifications that define the relationships between the parts of any given whole. As such, standards are the rules of the game, bounding the system as well as providing both affordances and constraints to the actors, components, and nodes within it. Although in the modern era we have come to think about standards in technical terms, they are first and foremost the building blocks of the social order, itself a network of networks (Kontopoulos, 1993; Sawyer, 2005; Beinhocker, 2008). For, in any given context, standards constitute an agreed-upon set of meanings, scripts, and rules that guide behavior and govern relationships. Embodying critical information in highly compressed and abbreviated formats, they greatly simplify the environment. Signaling opportunities and constraining choices, standards allow cooperation and coordinated behavior to take place (Garcia, et al., 2005).



Consider, for example, the role of language and simple gestures. Based on a common understanding, they provide the shared frame of reference and sense of reality needed for intimate relationships and the establishment of common goals. Similarly, cooperation among individuals who are engaged in interdependent activities is greatly facilitated when people do not act randomly, or on a trial-and-error basis, but rather conform to a shared set of expectations embodied in socially constructed roles (Katz and Kahn, 1978). Likewise, organizations gain greater access to resources and reduce their transaction costs when they adhere to standardized rules and procedures institutionalized in their environments. In so doing, organizations themselves become standardized over time, as the prevalence of bureaucratic forms and structures today clearly attests (DiMaggio and Powell, 1991). In the realm of technology as well, standards specifications and protocols add value to system components by allowing them to interconnect and interoperate in a transparent and seamless fashion (Garcia, Kale, and Danish, 2007).

By providing an overarching and common point of reference, standards help to integrate social systems. Even more important, by serving as an interface across boundaries and between and among different actors in complex systems, standards afford a mechanism for interconnection and feedback to take place, so that innovative and adaptive behaviors can emerge. To better appreciate this role, we need to look more closely at the nature and importance of complex adaptive systems.

Standards and Complex Systems

The term complex adaptive system is derived from complexity theory, the origins of which can be traced back to ideas and propositions associated with a broad array of disciplines, including mathematics, biology, psychology, physics, philosophy, and sociology.
Although complexity analysis has yet to take the form of an all-encompassing, agreed-upon body of theory, the notion of a complex adaptive system, a term coined by John Holland (1995) and Murray Gell-Mann (1994), has itself been very fruitfully employed by a number of diverse scholars in far-ranging fields (Kauffman, 1995; Sawyer, 2005; Epstein, 2006; Beinhocker, 2006; Batty, 2007; Dennard et al., 2008).

Surveying this diverse literature, we can best characterize complex adaptive systems by virtue of a set of common attributes that have typically been ascribed to them. Accordingly, a complex adaptive system can be said to comprise a number of interdependent, heterogeneous actors whose actions affect the behavior of all others. As importantly, because actors operate according to their own unique scripts and roles, the outcomes of their interactions are nonlinear and therefore unpredictable (Kauffman, 1995). Nonetheless, the system as a whole is emergent; changes and interactions, which are generated from the bottom up, give rise to ‘self-organization’, whereby outcomes at the macro level transcend individual actions, so that they cannot (as in linear systems) be traced back to them (Kontopoulos, 1993; Monge and Contractor, 2003; Beinhocker, 2005; Sawyer, 2005).

The indeterminateness and flexibility associated with complexity make it possible for complex adaptive systems to evolve and adapt over time. In fact, as Beinhocker (2005) claims, complex adaptive systems are, by their very nature, evolutionary systems. As such, they are ideally suited to learn over time. Learning takes place when actors at the micro level strive to enhance their fitness level with respect to the context in which they operate. As Monge and Contractor (2003) describe, actors “follow rules that explicitly and sometimes consciously seek to improve their fitness in terms of performance, adaptability, and/or survival.” In so doing, they change the fitness levels of other actors as well as the fitness landscape of the system itself, that is to say, the macro criteria by which actors in that system are evaluated (Kauffman, 1995; Beinhocker, 2005). It is in this way that actors and the system co-evolve.

Viewed within the context of complex adaptive systems, standards can be considered rules of the game insofar as they help define the fitness level, explicating the criteria for success in any given context. Moreover, standards are, like all norms and institutions, socially constructed, emerging and evolving through the interplay of social interactions, social institutions, and social norms, be they cultural, political or economic. Embedded in language, artifacts, scripts, and repertoires, standards help actors to carry out their activities and pursue their goals. As well, employing standards for their own unique purposes, actors redefine them over time.

Not surprisingly, standards have proliferated and gained importance as societal activities have become more complex (Beniger, 1986). As Emile Durkheim (1984) noted, increased specialization and a deeper division of labor generated the need for greater integration and control, and standards provided one answer. As described below, the growing demand for standards, accompanied by unprecedented technological advances, led to the specialization of the standards-setting process itself, and with it a much more ‘generalized,’ economic set of criteria for standards evaluation. It is these technologically based criteria that we question in the case of virtual worlds.
Formalizing Standards Through Standardization Processes

That the momentum behind formal standardization processes and the shift to a focus on economic criteria should occur together with the rise of industrial technologies should come as no surprise. The idea of Progress through industrial production was at the heart of the American dream (Smith and Marx, 2007). And technical standards were essential for achieving it. Most importantly, standardization allowed for interoperable parts, which made large-scale, rapid, precision manufacturing possible.

Consider, for example, the case of mass production and the specialization associated with it. With specialization and a deepening of the division of labor, tasks became more interdependent, requiring greater cooperation and information exchange. As noted by Harold Williamson:

Chief among the other elements in the pattern of mass production is the principle of standardization. Stemming from the rudimentary division of labor, standardization involved the continuous pursuit, and progressive realization, of the uniformity of the materials, operations and products of industry, which made possible the future division and mechanization of labor (Williamson, 1951).

The relationship between standards and mass production was self-reinforcing. Further advances in precision manufacturing required the development of machine tools and precision gauges, which in turn further drove the need for standards and standard measures (OTA, 1992).




With the growing demand for and increased stakes in standards, formal organizations were established to develop them. Generating their own procedures, communication genres, and social identities during their ongoing, day-to-day interactions, standards-setting organizations took on a recognizable life of their own. Over time, these organizations developed a set of structural practices unique to their institutional space, as well as a set of fitness criteria by which to evaluate and select standards (OTA, 1992).

Despite the diversity of organizations within the standards-setting environment, standards became associated with the economy, and the criteria for determining the fitness of standards converged around economic variables. Included among these fitness criteria, for example, are prospects for scale and scope economies, reduced transaction costs, lower prices, enhanced competition, competitive business strategies, innovation, and positive externalities. This techno-economic emphasis is understandable, given the industrial context in which formal standardization emerged, together with the Government’s emphatic belief that standards should be developed in the marketplace by the private sector. Thus, most of the participants in standards processes have been industrial players. Moreover, much of the thinking about standards development has taken place within the relatively narrow discipline of economics (Landis, 1987; Farrell and Saloner; Besen and Farrell, 1994).

Serving not only to regulate behavior but also to constitute its very meaning, standards and standards-setting bodies are a major source of power in society. For this reason, how and by whom standards are defined, and the fitness criteria used to evaluate them, is of great import, be it with respect to day-to-day social interactions or the architectural framework that defines a technology. Thus, standards-setting processes must not only be efficacious, they must also be legitimate.
Moreover, they must be suitable to the context at hand. While economic fitness criteria for standards have served well in governing economic interactions in the marketplace, we should not presume that market criteria are appropriate for the realm of culture and games. As described below, just as the advancement of technology provided the momentum for standardization in the industrial world, so too it is now fostering standards development in the realm of virtual worlds and video games, two realms which, although theoretically and practically distinct in some regards, are, for the purposes of this paper, both understood as sites of play. Caution is warranted at this point, however. As Brown and Duguid (2002) have pointed out with respect to the design of shopping "knowbots," exclusively economic criteria often constitute aberrant simplifications and distortions of life. These designs can have far-reaching implications because, as Winner (1986) points out, technologies typically become 'forms of life,' taking on a life of their own. Hence, as Winner admonishes, we must not be technological somnambulists in the face of new technology. MPEG-V is a case in point.

MPEG-V

In a brief think-piece entitled "Real Virtual Worlds SOS (State of Standards) Q3-2008," published in the second issue of the Journal of Virtual Worlds Research, Yesha Sivan makes a case for extending standards across virtual worlds. These synthetic environments, he enthuses, "are destined to become big, in the sense of meaningful, influential, and making money [sic] for various current and new players" (2).


But as a market, this imagined "metaverse" of worlds is inefficient: decrying individual and proprietary attempts to build and populate virtual worlds, Sivan argues that "the common public good calls for a connected system like the Internet where different forces can innovate in particular spots of the value chain" (2). Standards, he insists, are desperately needed. Sivan's concern for the common public good is laudable but necessarily prompts the question: What is the common public good vis-à-vis online synthetic worlds? And are standards perforce the best way to safeguard that imagined good?

In this section, we look briefly at the documents surrounding the draft schema to which Sivan refers: the proposed standards for information exchange within virtual worlds now being developed by the Moving Picture Experts Group Virtual Worlds Standard (MPEG-V) working group. With attention paid solely to the technology itself, and with little apparent regard for the sociotechnological context of its project, the MPEG-V working group imagines the public good in a narrow, reductive, and determinist fashion. Looking briefly at its proposal for a metaverse-wide avatar standard, we ask whether imagined gains in efficiency will come at a dear cost.

MPEG, an ISO/IEC working group, has been the source of many familiar standards, and argues that it is important to standardize intermediate formats and protocols for "information exchange with virtual worlds" in the areas of "interfaces between virtual worlds" and "interfaces between virtual worlds and the physical world." Its working framework consists of three areas: The first part will describe an overall architecture that can be instantiated for all the foreseeable combinations of virtual world and real world deployment. The second part will allow for the interchange of characteristics between virtual worlds, taking native formats and scalability into account.
The third part will allow for the interfacing of sensors and actuators with the virtual world, taking native formats into account. According to the "Summary of MPEG-V": [T]he 'Information exchange with Virtual Worlds' [1] project intends to provide a standardized global framework and associated interfaces, intermediate formats definitions and the like, to enable the interoperability between virtual worlds (as for example Active Worlds, Second Life, IMVU, Google Earth, Virtual Earth and many others) and between virtual worlds and the real world (sensors, actuators, vision and rendering, robotics (e.g. for revalidation), (support for) independent living, social and welfare systems, banking, insurance, travel, real estate, rights management and many others).

But consideration of the brief document's rhetoric reveals a process in which technological concerns trump social ones. While the document begins by characterizing virtual world technologies as components of complex social and cultural practices like "entertainment, education, training, [. . .] work, reliving the past," and so on, any due consideration of the social nature of these systems is quickly abandoned in favor of an economic vocabulary. Citing the growing ubiquity of online gaming and virtual worlds in our lives, for example, the document assumes a singularly economic posture: "Games will be everywhere and their societal need is very big," the authors explain, concluding that "it will lead to many new products and it requires many companies." Driven by this market logic, the document's argument echoes early twentieth-century calls for standards, as it emphasizes "efficiency," "fast adoption," and the need for "better tools."



While there is nothing surprising about justifying the move toward standards in exclusively economic terms, we contend that the argument for technological efficiency conveniently ignores the messy social contexts within which we adopt, make use of, and are shaped by these tools. Once economics becomes the dominant logic, imagined demand is met by imagined supply and the question of the social is abandoned: "It is envisaged that the most important developments will occur in the areas of display technology, graphics, animation, (physical) simulation, behavior and artificial intelligence, loosely distributed systems and network technology." Technology is "envisaged" as though in a vacuum. When human beings finally resurface in the MPEG-V's considerations, they are reduced to mere consumers, as the user is given tools to preserve "value invested" in his avatar. As we have suggested, this rhetoric is historically part of the logic of standards. We do not doubt that everyone who has contributed to the MPEG requirements discussion is enthusiastic about the social and cultural opportunities these technologies offer. But whatever their intentions, as the Avatar Characteristics XSD repeatedly demonstrates below, the industrial-era emphasis on efficiency, coordination, and control means that social and cultural criteria are effectively divorced from technical considerations.

The Avatar Characteristics XSD

The Avatar Characteristics XSD (XML Schema Document) is the core technology by which the MPEG-V proposes to standardize virtual worlds: not by standardizing the worlds per se, but by creating a comprehensive document that standardizes aspects of the player's in-world representative, her avatar, in minute detail, across the categories of appearance, animation, communication skills, personality, and control.
It seems likely that the MPEG working group approached the matter in a fashion it believed would allow each world to preserve its unique identity: these standards do not address the worlds themselves, only the movement between them. But what are worlds other than the people who comprise them? And what are the societies that comprise these worlds, other than bodies of rules and norms? Of course, there are countless combinations of characteristics available within the schema as defined. But no easily generalized and finite descriptive schema could possibly account for the infinitely malleable schemas of specific worlds' discrete descriptions of their avatars. The schema is a vector for rigidity and the end of adaptability in virtual worlds and online games. We believe that the avatar schema imposes unwelcome finitude on every world in the metaverse. The proposed XSD stipulates that an avatar's gender, for example, is to be either Male, Female, or Undefined; there are no other options. Absent any requisite biological real-world referent (both men and women frequently play avatars opposite their own genders), the stipulation of a static, binary gender seems poorly conceived, and illustrates the limits these rules immediately impose. Suddenly, worlds like those depicted in novels like Ursula K. Le Guin's The Left Hand of Darkness or Jeffrey Eugenides's Middlesex become entirely unthinkable. Whatever insight these worlds offered readers of fiction becomes lost to online worlds. The in-world recreation of divinities like Ardhanarisvara and Hermaphroditos becomes impossible. And intersex identity is consigned to the non-category of "Undefined."
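The rigidity of such an enumeration can be made concrete with a short sketch. The three values mirror the draft schema as described above; the validator function and its name are our own hypothetical illustration, not part of MPEG-V.

```python
# Illustrative sketch only. The three enumeration values mirror the draft
# Avatar Characteristics XSD as described in the text; the validator itself
# is hypothetical and not part of MPEG-V.
ALLOWED_GENDERS = {"Male", "Female", "Undefined"}

def validate_gender(value: str) -> bool:
    """Accept only the three values the draft schema permits."""
    return value in ALLOWED_GENDERS

# A conventional avatar validates...
assert validate_gender("Female")
# ...but any richer identity fails and is forced into the residual category.
assert not validate_gender("Androgyne")   # e.g., a Gethenian from Le Guin
assert not validate_gender("Intersex")
```

A world whose avatars carry a gender outside the enumeration must either map it to "Undefined" or fail schema validation; either way, the distinction is lost in transit.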


In terms of their overt racial characteristics, avatars are described by a single element in the XSD called SkinPigment, which comprises six named elements: Very Light, Light, Average, Olive, Brown, and Black. Again, given the absence of real-world referents, these seem strangely arbitrary, and smack of bias: the element "Average" recalls an era when the pink Crayola crayon was labeled "Flesh." In contrast to the nominal elements of SkinPigment, though, consider the complexity of the avatar's hair specifications. Where skin color is one of merely six named elements, hair is defined by no fewer than 32, including amount of white hair (WhiteHair), amount of blonde hair (BlondeHair), and amount of red hair (RedHair), as well as a variable hair color (HairColor) that can be set to any of 65,000 different values.

Defined Animation elements dictate the actions that the avatar will be able to port from world to world. As defined by the XSD, there are several dozen predefined actions, including one for yoga, one for surfing, and one for throwing up. At the same time, there are separate, predefined elements reserved for animating the avatar as she aims a bow and arrow, as she aims a handgun, as she aims a rifle, and as she aims a bazooka. For greeting another avatar, there are eight animations defined by the XSD; for dancing, there are eleven; for fighting with another avatar, there are at least fifty-eight.

In sum, we see these definitions for avatars as arbitrary and determinist. Invented to satisfy commercial needs, they are revealed by even a cursory review to be inflexible in terms of gender and race, two enormously complex and variable categories of human identity. Further, resources within the XSD seem predisposed to violence, while resources devoted to personal expression receive considerably less attention. We are under no illusions about the frequently violent nature of activity in virtual worlds.
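The asymmetry described above can be quantified in a short sketch. The element names (SkinPigment, HairColor) and the six pigment values come from the draft schema as quoted; treating the roughly 65,000 hair-color values as a 16-bit range is our assumption, and the Python rendering is purely illustrative.

```python
# Illustrative sketch; element names and pigment values are from the draft
# schema as quoted in the text. Treating HairColor's ~65,000 values as a
# 16-bit range is our assumption.
SKIN_PIGMENT = ("Very Light", "Light", "Average", "Olive", "Brown", "Black")
HAIR_COLOR_VALUES = 2 ** 16  # 65,536

ratio = HAIR_COLOR_VALUES // len(SKIN_PIGMENT)
print(f"{len(SKIN_PIGMENT)} skin values vs {HAIR_COLOR_VALUES:,} hair values")
print(f"hair color carries ~{ratio:,}x the resolution of skin color")
```

On this reading, hair color is specified at roughly ten-thousand-fold finer resolution than skin color, which is the disproportion at issue.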
This standard, however, privileges acts of violence over any other.

The Mangle of Practice, The Mangle of Play

In addition to addressing the explicit intent of the MPEG-V, it is instructive to consider the issue of unintended outcomes. Within and around game worlds and virtual realities, there has always been intense conversation about the digital rulesets that shape them. Independent wikis, blogs, and chat boards like wow.com and massively.com are dedicated to unpacking, cataloguing, and debating the hard-coded rules that give form to some of the more populous worlds, like Blizzard's World of Warcraft, Makena's There.com, and Linden Lab's Second Life. Even mainstream sites like CNET.com and engadget.com regularly cover virtual world software client updates and the debates among players that even minuscule rule changes can engender. Players' focus on the rules themselves is understandable, but wrongheaded: no matter how informed and finely grained these conversations, the exclusive focus upon rule sets ignores the emergent complexity of game environments (Steinkuehler, 2004). In a careful examination of the unanticipated effect of Chinese gold farmers in the massively multiplayer online role-playing game (MMORPG) Lineage II, Steinkuehler observes that in the virtual environment, hard-coded rules represent merely one system in a complex ecosphere. "In-game communal norms," she writes, "amplify, enhance, negate, accommodate, complement, and at times even ignore hardcoded game rules" (200). Borrowing from Pickering (1995), Steinkuehler argues that synthetic worlds represent a "mangle of production and consumption — of human intentions, [. . .] material constraints and affordances, evolving sociocultural practices, and brute chance" (Steinkuehler, 2006).


Thus, the injection of standards into environments like these is likely to meet with unintended consequences. Steinkuehler observes that "the ways in which a game gets played out [or a virtual world is used] on the ground level are not easily determined a priori by the game design, rules, EULAs, or whatnot. They shift and evolve, often in unpredictable directions" (211). Steinkuehler refers to this phenomenon as "the mangle of play": This is why we need to understand the emergent game cultures within virtual worlds and not simply the designed objects that hit the shelves. This is also why we might consider the legal regulation of games as not merely a matter of intellectual property rights... but also perhaps as the philosophical and ethical issue of self-governance of societies that inhabit virtual kingdoms that are corporate owned but player constituted (211).

Beyond the matter of narrow determinism and the dilemma of unanticipated outcomes, however, there is a third issue that demands consideration. We have suggested that the proposed avatar schema is unnecessarily rigid, and that this inflexibility not only precipitates the diminution of player choices but also threatens the end of adaptability within virtual worlds. It is important to address the matter of adaptability and to suggest why it is such a significant aspect of online games and synthetic environments.

Adaptability, Play, and the Sacred

The current discourse on virtual worlds and videogames is blunted by our misapprehension of these technologies as banal sites of worldly reproduction and mimesis. Corporate interest in so-called "serious games," advertisements touting hyper-realistic graphics and lighting algorithms, debates over the psychic effects of on-screen violence: all of these discourses ignore the intrinsically ludic, or playful, nature of these environments. We play in these worlds.
As sites of "play," these synthetic worlds temporarily separate the user from quotidian experience, exchanging the vast array of social rules and norms under which we all live daily for a streamlined, arbitrary, temporary rule set. Play is the only suitable way to engage with games and virtual worlds: because they are voluntary and delimited, they are sustained solely by the free will, or the "lusory attitude," of the participants (Suits, 2005). In the West, any serious consideration of play is a challenge. Plato spurned it; Rome condemned it; Calvin taught that work, not play, was the will of God; industrialization disciplined its workers, relegating play to the weary after-hours. As inheritors of a Puritan work ethic, we are suspicious of play because, for all its volume and bombast, it remains ephemeral and, by appearances, inconsequential. Play is the activity of children and the idle. Beyond a little exercise or improved eye-hand coordination, games are non-productive: play is its own reward, "an occasion of pure waste" (Caillois, 2001). And yet, play is imbricated with the sacred, the linkage buried in our language and in our games themselves. The role of dice in divination, for example, lies latent in the word "die," plural "dice," from the Low Latin dadus, meaning "given," or "that which is given by the gods." Before the modern invention of "random outcomes," humankind regarded the roll of the dice as an opportunity for the gods to make their wills known: tools like dice, yarrow stalks, astragali (knucklebones), and dominoes were the sacred, subtle instruments of faith. In the Norse tradition, it is Odin, the All-Father, who invents dice for his children so that he may better communicate with them. In Greece, it is Hermes who invents them: Hermes, who is not only the messenger of the gods but later the patron of gambling.

The unifying sacrality of play is always a localized, situated phenomenon. In his book Homo Ludens, Dutch historian Johan Huizinga tells us that "Human play belongs to... the sacred sphere." A sacred site, he writes: [C]annot be formally distinguished from the play-ground. The arena, the card-table, the magic circle, the temple, the stage... are all in form and function play-grounds, i.e., forbidden spots... within which special rules obtain. All are temporary worlds within the ordinary world dedicated to the performance of an act apart (10). Sociologist Roger Caillois agrees. In religious ceremony, he writes: [A]n enclosed space is delimited, separated from the world and from life. In this enclosure, for a given time, regulated and symbolic movements are executed, which represent or reincarnate mysterious realities in the course of the ceremonies... [This is] just as in play, [where] the opposing qualities of exuberance and regimentation, of ecstasy and prudence, and of enthusiastic delirium and minute precision, are present at the same time (207-8). It is in this rarefied atmosphere, freed from the onerous burden of mere being, that men and women can pose the question, "What if?" Huizinga and Caillois argue for the play of an archaic past as fundamental to the instantiation of civilization itself. Huizinga tells us that "culture arises in the form of play [. . .] it is played from the very beginning" (46). Our denigration of play is recent, he says, and dangerous. Recent scholarship has taken Huizinga's tacitly evolutionary framework to its logical conclusion. Looking carefully at the way we play, and the way we talk about play, Brian Sutton-Smith (1997) sees in play an imitation of the evolutionary process itself, in which mankind models his own biological character (229).
Drawing heavily on the work of Stephen Jay Gould, Sutton-Smith argues that play's quirky, redundant, flexible nature is the key to evolutionary success. He writes, "I define play as a virtual simulation characterized by staged contingencies of variation, with opportunities for control engendered by either mastery or further chaos" (231). It is at this powerful intersection of "mastery and chaos," of "ecstasy and prudence," of abandon and control, that societies change, adapt, and thrive. This sacred ludic tension is where innovation begins. To impose the arbitrary limitation of standards across all virtual worlds is perforce to reduce the variability of these virtual ecosystems, and thereby to impoverish the excellence (Gould, 1991) of play's adaptive potentiation.


Bibliography

Axelrod, R. and Cohen, M. D. (2002). Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York: Basic Books.

Batty, M. (2007). Cities and Complexity: Understanding Cities with Cellular Automata, Agent-based Models, and Fractals. Cambridge: MIT Press.

Beinhocker, E. D. (2006). The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics. Boston: Harvard Business School Press.

Beniger, J. (1986). The Control Revolution: Technology and the Economic Origins of the Information Society. Cambridge: Harvard University Press.

Berkman Center for Internet and Society. (2005). A Roadmap for Open ICT Ecosystems. New Haven: Berkman Center.

Besen, S. and Farrell, J. (1994). Choosing how to compete: Strategies and tactics in standardization. Journal of Economic Perspectives, 8 (2): 117-131.

Blumenthal, M. and Clark, D. D. (2001). Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world. ACM Transactions on Internet Technology, 1 (1): 70-109.

Brown, J. S. and Duguid, P. (2002). The Social Life of Information. Cambridge: Harvard Business School Press.

Caillois, R. (2001). Man, Play, and Games. Urbana: University of Illinois Press.

Cargill, C. (2002). Uncommon commonality: A question for unity in standardization. The Standards Edge (S. Bolin, ed.). Ann Arbor: The Bolin Group. Chapter 3.

Castronova, E. (2007). Exodus to the Virtual World. New York: Palgrave.

Cochrane, R. C. (1996). Measures for Progress: A History of the National Bureau of Standards. Washington, D.C.: National Bureau of Standards.

Computer Science Research Board. (1994). Realizing the Information Future: The Internet and Beyond. Washington, D.C.: National Academy Press.

Congressional Research Service, Science Policy Division. (1974). Voluntary Industry Standards in the United States: An Overview of Their Evolution and Significance for Congress.
Report to the Subcommittee on Science, Research, and Development of the Committee on Science and Astronautics, US House of Representatives, 93rd Congress, 2nd session.

Dennard, L., Richardson, K. A., and Morcol, G. (2008). Complexity and Policy Analysis: Tools and Methods for Designing Robust Policies in a Complex World. Goodyear: ISCE Publishing.

DiMaggio, P. J. and Powell, W. W. (1991). The iron cage revisited: Institutional isomorphism and collective rationality. The New Institutionalism in Organizational Analysis (W. Powell and P. DiMaggio, eds.). Chicago: University of Chicago Press. 63-82.

Durkheim, E. (1984). The Division of Labor in Society. New York: The Free Press.

Epstein, J. M. (2006). Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton: Princeton University Press.



Farrell, J. and Saloner, G. (1987). Horses, penguins and lemmings. Product Standardization and Competitive Strategy (H. L. Gabel, ed.). Amsterdam: North Holland.

Gabel, H. L., ed. (1987). Product Standardization and Competitive Strategy. Amsterdam: North Holland.

Garcia, D. L. (2004). Standards for standard setting: Contesting the organizational field. The Standards Edge (S. Bolin, ed.). Ann Arbor: The Bolin Group.

Garcia, D. L., Leickly, B. L. and Wiley, S. (2005). Public and private interests in standard setting: Conflict or convergence. The Standards Edge: Future Generations (S. Bolin, ed.). Ann Arbor: The Bolin Group.

Gell-Mann, M. (1994). The Quark and the Jaguar: Adventures in the Simple and the Complex. New York: W. H. Freeman.

Gould, S. J. (1991). Wonderful Life: The Burgess Shale and the Nature of History. London: Penguin.

Holland, J. (1995). Hidden Order: How Adaptation Builds Complexity. Reading: Addison-Wesley.

Huizinga, J. (1995). Homo Ludens. Boston: Beacon Press.

International Standards Organization. (2009). WD2.0 of ISO/IEC 23005 MPEG-V, Avatar information. Maui.

International Standards Organization. (2008). Summary of MPEG-V. ISO/IEC JTC 1/SC 29/WG 11/N9901. Archamps, France.

Katz, D. and Kahn, R. (1978). The Social Psychology of Organizations. Second edition. New York: John Wiley and Sons.

Kauffman, S. (1995). At Home in the Universe: The Search for Laws of Self-Organization and Complexity. New York: Oxford University Press.

Kontopoulos, K. M. (1993). The Logics of Social Structure. New York: Cambridge University Press.

Libicki, M., Schneider, F., David, R., and Slomovic, A. (2000). Scaffolding the New Web: Standards and Standards Policy for the Digital Economy. RAND monograph. Retrieved from: http://www.rand.org/pubs/monograph_reports/2007/MR1215.pdf.

Monge, P. R., and Contractor, N. (2003). Theories of Communication Networks. New York: Oxford University Press.

Office of Technology Assessment. (1992). Global Standards: Building Blocks for the Future.
Washington, DC: US Government Printing Office.

Sawyer, R. K. (2005). Social Emergence: Societies as Complex Systems. New York: Cambridge University Press.

Shapiro, C. and Varian, H. (1998). Information Rules: A Strategic Guide to the Network Economy. Boston: Harvard Business School Press.

Sivan, Y. (2008). Real virtual worlds SOS (state of standards) Q3-2008. Journal of Virtual Worlds Research, 1 (2): 1-7.


Smith, M. R. and Marx, L., eds. (1994). Does Technology Drive History?: The Dilemma of Technological Determinism. Cambridge: MIT Press.

Steinkuehler, C. (2006). The mangle of play. Games & Culture, 1 (3): 1-14.

Suits, B. (2005). The Grasshopper: Games, Life, and Utopia. New York: Broadview Press.

Sutton-Smith, B. (1997). The Ambiguity of Play. Cambridge: Harvard University Press.

Varian, H., Farrell, J., and Shapiro, C. (2005). The Economics of Information Technologies: An Introduction. Cambridge: Cambridge University Press.

Weber, S. (2004). The Success of Open Source. Cambridge: Harvard University Press.

Williamson, H., ed. (1951). The Growth of the American Economy. New York: Prentice Hall.

Winner, L. (1986). The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

The Role of Interoperability in Virtual Worlds, Analysis of the Specific Cases of Avatars
By Blagica Jovanova, Marius Preda, Françoise Preteux
Institut TELECOM / TELECOM SudParis, France

Abstract

In this paper we present current trends in several activities related to avatars. We provide a detailed survey of the research literature on avatar appearance modeling, deformation control, and animation. We also introduce several standards, recommendations, and markup languages treating different aspects of avatars, from visual representation to communication capabilities. Finally, we briefly introduce the avatar-related developments of MPEG-V, a recent MPEG standard aiming to provide an interchange format for virtual worlds.

Keywords: 3D graphics; interoperability; virtual characters; modeling and animation; MPEG-4.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Interoperability in VWs 4

Human-like representation is a practice that runs deep in human history. Across the visual arts, including painting and sculpture, the representation of humans has been a subject of enduring interest, and both highly realistic and metaphoric representations of humans are now famous pieces of art. In the modern era, with the evolution of new technologies, the physical support for such representations has gradually changed. The first step in this evolution was achieved by cinema, which introduced the concept of time. As in traditional theatre, cinematic representations are no longer static; they evolve in time and move on the screen. The first 2D cartoons were then produced, bringing with them a new and revolutionary concept: animation. While depicting life in a static representation requires artistic skill, in animation it is possible to simulate lively objects or environments merely by adding motion. The difficulty here lies in the need to produce static representations at very dense time samples. The evolution of computer science solved some of these problems, turning digital synthetic content into a new form of art.

Within the large family of digital synthetic content types, avatars have a specific place. The reasons relate strongly to the psychological and sociological motivations (Georges, 2009) of humans to have a visual representation with which to position themselves with respect to others and to the environment. While this property was extensively exploited in the past in successful games, where the avatar both represents the player in the virtual world and serves as the interface with it, the trend is now increasingly present in recent developments of the Internet, widely known by the name Web 2.0. Here the user becomes part of the big picture: he or she is an active participant who can modify, append, comment on, and in general interact with the content. In many cases the user's presence in the digital world need not be visually signaled; only the effects of his or her intervention are observable (e.g., adding comments on a webpage or editing a section of a Wikipedia document). In other cases, with real-time requirements such as collaborative work or presence in 3D virtual worlds, the visual representation (i.e. the avatar) is a communication vector in itself: it helps in identification, differentiation, and identity protection. In addition, the avatar is a facilitator of communication, informing others about status and availability.
It is increasingly evident that the Internet will evolve from a repository of information into a dynamic and lively place where people communicate with each other and jointly interact with content, and where time, presence, and events become important. It will contain more and more functionalities, copying, extending, and enriching those of the real world and inventing new ones. In this context, having a visual representation of users in the form of avatars containing personal information, history, personality, skills, and so on becomes necessary. However, today's Internet (in the sense of its initial definition as interconnected sites) is not yet this lively place. There is no interoperability yet between different websites implementing Web 2.0 features, beyond a very thin layer of re-using IDs (e.g., OpenID) or aggregating different sources. On the other hand, 3D Virtual Worlds (3DVWs) have become a reality over the last few years. Initially conceived for social purposes, as a support for communication (i.e. chatting) and a means of signaling the presence and sometimes the mood of interlocutors, 3DVWs are now reaching a milestone: the technology for creating, representing, and visualizing 3D content has become available and widely accessible. One facilitator is the fast development of high-performing 3D graphics cards (by the likes of Nvidia and ATI) and their availability in ordinary computers, a trend driven by the powerful computer games market, making almost any Internet user a potential 3DVW player.


After an enormous step toward the awareness and democratization of virtual worlds, provided mainly by the marketing success of Second Life, VWs are now looking for sustainable business models. The most probable situation is that, in the near future, several virtual worlds will be available, offering complementary functionalities and user experiences. The issue of interoperability between them, or at least the re-usability of assets, avatars, and media content, will become more and more important. Standards for the representation of graphics and media assets are now available, MPEG being a very active community in providing tools for compressing them. MPEG-4, one of the richest MPEG standards in terms of functionalities, contains specifications of bitstream syntax for audio, video, 2D, and 3D graphics objects and scenes. It can be used as a base layer for ensuring interoperability at the level of data representation. However, representing the media alone is not enough to ensure interoperability between VWs. Recently, the MPEG committee identified additional needs and started a new work item, called MPEG-V (Information Exchange with Virtual Worlds), with the goal of standardizing metadata that, combined with media representation, will ensure a complete interoperable framework. Part four of this standard deals exclusively with avatars and virtual goods. In the last two decades, the research community has actively worked on developing tools for the creation, representation, animation, transmission, and display of avatars, and several standards and recommendations were created to represent graphics objects and, in particular, avatars. In Section Two we present a review of avatar-related research. Let us note that current VWs are populated with types of objects other than avatars, which are not directly driven by end-users. The modeling and animation of some of them (especially animals) can be very similar to the techniques used for human avatars.
However, behind this type of object there is no end-user, only computer programs to drive them. From the perspective of the current paper, an avatar is any 3D object that represents, and is driven by, an end-user. In Section Three we introduce several existing standards and markup languages and show why they cannot provide a complete solution for ensuring interoperability between VWs at the content-asset level. While some standards such as MPEG-4 or H-Anim offer a complete set of tools for representing geometry, appearance, and animation, they fail to attach semantics on top of them. On the other hand, several markup languages such as VHML (Virtual Human Markup Language) or HumanML only describe some high-level features (e.g. emotions, interactions) and do not offer a format for interchanging avatars between applications. Based on an analysis of existing VWs and popular games, tools, and techniques from content authoring packages, together with a study of different virtual-human-related markup languages, we derived a full XML description of avatars, currently retained in the standardization process of MPEG-V and introduced in Section Four. The main elements of the schema and its capability to represent content compatible with current VWs are presented. In the last Section we conclude our contribution and provide directions for future research.

State of the Art in Avatar Technologies

As for graphics assets in general, the most problematic and expensive operation when considering avatars is their creation. Day-to-day experience in observing human beings makes the human brain a powerful system, able to detect without effort any unnatural artifact of avatar modeling or animation. With respect to avatar appearance there are two main approaches: realistic and cartoon-like modeling. While the first is judged by its capability to copy reality, the interest of the second lies in deforming it (i.e. caricatures, strong appearance effects with the goal of emphasizing psychological characteristics). However, in both cases, the animation of the avatar must be as natural as possible. In the last twenty years, several models were proposed for animating avatars; the most promising are those that try to simulate the bio-mechanical structure of the real body, based on skeleton and muscle layers. In the remainder of this section we introduce the major trends in the avatar research literature with respect to modeling and animation.


Avatar Modeling

Two different courses can be chosen to build a virtual character, according to the desired appearance of the character (cartoon-like or realistic) and the technology available to the designer. On the one hand, the designer can interactively build the model’s anatomical segments and set up the model hierarchy. In addition to creating the geometry and texturing it (both representing the avatar’s appearance), it is also necessary to set up the skeleton and link it to the mesh. Several authoring tools and geometry-generating mechanisms make it possible to model a virtual character (3DSMax, Maya). The main drawback of this method is that the result is strongly dependent on the designer’s artistic skills and experience. In addition, the procedure is tedious and time-consuming. On the other hand, a faster and proven method is the use of 3D scanners. Contrary to computer-aided design, the aim of 3D scanning is to create an electronic representation of an existing object, capturing its shape, color, reflection, or other visual properties. In principle, 3D scanning is similar to a number of other important technologies (like photocopying and video capture) that quickly, accurately, and cheaply record useful aspects of physical reality. The scanning process can be structured into the following steps: acquisition, alignment, fusion, decimation, and texturing. The first step captures the geometric data of the 3D object using a dedicated scanning device. Depending on the type of scanner used, the execution of this phase can vary considerably: either a single scan is enough to capture the whole object, or a series of partial scans (called range maps) is needed, each covering a part of the object. In the latter case, range maps taken from different viewpoints have to be aligned, which is the task of the second step.
This procedure can be completely automatic if the exact position of the scanner during each acquisition is known. Otherwise, a manual operation is needed to input the initial placement, after which the alignment is performed automatically. Once aligned, the partial scans are merged into a single 3D model (the “fusion” step). Because 3D scanners produce a huge amount of data, a “decimation” step is required: for effective use of the model, one has to reduce the size of the acquired geometric information, especially for the less significant parts of the object. Decimation software can be based on edge collapsing (Ronfard, 1996) and error-driven simplification (Schroeder, 1992). The last step, “texturing”, is not mandatory for applications such as simulations in a virtual environment but, for a large array of objects, additional information about the real appearance of the object must be provided. This is usually achieved by texturing the final model with pictures taken during acquisition. The pictures are first aligned to the geometry (manually or automatically) and then mapped to the model. A recent trend, supported by the development of vision systems, consists of capturing real persons with one or several cameras and reconstructing or modifying an existing template by using real measurements. In the case of monocular images, such as in Hilton (1999), the obtained geometry is mapped onto a previously created model, providing a cheap and useful approach for automatic modeling. By using stereo or general multi-view systems, the 3D geometry can be recovered more accurately. One method consists of computing the disparity map from a stereo pair of images together with some local differential properties of the corresponding 3D surface, such as orientation or curvature. The usual approach is to build a 3D reconstruction of the surface(s), from which all shape properties are then derived.
Devernay (1994) proposed a method that computes the shape properties directly from the captured images. More recent techniques, such as Nebel (2002), can obtain both the geometry and the skeleton. When more cameras are used, as described in D’Apuzzo (1999), 3D least-squares matching techniques can be employed to obtain the geometry and skeleton.
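To make the disparity-based route concrete, here is a minimal, illustrative sketch (not from any cited system) of the standard rectified-stereo relation, where depth Z = f * B / d for focal length f (pixels), baseline B (meters), and disparity d (pixels); all names are hypothetical.

```python
# Illustrative sketch: converting a per-pixel disparity map from a
# rectified stereo pair into depth, using Z = f * B / d.
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Return a depth map (meters) for a disparity map (pixels)."""
    return [
        [focal_px * baseline_m / d if d > 0 else float("inf") for d in row]
        for row in disparity
    ]

disparity = [[16.0, 8.0], [4.0, 2.0]]  # pixels; larger disparity = closer
depth = depth_from_disparity(disparity, focal_px=800.0, baseline_m=0.1)
# depth[0][0] == 800 * 0.1 / 16 == 5.0 meters
```

Surface orientation and curvature can then be estimated from such a depth map, as the reconstruction-based approach in the text describes.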


Introducing anthropometry (the study and collection of human variability in faces and bodies) into computer graphics (Dooley, 1982) made possible the creation of a parametric model defined as a linear combination of templates. The basis is extracted from large databases of human measurements, such as the NASA Man-Systems Integration Standard (NASA, 1995) and the Anthropometry Source Book (NASA, 1978), and several methods for exploiting it have been proposed (Seo, 2002; DeCarlo, 1994). An alternative method consists of defining a default model a priori and declaring anthropometric parameters on it with which the model can be deformed; the deformation can be rigid, represented by the corresponding joint parameters, or elastic, which is essentially vertex displacement. A dataset relating the values of these parameters to the shapes of corresponding models is then created, and from this dataset interpolators are formulated for both types of deformation. Joint parameters and displacements of a new model are obtained simply by applying the interpolators to a template model with new measurements. In Seo (2003), instead of statistically analyzing anthropometric data, captured sizes and shapes of real people from range scanners are used directly to determine the shape corresponding to the given measurements.

In earlier published work, avatar motion models were based on a simplified human skeleton with joints. In the early nineties, one of the first challenging methods for creating the human skeleton was published in Magnenat (1991), based on previous research on the hand skeleton (Gourret, 1989). The authors observed that existing avatar skeletons were more suitable for robots than for humans, so a new skeleton layer was proposed. The trend was continued in Monheit (1991), with emphasis on providing more realistic effects for a torso that could be bent or twisted, and in Scheepers (1996) for forearm and hand pronation and supination.
A more recent model is detailed in Savenko (1999), based on initial investigations reported in Van (1998; 1999), where the focus is on improving the joint model and especially the knee kinematics. Realistic deformation was achieved by adding new layers on top of the skeleton, namely muscle, fatty tissue, skin, and clothing (Waters, 1989; Chadwick, 1989; Scheepers, 1996; Singh, 1995). In Scheepers (1997) and Wilhelms (1997), the muscle layer is linked to the skeleton and is based on the anatomy of skeletal muscles. In Chen (1992), a finite-element model is presented, able to simulate the force of a few individual muscles. In Singh (1995), a skin layer is attached to the skeleton layer, so more local effects become visible. Once the virtual character has been created, one should be able to change its postures in order to obtain the desired animation effect. The following section addresses the problem of virtual character animation and presents the main approaches reported in the literature.

Avatar Animation

Animating a virtual character consists of applying deformations at the skin level. The major 3D mesh deformation approaches can be classified into the following five categories:

• Lattice-based (Maestri, 1999). A lattice is a set of control points forming a 3D grid, which the user modifies in order to control a 3D deformation. Points falling inside the grid are mapped from the unmodified lattice to the modified one using smooth interpolation.
• Cluster-based (Maestri, 1999). Grouping some vertices of the skin into clusters makes it possible to control their displacements with the same parameters.
• Spline-based (Bartels, 1987). Spline and, in general, curve-based deformations deform a mesh according to the deformation of the curve.
• Morphing-based (Blanz, 1999). The morphing technique consists of smoothly changing one shape into another. This technique is very popular for animating virtual human faces from pre-recorded facial expressions.
• Skeleton-based (Lander, 1999). The skeleton is a hierarchical structure, and deformation properties can be defined for each element of this structure.
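The morphing-based category amounts to per-vertex linear interpolation between two registered shapes. A minimal sketch, assuming both shapes share the same vertex order (all names here are illustrative, not from any cited system):

```python
# Illustrative morphing sketch: blend each vertex between a source
# shape and a target shape with interpolation parameter t in [0, 1].
def morph(source, target, t):
    """source/target: lists of (x, y, z) vertices in matching order."""
    return [
        tuple(s + t * (g - s) for s, g in zip(vs, vt))
        for vs, vt in zip(source, target)
    ]

# Two-vertex toy "face": a neutral shape morphing toward a smile.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 0.2, 0.0), (1.0, 0.4, 0.0)]
halfway = morph(neutral, smile, 0.5)
# halfway[1] == (1.0, 0.2, 0.0)
```

In facial animation, `t` is typically animated over time between pre-recorded expression targets, exactly as the Blanz-style morphing described above.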


The first four categories are used to animate specific objects, such as eyes (lattice) and facial expressions (morphing), and are more or less supported by the main animation software packages. The last category, increasingly common in virtual character animation systems, introduces the concept of a skeleton. To design the virtual character skeleton, an initialization stage is necessary: the designer has to specify the influence region of each bone of the skeleton as well as a measure of influence. This stage is mostly interactive and is repeated until the desired animation effects are reached. When the skeleton moves, the new position of each vertex is calculated by multiplying the old position with the weights and matrices of the parent bones. While simple and easy to implement, the technique has some limitations, especially when animating soft bodies (the elbow problem). To overcome these problems, the basic technique was extended by different researchers. Lewis (2000) presented a solution that uses dedicated poses for the extreme situations where the skeleton animation fails; the information for these poses is saved and associated with the bone. This technique was improved by Kry (2002) by using Principal Component Analysis (PCA) to construct an error-optimal eigen-displacement basis for representing the potentially large set of pose corrections. The calculation is not done on the entire surface but is separated into several influence domains, optimizing it for use on graphics hardware. Wang (2002) proposed an alternative solution: instead of using one weight for each bone, weights are used for each component of the bone matrix. The weighting is done in a process in which the character is first animated, and then the weights are adjusted only for the problematic poses.
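The weighted vertex blending just described (commonly known as linear blend skinning) can be sketched as follows. This is an illustrative toy with 3x3 matrices and hypothetical names, not code from MPEG-V or any cited system; it also demonstrates why the "elbow problem" arises.

```python
# Illustrative linear-blend-skinning sketch: a skinned vertex is the
# weighted sum of the vertex transformed by each influencing bone's
# matrix. Matrices are plain nested lists for self-containment.
def skin_vertex(position, influences):
    """influences: list of (weight, 3x3 matrix) pairs; weights sum to 1."""
    x = y = z = 0.0
    px, py, pz = position
    for weight, m in influences:
        x += weight * (m[0][0] * px + m[0][1] * py + m[0][2] * pz)
        y += weight * (m[1][0] * px + m[1][1] * py + m[1][2] * pz)
        z += weight * (m[2][0] * px + m[2][1] * py + m[2][2] * pz)
    return (x, y, z)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
rot_z90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]  # 90-degree turn about z

# An elbow vertex influenced equally by the upper- and lower-arm bones:
bent = skin_vertex((1.0, 0.0, 0.0), [(0.5, identity), (0.5, rot_z90)])
# bent == (0.5, 0.5, 0.0): the averaged point is shorter than the
# original unit vector, i.e. the skin collapses toward the joint --
# the classic "elbow problem" mentioned above.
```

The pose-correction techniques of Lewis (2000) and Kry (2002) discussed in the text add displacements on top of exactly this blended result.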
Mohr (2003) proposed a technique that improves skeleton-driven deformation by automatically adding new joints between existing ones to solve the problems in the extreme poses. Since the skeleton-based model is the most widely used deformation model for avatars, we describe in the remainder of this section the approaches for animating the skeleton. They can be classified into two categories: computer generated (kinematic, dynamic) and motion capture-based; a summary is given in Table 1. The kinematic approaches take into account parameters related to the virtual character such as position, orientation, and velocity. One of the classic solutions is to directly control the relative geometric transformation of each bone of the skeleton. This approach, called Forward Kinematics (FK), is a very useful tool for the designer (Watt, 1992) to manipulate the postures of virtual characters; the animation parameters correspond to the geometric transformation applied to each bone of the skeleton. An alternative approach is to fix the location in world coordinates of a specific level of the skeleton, the so-called end-effector (e.g. the hand of a human avatar), and to adjust accordingly the geometric transformations of its parents in the skeleton. With this method, called Inverse Kinematics (IK), the animation parameters correspond to the geometric location of the end-effector. Dynamic approaches refer to physical properties of the 3D virtual object, such as mass or inertia, and specify how external and internal forces interact with the object. Such physics-based attributes have been used since 1985 for virtual human-like models (Armstrong, 1985; Wilhelms, 1985). Extensive studies (Badler, 1995; Boston, 1998) on human-like virtual actor dynamics and control models for specific motions (walking, running, jumping, etc.) (Pandy, 1990 & 1999; Wooten, 1998) have been carried out. Faloutsos et al.
(2001) proposed a framework making it possible to exchange controllers (i.e. sets of parameters) that drive a dynamic simulation of the character. The controller evolution is obtained by using the goals of the animation as an objective function, and the results are physically plausible motions. Even if positive steps have been achieved for specific motions, dynamically simulating articulated characters displaying a wide range of motor skills is still a challenging issue.
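The Forward Kinematics approach described above can be sketched in a few lines for a planar bone chain; each joint angle is accumulated down the hierarchy and each bone offsets the end position along the current direction. A minimal illustrative sketch (names are hypothetical, not from any cited system):

```python
# Illustrative forward-kinematics sketch for a planar (2D) bone chain:
# the animation parameters are the per-joint rotation angles.
import math

def forward_kinematics(bone_lengths, joint_angles):
    """Return the end-effector (x, y) of a chain rooted at the origin."""
    x = y = 0.0
    total_angle = 0.0
    for length, angle in zip(bone_lengths, joint_angles):
        total_angle += angle  # child rotations compose with the parent's
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# A two-bone "arm" with both joints bent 45 degrees; the end-effector
# lands near (0.707, 1.707).
ex, ey = forward_kinematics([1.0, 1.0], [math.pi / 4, math.pi / 4])
```

Inverse Kinematics works the other way around: given a target for the end-effector, a solver searches for joint angles that make a function like this one reach the target.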


The motion capture technique (Menache, 2000) consists of tracking and recording the position (and orientation) of a set of markers placed on the surface of a real object. Usually, the markers are positioned at the joints. The markers’ positions, expressed in the world coordinate system, are then converted into a set of geometric transformations for each joint (Badler, 1993; Hirose, 1998; Molet, 1999). Motion capture technologies are generally classified into active and passive sensor-based capture according to the nature of the sensors used. With an active sensor-based system, the signals to be processed are transmitted by the sensors, while in a passive sensor-based system they are acquired by light reflection on the sensors. With respect to the nature of the sensors, active sensor-based systems can be mechanic-, acoustic-, magnetic-, optic-, or inertial-based. One of the earliest methods, using active mechanical sensors (Faro), is a prosthetic system: a set of armatures attached all over the performer’s body and connected with a series of rotational and linear encoders. Reading the status of all the encoders allows the analysis of the performer’s postures. The so-called acoustic method (S20sd) is based on a set of sound transmitters attached to the performer’s body. They are sequentially triggered to emit a signal, and the distances between transmitters and receivers are computed from the time needed for the sound to reach the receivers. The 3D position of the transmitter, and implicitly of the performer’s segment, is then computed using triangulation procedures or phase information. Systems based on magnetic fields (Ascension, Polhemus) consist of one transmitter and several magnetic-sensitive receivers attached to the performer’s body. The magnetic field intensity measured by the receivers is used to compute the location and orientation of each receiver. More complex active sensors are based on fiber optics.
The principle consists of measuring the light intensity passing through the flexed optical fibers. Such systems are usually used to equip datagloves, as proposed by VPL.1 The last method using active sensors is based on inertial devices, such as accelerometers: small devices which measure the acceleration of the body part to which they are attached (Aminian, 1998). When using active sensors, the performer is burdened with many cables, limiting freedom of motion. In this context, the recent development of motion capture systems using wireless communication is very promising (Ascension, Polhemus). The second class of motion capture techniques uses passive sensors. One camera coupled with a set of properly oriented mirrors, or several cameras, allows 3D posture reconstruction from multiple 2D views. To reduce the complexity of the analysis, markers (light-reflective or LEDs) are attached to the performer’s body. The markers are detected in each camera view and the 3D position of each marker is computed. However, occlusions due to the performer’s motions may get in the way, so additional cameras are generally used to reduce the loss of information and ambiguities. Since 1995, computer vision-based motion capture has attracted increasing attention for the tracking, posture computation, and gesture recognition problems arising in human motion capture. The techniques can use a single image or a sequence of images. Moeslund (2006) classifies them into three main branches: model-free, indirect model use, and direct model use. Model-free methods have no previous knowledge of the model, so they take a bottom-up approach to track

1 VPL Research Inc., Dataglove Model 2 Operation Manual, January 1989.


and label body parts in 2D. Indirect-model methods use a look-up table to guide the interpretation of measured data. Direct-model methods use previous knowledge of the model and try to use the data to find the model in the image. This category also includes learning-based methods that train the system with known poses (Agarwal, 2006).

Table 1. Summary of main animation techniques for avatars.

Computer Generated
  - Kinematic: Forward Kinematics, Inverse Kinematics
  - Dynamic
Motion Capture
  - Active sensors: Mechanic, Acoustic, Magnetic, Optic, Inertial
  - Passive sensors: Reflective markers, LEDs, Computer vision
Standards, Recommendations and Markup Languages Related to Avatars

Beyond the research community, avatars have also attracted the interest of different standardization groups, mainly due to the huge potential of applications involving them.2 There are currently two types of such standards: those interested in the appearance and animation of the avatar in 3D graphics applications (avatars as representation objects) and those interested in avatar characteristics such as personality and emotions (avatars as agents). In addition, there are several proprietary formats, imposed as de facto standards by authoring tools or virtual world providers. This multitude of solutions makes it impossible today to imagine even the simple scenario of using a single avatar to visit two different virtual worlds. In this section we briefly introduce some of the standards from each category and give the main motivation behind MPEG-V.

Avatar Representation Standards

In the last decade, several efforts have been made to develop a unique data format for 3D graphics. In the category of open standards, X3D (based on VRML) and COLLADA are the best known, the latter being probably the most widely adopted by current tools. While COLLADA concentrates on representing 3D objects or scenes, X3D pushes standardization further by also addressing user interaction. This is performed thanks to an event model in which scripts, possibly external to the file containing the 3D scene, may be used to control the behavior of its objects. While avatars in VRML/X3D are defined as specific objects, standardized under the name H-Anim,3 in COLLADA there is no distinction between a human avatar and a generic skinned model. Also in the category of open standards, but specifically treating the compression of media objects, there is MPEG-4. Built on top of VRML, MPEG-4 contained, already in its first two versions (ISO, 1999), tools for the compression and streaming of 3D graphics assets, making it possible to compactly describe the geometry and appearance of generic but static objects, as well as the animation of human-like characters. Since then, MPEG has kept working on improving its 3D graphics compression toolset and published two editions of MPEG-4 Part 16, AFX (Animation Framework eXtension) (ISO, 2004), which addresses the requirements above within a unified and generic

2 Gartner Says 80 Percent of Active Internet Users Will Have A "Second Life" in the Virtual World by the End of 2011, http://www.gartner.com/it/page.jsp?id=503861.
3 H-Anim – Humanoid Animation Working Group.


framework and provides many more tools to compress more generic textured, animated 3D objects more efficiently. In particular, AFX contains several technologies for the efficient streaming of compressed multi-textured polygonal 3D meshes that can be easily and flexibly animated thanks to the BBA (Bone-Based Animation) toolset, making it possible to represent and animate all kinds of avatars. While offering a full set of features for displaying avatars, none of the above-mentioned standards includes semantic data related to the avatar.

Agent-Related Standards and Recommendations

Several recommendations, standards, and markup languages are related to adding semantics on top of virtual characters, mainly to describe features that do not necessarily have a visual representation (such as personality or emotions) or to expose properties that may be used by an agent (language skills, communication modality). The Human Markup Language (HumanML) (Brooks, 2002) by OASIS Web Services is an attempt to codify the characteristics that define human physical description, emotion, action, and culture through the mechanisms of XML, RDF, and other appropriate schemas. HumanML is intended to provide a basic framework for a number of endeavors, including (but, as with human existence itself, hardly limited to) the creation of standardized profiling systems for various applications. It builds a framework for describing the emotional state and responses of both people and avatars, laying the foundation for the interpretation of gestures in both person-to-person and person-to-computer settings, and for the encoding of gestures and expressions to facilitate a better understanding of modes of communication. EmotionML (EML) by W3C covers three classes of applications: manual annotation of material involving emotionality, such as annotation of videos, speech recordings, faces, or texts; automatic recognition of emotions from sensors, including physiological sensors, speech recordings, and facial expressions, as well as from multi-modal combinations of sensors; and generation of emotion-related system responses, which may involve reasoning about the emotional implications of events, emotional prosody in synthetic speech, facial expressions and gestures of embodied agents or robots, or the choice of music and colors of lighting in a room.
Behavior Markup Language (BML) (Vilhjalmsson, 2007) is an XML-based language that can be embedded in a larger XML message or document simply by starting a <bml> block and filling it with behaviors that should be realized by an animated agent. The possible behavior elements include coordination of speech, gesture, gaze, head, body, torso, face, legs, and lip movement, as well as a wait behavior. Multimodal Presentation Markup Language (MPML) (Ishizuka, 2000) is a script language that facilitates the creation and distribution of multimodal content with a character presenter. It also supports media synchronization of character agents’ actions and voice commands conforming to the SMIL specification. Virtual Human Markup Language (VHML) is designed to accommodate the various aspects of Human-Computer Interaction with regard to Facial Animation, Body Animation, Dialogue Manager interaction, Text-to-Speech production, and Emotional Representation, plus Hyper and Multi Media information. Character Mark-up Language (CML) (Arafa, 2003) is an XML-based character attribute definition and animation scripting language designed to aid in the rapid incorporation of life-like characters/agents into online applications or virtual worlds. This multi-modal scripting language is designed to be easily understandable by human animators and easily generated by a software process such as software agents. CML is constructed jointly on the motion and multi-modal capabilities of virtual life-like figures.


Avatar Representation and Semantics: The MPEG-V Vision

Despite the fact that several markup languages related to avatars and virtual agents exist, interoperability for avatars between different virtual worlds cannot yet be obtained in an easy, ready-to-use, and integrated manner. Identifying this gap, and recognizing that only a standardized format can allow virtual worlds to be deployed at very large scale, MPEG initiated in 2008 a new project called MPEG-V (Information exchange with virtual worlds). Concerning avatars, the following requirements should be fulfilled by MPEG-V: 1) it should be possible to easily create importers/exporters for various VE implementations, 2) it should be easy to control an avatar within a VE, and 3) it should be possible to modify a local template of the avatar by using data contained in an MPEG-V file. In the MPEG-V vision, once the avatar is created (possibly by an authoring tool independent of any VW), it can be used in Second Life, in IMVU, or in any other VW. A user can have his own unique representation inside all VWs, as in real life: he can change, upgrade, or teach his avatar, i.e. his "virtual self", in one VW, and all the new properties will then be available in all the others. The avatar itself should therefore contain representation and animation features but also higher-level semantic information. However, each VW will have its own internal structure for handling avatars. MPEG-V does not impose any specific constraints on the VW's internal data representation; it only proposes a descriptive format able to drive the transformation of a template, or the creation from scratch, of an avatar compliant with the VW. All the characteristics of the avatar (including the associated motion) can be exported from one VW into MPEG-V and then imported into another VW.
In the case of the interface between virtual worlds and the real world (requirement 2), avatar motions can be created in the virtual world and mapped onto a real robot, for use in dangerous areas, for maintenance tasks, for the support of disabled or elderly people, and the like. While the goal of MPEG-V is to obtain a descriptive format specifying the avatar features, it may be combined with MPEG-4 Part 16 (which includes a framework for defining and animating avatars) to provide a fully interoperable solution. Defining an interoperable schema as intended by MPEG-V can be of huge economic value, being one step in the transformation of current VWs from stand-alone, independent applications into an interconnected communication system, similar to the current Internet, where a browser can interpret and present the content of any web site. At that point, VW providers will no longer be mere providers of technology but will concentrate their efforts on creating content, once again the key to the Internet's success.

MPEG-V Schema Description

Based on an analysis of existing VWs and the most popular games, tools, and techniques from content authoring packages, together with a study of different virtual-human-related markup languages, the current version of MPEG-V4 defines a set of metadata referring to the appearance, animation, and agent-like capabilities of an avatar. In this section we present the main elements of the schema. The "Avatar" element is composed of the following types of data (for a detailed explanation and the exact schema definition, please refer to Preda (2009)).

4 MPEG-V was promoted to Committee Draft in July 2009.


Appearance, Animation and Haptic Properties

The Appearance element contains descriptions of the avatar's different anatomic segments (size, form, anthropometric parameters) as well as references to the geometry and texture resources. While the former can be used to adapt (personalize) the internal structure of the VW avatar, the latter can be used to completely overwrite it (an operation performed when the format of the resource itself is also known by the importer/exporter, such as when using MPEG-4 3D Graphics). In addition, this element also contains characteristics of objects related to the avatar, such as clothes, shoes, or weapons. A simple and very short example of how this element is used in MPEG-V is given below:

<Appearance>
  <Body>
    <BodyHeight value="165" />
    <BodyFat value="15" />
  </Body>
  <Head>
    <HeadShape value="oval" />
    <EggHead value="true" />
  </Head>
  <Clothes ID="1" Name="blouse_red" />
  <AppearanceResources>
    <AvatarURL value="my_mesh" />
  </AppearanceResources>
</Appearance>

The Animation element contains a complete set of animations that the avatar is able to perform, grouped by semantic similarity (Idle, Greeting, Dance, Walk, Fighting, Actions). A special group contains common actions such as drink, eat, talk, read, and sit. As in the previous case, the animation parameters are represented in external resources, MPEG-V providing only the names of the animation sequences. A simple example of using this element is given below:

<Animation>
  <Greeting>
    <hello>my_hello.bba</hello>
    <wave>my_wave.bba</wave>
  </Greeting>
  <Fighting>
    <shoot>my_shoot.bba</shoot>
    <throw>my_throw.xml</throw>
  </Fighting>
  <Common_Actions>
    <drink>my_drink.bba</drink>
    <eat>my_eat.xml</eat>
    <type>my_type.xml</type>
    <write>my_write.xml</write>
  </Common_Actions>
  <AnimationResources>
    <AvatarURL value="my_anim" />
  </AnimationResources>
</Animation>

The Haptic properties are defined with the main purpose of simulating feedback from VWs: if haptic gloves or a tactile screen are used, touch can be rendered as vibrations or a force field. These first three elements are used in MPEG-V to ensure portability of the avatar's graphic representation between different VWs.
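To illustrate the importer/exporter role these elements play, here is a minimal sketch of a VW importer reading the Appearance data shown earlier with Python's standard `xml.etree.ElementTree`. The element and attribute names follow this paper's example, not a normative MPEG-V schema, and the fragment is embedded inline for self-containment.

```python
# Sketch of a hypothetical VW importer reading an MPEG-V-style
# Appearance fragment (element names taken from the paper's example).
import xml.etree.ElementTree as ET

doc = """
<Appearance>
  <Body>
    <BodyHeight value="165" />
  </Body>
  <AppearanceResources>
    <AvatarURL value="my_mesh" />
  </AppearanceResources>
</Appearance>
"""

root = ET.fromstring(doc)
# Anthropometric parameter: used to personalize the VW's own template.
height = float(root.find("Body/BodyHeight").get("value"))
# External resource reference: used to overwrite the template geometry
# when the importer also understands the resource format.
mesh = root.find("AppearanceResources/AvatarURL").get("value")
```

A real importer would map such values onto the virtual world's internal avatar structure, which MPEG-V deliberately leaves unconstrained.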

Control

The main purpose of this element is to provide the correspondence between the avatar control parameters in a VW (bones of the skeleton, feature points on the face mesh) and a standardized set of controllers (defined as an exhaustive list of bones and feature points). When an input signal (e.g., the position and orientation of a magnetic sensor) is connected to one controller, the mapping between the latter and the avatar's bone allows its animation. Knowing the correspondence between the generic skeleton and that of the specific avatar in the VW makes it possible to map entire animation sequences (motion retargeting). A simple example is provided below:
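The controller-to-bone mapping behind motion retargeting can be sketched in Python: a world-specific table (as would be declared in the Control element) resolves standardized controller names to the avatar's own bones. All names and the rotation encoding below are illustrative assumptions, not taken from the MPEG-V schema.

```python
# Sketch of controller-to-bone mapping for motion retargeting.
# BONE_MAP plays the role of the Control element's declarations;
# controller and bone names are illustrative.
BONE_MAP = {
    "skull": "my_skull",
    "CervicalVertebrae1": "my_cervical_vertebrae_1",
    "LClavicle": "my_LClavicle",
}

def retarget(frames, bone_map):
    """Rewrite an animation expressed on standard controllers so it
    addresses this avatar's bones; unknown controllers are skipped."""
    out = []
    for frame in frames:
        out.append({bone_map[ctrl]: rot
                    for ctrl, rot in frame.items() if ctrl in bone_map})
    return out

# One frame of sensor input: controller name -> rotation (degrees).
frames = [{"skull": 12.0, "LClavicle": -3.5, "Tail": 0.0}]
print(retarget(frames, BONE_MAP))
# -> [{'my_skull': 12.0, 'my_LClavicle': -3.5}]
```

Because only the mapping table is world-specific, the same captured motion can drive avatars in different VWs by swapping in each world's table.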



Journal of Virtual Worlds Research- Interoperability in VWs 14

<Control>
  <BodyFeaturesControl>
    <headBones>
      <CervicalVertebrae3>my_cervical_vertebrae_3</CervicalVertebrae3>
      <CervicalVertebrae1>my_cervical_vertebrae_1</CervicalVertebrae1>
      <skull>my_skull</skull>
    </headBones>
    <UpperBodyBones>
      <LClavicle>my_LClavicle</LClavicle>
      <RClavicle>my_RClavicle</RClavicle>
    </UpperBodyBones>
  </BodyFeaturesControl>
  <FaceFeaturesControl>
    <HeadOutline>
      <Left X="0.23" Y="1.25" Z="7.26" />
      <Right X="0.25" Y="1.25" Z="7.21" />
      <Top X="2.5" Y="3.1" Z="4.2" />
      <Bottom X="0.2" Y="3.1" Z="4.1" />
    </HeadOutline>
  </FaceFeaturesControl>
</Control>

This element enables controlling the avatar from external signals, and motion retargeting.

Communication Skills and Personality

CommunicationSkills (Oyarzun, 2009) defines the way the avatar is able, or wants, to communicate with other avatars. The way of communicating is characterized by the input and output abilities, for both verbal and non-verbal communication. It contains sub-elements that describe the languages, verbal or signed, that the avatar can interpret/understand, the preferred mode of communication, the preferred language, etc. A simple example is given below:

<CommunicationSkillsType>
  <InputVerbalCommunication Voice="preferred" Text="enabled">
    <Language Name="French" Preference="Text" />
    <Language Name="English" Preference="Voice" />
    <Language Name="Italian" Preference="Voice" />
  </InputVerbalCommunication>
  <OutputVerbalCommunication Voice="preferred" Text="enabled">
    <Language Name="French" Preference="Voice" />
  </OutputVerbalCommunication>
</CommunicationSkillsType>
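A world gateway could use such descriptions to pick a common channel between two avatars, for instance by matching one avatar's output languages against the other's input languages. The following Python sketch shows the idea; the data structures and the first-match policy are assumptions for illustration, not part of MPEG-V.

```python
# Sketch: negotiating a verbal channel from two avatars' declared
# communication skills. Structures mirror the XML example above.
def negotiate(sender_output, receiver_input):
    """Pick the first language the sender can produce that the receiver
    can interpret, together with the receiver's preferred mode for it."""
    receiver = {lang: mode for lang, mode in receiver_input}
    for lang, mode in sender_output:
        if lang in receiver:
            return lang, receiver[lang]
    return None   # no shared language

# (language, preferred mode) pairs, as in the XML example.
sender_out = [("French", "Voice")]
receiver_in = [("French", "Text"), ("English", "Voice")]
print(negotiate(sender_out, receiver_in))   # -> ('French', 'Text')
```

Here the sender speaks French by voice, but the receiver prefers to read French as text, so a gateway might insert a speech-to-text step.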


Personality (Oyarzun, 2009) is described as a combination of openness, conscientiousness, extraversion, agreeableness, and neuroticism, as defined in the OCEAN model (McCrae, 1992). Personality can serve to adapt the avatar's verbal and non-verbal communication style, as well as to modulate the emotions and moods that may be provoked by virtual-world events, avatar-avatar communication, or the real-time flow. At the time of writing, MPEG-V is still in progress; near-future work will address representing avatar emotions. The final version of MPEG-V is planned for the end of 2010.
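One way trait values could modulate an event-driven emotional reaction is sketched below in Python. The scaling formula is purely illustrative (an assumption of this sketch, defined neither by MPEG-V nor by the OCEAN model); it only shows the mechanism of traits amplifying or damping a response.

```python
# Sketch: modulating an emotional reaction by OCEAN trait values in [0, 1].
# The scaling rule is an illustrative assumption, not part of MPEG-V.
def emotional_response(event_intensity, personality):
    """High neuroticism amplifies the impact of a stressful event;
    high agreeableness damps it. Result is clamped to [0, 1]."""
    n = personality.get("neuroticism", 0.5)
    a = personality.get("agreeableness", 0.5)
    scale = 1.0 + (n - 0.5) - 0.5 * (a - 0.5)
    return max(0.0, min(1.0, event_intensity * scale))

calm = {"neuroticism": 0.1, "agreeableness": 0.9}
tense = {"neuroticism": 0.9, "agreeableness": 0.2}
print(emotional_response(0.6, calm) < emotional_response(0.6, tense))  # -> True
```

Such a modulated intensity could then drive the choice of animation (a shrug versus an angry gesture) when the same event reaches avatars with different personalities.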

Conclusion

The advances achieved over the last two decades in the field of avatars, with respect to both their visual appearance (static and animated) and their cognitive properties, make it possible today to imagine an integrated representation addressing one of the main requirements of interoperability between virtual worlds: being able to migrate from one world to another while maintaining the user's properties. These properties are encapsulated together in what is commonly called today the user's avatar. It is expected that each future VW will use sub-sets of the user properties; probably all of them will use the appearance and animation properties to ensure the visual representation. Some properties and capabilities obtained in one VW remain connected to the avatar and should be available to the outside worlds (real or virtual) as well. Providing the container for such properties and capabilities is the vision of MPEG-V. Through its descriptive format, it aims to facilitate the deployment of VWs, recognizing that their success depends on maintaining acceptable development costs; one mechanism for doing so is ensuring interoperability between them, at least at the level of avatars.


Bibliography

Agarwal, A. and Triggs, B. (2006, January). Recovering 3D Human Pose from Monocular Images. IEEE Trans. Pattern Anal. Mach. Intell. 28, 1, 44-58.
Aminian, K., Andres, E. D., Rezakhanlou, K., Fritsch, C., Schutz, Y., Depairon, M., Leyvraz, P., and Robert, P. (1998). Motion Analysis in Clinical Practice Using Ambulatory Accelerometry. In Proceedings of the International Workshop on Modelling and Motion Capture Techniques for Virtual Environments, N. Magnenat-Thalmann and D. Thalmann, Eds. Lecture Notes in Computer Science, Vol. 1537. Springer-Verlag, London, 1-11.
Arafa, Y., and Mamdani, A. (2003). Scripting Embodied Agents Behaviour with CML: Character Markup Language. In IUI '03: Proceedings of the Eighth International Conference on Intelligent User Interfaces. ACM, New York, NY, USA, 313-316.
Armstrong, W. W. and Green, M. W. (1985). The Dynamics of Articulated Rigid Bodies for Purposes of Animation. Proc. Graphics Interface 85, 407-415.
Ascension Technology MotionStar®, http://www.ascensiontech.com/products/motionstar/.
Badler, N., Metaxas, D., Webber, B. and Steedman, M. (1995). The Center for Human Modeling and Simulation. Presence 4, 1, 81-96.
Badler, N., Phillips, C., and Webber, B. (1993). Simulating Humans: Computer Graphics, Animation, and Control. Oxford University Press.
Bartels, R. H., Beatty, J. C., and Barsky, B. A. (1987). An Introduction to Splines for Use in Computer Graphics & Geometric Modeling. Morgan Kaufmann Publishers Inc.
Blanz, V. and Vetter, T. (1999). A Morphable Model for the Synthesis of 3D Faces. Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, 187-194.
Boston Dynamics Inc. (1998). The Digital Biomechanics Laboratory, www.bdi.com.
Brooks, R., and Cagle, K. (2002). The Web Services Component Model and HumanML. Technical Report, OASIS/HumanML Technical Committee.
Chadwick, J. E., Haumann, D. R., and Parent, R. E. (1989, July). Layered Construction for Deformable Animated Characters. Computer Graphics (SIGGRAPH 89 Conference Proceedings) 23, 3, 243-252.
Chen, D. T. and Zeltzer, D. (1992, July). Pump It Up: Computer Animation of a Biomechanically Based Model of Muscle Using the Finite Element Method. SIGGRAPH Comput. Graph. 26, 2, 89-98.
D'Apuzzo, N., Plankers, R., Fua, P., Gruen, A., and Thalmann, D. (1999). Modeling Human Bodies from Video Sequences. Videometrics VI, SPIE Proceedings, Vol. 3461, San Jose, CA, 36-47.
DeCarlo, D., Metaxas, D., and Stone, M. (1998). An Anthropometric Face Model Using Variational Techniques. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98.
Devernay, F. and Faugeras, O. D. (1994). Computing Differential Properties of 3-D Shapes from Stereoscopic Images without 3-D Models. In Conference on Computer Vision and Pattern Recognition, Seattle, WA, 208-213.
Dooley, M. (1982). Anthropometric Modeling Programs - A Survey. IEEE Computer Graphics and Applications, IEEE Computer Society, 2, 9, 17-25.
EML, EmotionML, http://www.w3.org/2005/Incubator/emotion/XGR-emotionml/.


Faloutsos, P., van de Panne, M., and Terzopoulos, D. (2001). Composable Controllers for Physics-Based Character Animation. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '01. ACM, New York, NY, 251-260.
Faro Technologies, http://www.farotechnologies.com.
Georges, F. (2009). Représentation de soi et identité numérique: analyse sémiotique et quantitative de l'emprise culturelle du web 2.0. In Réseaux: Usages du Web 2.0, 27, 154. Paris: La Découverte.
Gourret, J.-P., Thalmann, N. M., and Thalmann, D. (1989, July). Simulation of Object and Human Skin Deformations in a Grasping Task. Computer Graphics (SIGGRAPH 89 Conference Proceedings) 23, 21-30.
Hilton, A., Beresford, D., Gentils, T., Smith, R., and Sun, W. (1999). Virtual People: Capturing Human Models to Populate Virtual Worlds. In Proceedings of Computer Animation (May 26-28, 1999). IEEE Computer Society, Washington, DC, 174.
Hirose, M., Deffaux, G., and Nakagaki, Y. (1996). Development of an Effective Motion Capture System Based on Data Fusion and Minimal Use of Sensors. VRST '96, ACM-SIGGRAPH and ACM-SIGCHI, 117-123.
Ishizuka, M., Tsutsui, T., Saeyor, S., Dohi, H., Zong, Y., and Predinger, H. (2000). MPML: A Multimodal Presentation Markup Language with Character Agent Control Functions. WebNet.
ISO/IEC JTC1/SC29/WG11. (2004). Standard 14496-16, a.k.a. MPEG-4 Part 16: Animation Framework eXtension (AFX). ISO.
ISO/IEC JTC1/SC29/WG11. (1999). Standard 14496-2, a.k.a. MPEG-4 Part 2: Visual. ISO.
Kry, P. G., James, D. L., and Pai, D. K. (2002). EigenSkin: Real Time Large Deformation Character Skinning in Hardware. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (San Antonio, Texas, July 21-22, 2002). SCA '02. ACM, New York, NY, 153-159.
Lander, J. (1999, May). Over My Dead, Polygonal Body. Game Developer Magazine, 1-4.
Lewis, J. P., Cordner, M., and Fong, N. (2000). Pose Space Deformation: A Unified Approach to Shape Interpolation and Skeleton-Driven Deformation. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, 165-172.
Maestri, G. (1999, July). Digital Character Animation 2: Essential Techniques. New Riders.
Magnenat-Thalmann, N., and Thalmann, D. (1991, September). Complex Models for Animating Synthetic Actors. IEEE Computer Graphics and Applications 11, 5, 32-44.
Pandy, M. G. and Anderson, F. C. (1999, August). Three-Dimensional Computer Simulation of Jumping and Walking Using the Same Model. In Proceedings of the Seventh International Symposium on Computer Simulation in Biomechanics.
Menache, A. (1999). Understanding Motion Capture for Computer Animation and Video Games. 1st ed. Morgan Kaufmann Publishers Inc.
Moeslund, T. B., Hilton, A., and Krüger, V. (2006, November). A Survey of Advances in Vision-Based Human Motion Capture and Analysis. Comput. Vis. Image Underst. 104, 2, 90-126.
Mohr, A. and Gleicher, M. (2003, July). Building Efficient, Accurate Character Skins from Examples. ACM Trans. Graph. 22, 3, 562-568.
Molet, T., Boulic, R., and Thalmann, D. (1999, April). Human Motion Capture Driven by Orientation Measurements. Presence: Teleoper. Virtual Environ. 8, 2, 187-203.


Monheit, G., and Badler, N. I. (1991, March). A Kinematic Model of the Human Spine and Torso. IEEE Computer Graphics and Applications 11, 2, 29-38.
NASA. (1995, July). Man-Systems Integration Standard (NASA-STD-3000), Revision B.
NASA. (1978). Reference Publication 1024, The Anthropometry Source Book, Volumes I and II.
Nebel, J. and Sibiryakov, A. (2002). Range Flow from Stereo-Temporal Matching: Application to Skinning. In Proceedings of the IASTED International Conference on Visualization, Imaging, and Image Processing.
Oyarzun, D., Ortiz, A., del Puy Carretero, M., Gelissen, J., Garcia-Alonso, A., and Sivan, Y. (2009). ADML: A Framework for Representing Inhabitants in 3D Virtual Worlds. In Proceedings of the 14th International Conference on 3D Web Technology (Darmstadt, Germany, June 16-17, 2009). S. N. Spencer, Ed. Web3D '09. ACM, New York, NY, 83-90.
Pandy, M. G. and Zajac, F. E. (1990). An Optimal Control Model for Maximum-Height Human Jumping. Journal of Biomechanics, 23(12), 1185-1198.
Preda, M. (Ed.). (2009). Text of ISO/IEC CD 23005-4 Avatar Information, w10786, 89th MPEG Meeting, London.
Polhemus STAR*TRACK Motion Capture System, http://www.polhemus.com.
S20SD S20 Sonic Digitizers, Science Accessories Corporation.
Savenko, A., Van Sint Jan, S. L. and Clapworthy, G. J. (1999). A Biomechanics-Based Model for the Animation of Human Locomotion. Proc. Graphicon 99, Moscow, 82-87.
Scheepers, C. F. (1996). Anatomy-Based Surface Generation for Articulated Models of Human Figures. PhD Thesis, Ohio State University. Adviser: Richard E. Parent.
Scheepers, F., Parent, R. E., Carlson, W. E., and May, S. F. (1997). Anatomy-Based Modeling of the Human Musculature. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, 163-172.
Scheepers, F., Parent, R. E., May, S. F., and Carlson, W. E. (1996, January). A Procedural Approach to Modelling and Animating the Skeletal Support of the Upper Limb. Tech. Rep. OSU-ACCAD-1/96-TR1, ACCAD, The Ohio State University.
Seo, H. and Magnenat-Thalmann, N. (2003). An Automatic Modeling of Human Bodies from Sizing Parameters. In Proceedings of the 2003 Symposium on Interactive 3D Graphics (Monterey, California, April 27-30, 2003). I3D '03. ACM, New York, NY, 19-26.
Seo, H., Yahia-Cherif, L., Goto, T., and Magnenat-Thalmann, N. (2002). GENESIS: Generation of E-Population Based on Statistical Information. In Proceedings of Computer Animation (June 19-21, 2002). IEEE Computer Society, Washington, DC, 81.
Singh, K. (1995). Realistic Human Figure Synthesis and Animation for VR Applications. PhD Thesis, The Ohio State University. Adviser: Richard E. Parent.
Van Sint Jan, S. L., Salvia, P., Clapworthy, G. J., and Rooze, M. (1999). Joint-Motion Visualisation Using Both Medical Imaging and 3D-Electrogoniometry. Proc. 17th Congress of the International Society of Biomechanics, Calgary, Canada.
Van Sint Jan, S. L., Clapworthy, G. J., and Rooze, M. (1998, November). Visualization of Combined Motions in Human Joints. IEEE Comput. Graph. Appl. 18, 6, 10-14.
VHML, http://www.vhml.org/.


Wang, X. C. and Phillips, C. (2002). Multi-Weight Enveloping: Least-Squares Approximation Techniques for Skin Animation. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (San Antonio, Texas, July 21-22, 2002). SCA '02. ACM, New York, NY, 129-138.
Waters, K. (1989). Modeling 3D Facial Expressions: Tutorial Notes. In State of the Art in Facial Animation. ACM SIGGRAPH, 127-160.
Watt, A. and Watt, M. (1991). Advanced Animation and Rendering Techniques. ACM.
Wilhelms, J. and Van Gelder, A. (1997). Anatomically Based Modeling. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, 173-180.
Wilhelms, J. P. and Barsky, B. A. (1985). Using Dynamic Analysis to Animate Articulated Bodies Such as Humans and Robots. In Proceedings of Graphics Interface '85 on Computer-Generated Images: The State of the Art (Montreal, Quebec, Canada). N. Magnenat-Thalmann and D. Thalmann, Eds. Springer-Verlag, New York, NY, 209-229.
Wooten, W. L. (1998). Simulation of Leaping, Tumbling, Landing, and Balancing Humans. Doctoral Thesis, Georgia Institute of Technology. UMI Order Number: AAI9827367.
3ds Max, Autodesk. http://www.autodesk.com/3dsmax
Maya, Autodesk. http://www.autodesk.com/maya
Ronfard, R. and Rossignac, J. (1996, August). Full-Range Approximation of Triangulated Polyhedra. Proceedings EUROGRAPHICS, Computer Graphics Forum, 67-76.
Schroeder, W. J., Zarge, J. A., and Lorensen, W. E. (1992). Decimation of Triangle Meshes. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, J. J. Thomas, Ed. SIGGRAPH '92. ACM, New York, NY, 65-70.
Vilhjálmsson, H., Cantelmo, N., Cassell, J., Chafai, N. E., Kipp, M., Kopp, S., Mancini, M., Marsella, S., Marshall, A. N., Pelachaud, C., Ruttkay, Z. M., Thórisson, K., van Welbergen, H., and van der Werf, R. J. (2007, September). The Behavior Markup Language: Recent Developments and Challenges. In Proceedings of the Seventh International Conference on Intelligent Virtual Agents, Paris, France.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Real Standards for Virtual Worlds: Why and How? By Kai Jakobs, RWTH Aachen University

Abstract

The paper gives the necessary background to those who would like to proactively participate in the setting of standards for Information and Communication Technologies (ICT; this includes Virtual Worlds). Some of the trickier and more confusing terms are discussed, as are the characteristics of today's ICT standardization environment. Finally, the paper gives more concrete advice on how to identify the most suitable standards setting body for a given technology to be standardized.

Keywords: ICT standards; standardization; standards setting bodies.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- Real Standards for VWs 4

Real Standards for Virtual Worlds: Why and How?

By Kai Jakobs, RWTH Aachen University

"… the evolution of open standards to enable interoperability between virtual worlds is one of the highest-impact, highest-uncertainty issues for the future of the market1."

The above quote says it all, really. But then again, this does not come as a big surprise. The 'real' world as we know it would hardly function without standards (using the term loosely), so why should the 'virtual' world be any different? There as well, inhabitants want to interact, create things, do business, travel between worlds (ok, that's something we do not yet do in the real world; then again, neither in the virtual one, due to a lack of suitable standards), etc. All these activities would suffer severely from the absence of (globally or regionally) accepted standards. Specifically, this holds for any interaction between individual virtual worlds (VWs). But any wide-scale exploitation of VWs for business purposes will also introduce problems for which standards will be required – just think, for instance, of secure transactions between virtual enterprises.

However, whereas it is comparably clear which organisation is developing which standards for the real world, this does not hold for the virtual world. There are few dedicated 'VW-standards'; VRML (Virtual Reality Modeling Language) is a notable example. Typically, one would look for potentially useful standards that have been developed for use in the real world, and try to apply them in VWs as well.

The first section of the paper will generally discuss the importance of standards in the ICT (Information and Communication Technologies) sector. It will first provide a very brief historical account, then talk a bit about terminology (e.g., 'open', 'standards', 'standardization'). I believe that this will be necessary, as 'standard' and 'standardisation' are rather tricky terms, not well enough understood by many (if not most).
The remainder of the paper is organized as follows. Section 2 will discuss today's ICT standardization environment, how it came about, and how co-operation between individual Standards Setting Bodies (SSBs; this term is used to denote both formal SDOs, like ISO and ITU-T, and standards consortia, like the W3C and OASIS) is achieved. Subsequently, Section 3 represents an attempt at helping potential standards-setters pick the most suitable platform (i.e., SSB) for their purposes. Finally, Section 4 will offer some final remarks on the future of standards for VWs.

A wee bit of history

Even if we disregard social, moral and religious rules for the moment, standards – in a very general sense – have been with us for quite some time: about 5,000 years ago the first alphabets emerged, enabling completely new forms of communication and information storage2. Some 2,500 years later, the first national, coin-based currency, invented by the Lydians, established the basis for easier inter-regional and even international trading. The Industrial Revolution in the 18th century and, even more so, the advent of the railroad in the 19th century

1 According to SRI Consulting Business Intelligence, http://www.sric-bi.com/VWW/VWviewpoints.shtml.
2 Adapted from [Jakobs, 2003].


resulted in a need for technical standards, which was once more reinforced when mass production generated a demand for interchangeable parts. In parallel, the invention of the electric telegraph in 1837 triggered the development of standards in the field of electrical communication technology. In 1865 the International Telegraph Union – to become the International Telecommunication Union (ITU) in 1932 – was founded by twenty states. The other major international standards setting body, the 'International Organization for Standardization' (ISO), was established in 1947.

These days, a web of Standards Developing Organizations (SDOs), i.e., the likes of ISO and ITU at the global level, the 'European Telecommunications Standards Institute' (ETSI) at the regional level, and the 'American National Standards Institute' (ANSI) and the 'British Standards Institution' (BSI) at the national level, issues what is commonly referred to as 'de-jure' standards – although none of their standards have any regulatory power. Likewise, a plethora of industry fora and consortia (a recent survey found more than 250; [ISSS, 2008]), such as the 'World Wide Web Consortium' (W3C) and the Open Group (an industry consortium to set open standards for computing infrastructure), to name but two of the longer-standing ones, produce so-called 'de-facto' standards.

Terminology

'Standard' and 'standardization' are tricky terms. They are even trickier when it comes to ICT3. Think about it for a minute – what exactly establishes a 'standard'? Are only specifications issued by one of the 'official' SDOs standards? Does it suffice if such an SDO just rubberstamps a specification developed by a third party? Or is the degree of usage of a system or a product the decisive factor; is, for instance, MS-Word a 'standard', or SAP/R3? Do industry consortia actually issue 'standards'?
And what about the Internet – are those Requests for Comments (RFCs) that have been published in the Internet Engineering Task Force's (IETF) standards-series actually 'standards'? Ask any three people and the odds are that they will come up with at least four different opinions. As does the literature. For instance, Webster's New Universal Unabridged Dictionary defines a standard as

"An authoritative principle or rule that usually implies a model or pattern for guidance, by comparison with which the quantity, excellence, correctness, etc. of other things may be determined."

The Oxford English Dictionary says a standard is "The authorized exemplar of a unit of measure or weight; e.g. a measuring rod of unit length; a vessel of unit capacity, preserved in the custody of public officers as a permanent evidence of the legally prescribed magnitude of the unit."

These definitions already hint at a major dilemma in the theory of standardization: there is no generally agreed-upon definition of what constitutes a standard, and the definitions that do exist cannot be meaningfully applied.

3 Adapted from [Jakobs, 2006].


The definition adopted by ISO says that a standard is a document "… established by consensus and approved by a recognized body, that provides, for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context." Similarly, the European Commission (EC) defines a standard as "a technical specification approved by a recognized standards body for repeated or continuous application, compliance with which is not compulsory."

The latter two definitions restrict what is colloquially referred to as a standard to those issued by 'recognized bodies'. However, what exactly characterizes such a 'recognized body' remains unclear. In Europe, 'recognized body' typically still means an SDO, as opposed to a standards consortium4. On the other hand, findings reported in [Jakobs, 2007] suggest that firms do not really care about the origin of a standard (i.e., whether it was specified by an SDO or a consortium). They are more interested in an SSB's characteristics, e.g., its membership and IPR regulations.

Likewise, the term 'open standard', albeit widely used, has not yet been clearly defined. It therefore holds competing connotations for different actors. For the Open Source community, for example, 'open' basically means 'free licensing'; that is, it refers to the final product, whereas in this paper 'open' refers to the standards development process. An open standard means that those involved deliberately set about to codify the standard as non-proprietary knowledge. In effect, no individual commercial interests control the resulting products, and in fact the open standard is made accessible and usable to all interested parties on reasonable and equal terms, even if proprietary technologies are incorporated.
The ICT Standards Board (ICTSB) is "an initiative from the three recognised European standards organisations and specification providers to co-ordinate specification activities in the field of Information and Communications Technologies". Its definition of what constitutes an 'open' standard is perhaps the most comprehensive and useful one [ICTSB, 2005]:

• developed and/or affirmed in a transparent process open to all relevant players, including industry, consumers, regulatory authorities, etc.;
• either free of IPR concerns, or licensable on a (fair), reasonable and non-discriminatory ((F)RAND) basis;
• driven by stakeholders, and user requirements must be fully reflected;
• publicly available (but not necessarily free of charge);
• maintained.

The ICT Standardization Universe Today

Standardization is basically a mechanism for co-ordination (Werle, 2001). Not unlike the research sector, standards setting serves as a platform for co-operation between companies that

4 Things seem to be moving, though; see [EC, 2009].


are otherwise competitors5. According to Werle, an organization has different options concerning standards setting:

• to try and bypass organized standardization and set a de facto standard;
• to participate in the work of an official or a private standards organization;
• to set up a new consortium or forum which deals with the standards project.

Assuming that standards-setting work will eventually commence, the interests of the various stakeholders are likely to differ. That is, each participating organization may try to either push its own ideas, propose a 'neutral' solution, or just try to impede the whole process in order to prevent any standard in the field in question. According to Besen (1995), four distinct situations are possible:

• Common interests: There are no competing proposals, and a decision can quickly be reached by consensus. All parties involved attempt to serve the common good.
• Opposed interests: Each opponent prefers its own proposal to be adopted, but would prefer no standard at all to the adoption of a competitor's proposal. This situation arises when the gains associated with the winning proposal are big compared to the gains of the industry as a whole.
• Overlapping interests: Again, each opponent prefers its own proposal to be adopted, but would rather have a competitor's proposal adopted than have no standard at all. This may happen if, conversely to the situation outlined above, the whole industry stands to benefit more from the adoption of a standard (regardless of that standard's origin) than the original proposer.
• Destructive interest: At least one player prefers not to have any openly available standard at all, and accordingly tries to slow down the process. This player is typically a major vendor largely dominating the market with a proprietary product, who would lose market share if a standard were in place.

Obviously, the above alternatives all lead to the question of competition vs. co-operation. The path towards competition may eventually lead to a company's dominating market position with a product or service based on its own proprietary specification. Yet, at the same time, the virtual absence of other players may render this particular market insignificant. On the other hand, co-operation may establish a broader market for products or services. As has, for instance, been shown in Swann (1990), a product that succeeds in creating an environment in which other vendors consider it beneficial to produce compatible products will prove considerably more successful than its competitors. Such compatible products can only emerge if the underlying original specifications have been made public, or if a very liberal licensing policy has been pursued. This example serves to highlight the potential benefits to be gained from open specifications, even if the product itself is inferior to its (less open) rivals in terms of the functionality provided. Here, the range of products compatible with the original specification strengthens its status as a de-facto 'standard', which in turn triggers the development of even more compliant products (also in Swann, 1990). As a result, a bigger market has been established, leading to increased revenues.

5

Adapted from [Jakobs, 2008].


The emergence of diversity

Over the last three decades, the world of ICT standardization has changed dramatically from the fairly simple, straightforward, and static situation that could be found in the seventies. Back then, there was a clear distinction between the then 'monopolist' CCITT (the International Telegraph and Telephone Consultative Committee, the predecessor of the ITU-T) on the one hand, and the world of IT standards on the other. CCITT were in charge of standards setting in the telecommunications sector. They were basically run by the national Post, Telephone and Telegraph administrations (PTTs) that were enjoying a monopoly situation in their respective countries. ISO was in charge of almost all other ICT-related standardization activities6. The various national SDOs developed their own specific standards, but also contributed to the work of ISO.

Figure 1: The ICT standardization universe in the 1970’s (excerpt)

Throughout the past 20 or so years, five trends contributed to this increasingly complex ICT standardization environment: •

6

The growing importance and development speed of the ICT sector. Increasing importance of ICT implied an (almost) equally increasing commercial importance of the underlying standards. This made standards setting appear more lucrative especially for large manufacturers. And those who were supporting a loosing proposal, or were dissatisfied with the progress and/or the pace of the standardization process, dropped out and formed their own standards consortium to standardize their technology. Many such consortia were formed only to lend some extra credibility to the resulting standards, which would otherwise have been proprietary specifications. The commercial exploitation of the Internet since the mid-nineties (especially the WWW). Until the late eighties / early nineties the Internet was little more than an academic network, with few nodes outside the US. The advent of the WWW, and the subsequent emergence of Internet-based e-business applications, lead to the foundation of a number of new consortia (most notably the World Wide Web Consortium W3C).

[6] Some related activities were also carried out within IEC, the International Electrotechnical Commission.

Journal of Virtual Worlds Research- Real Standards for VWs 9

• The globalization of markets. This triggered the need for global interoperability, typically achieved through internationally accepted standards. The potentially global customer base further reinforced the commercial interest in ICT standards, and particularly in standards consortia.

• The liberalization of the telecommunications markets. One outcome of this process was the emergence of regional telecommunication standards bodies, such as ETSI in Europe, ATIS [7] in the US, and TTC [8] in Asia. The additional co-ordination efforts required between these bodies led to the foundation of the Global Standards Collaboration (GSC; see also Figure 2).

• The still ongoing convergence of the formerly distinct sectors of telecommunications and IT. Until the late eighties / early nineties, telecommunication on the one hand, and IT on the other, were rather separate fields. Standards for the former were developed by the CCITT, those for the latter primarily by the ISO/IEC Joint Technical Committee 1 (JTC1). The merger implies that standards of the respective other field are becoming more and more important, resulting in an increasing need for co-ordination.

Figure 2: The ICT standardization universe today (excerpt)

Co-ordination between Standards Setting Bodies

The increasingly complex web of SSBs, in conjunction with the equally increasing interdependencies between different ICT systems, and between applications and ICT infrastructure, implies an urgent need for co-operation and division of labour between the SSBs active in ICT


[7] Alliance for Telecommunications Industry Solutions.
[8] Telecommunication Technology Committee.


standardization. This has also been recognized by the European Commission, which observes that “... consortia and fora are playing an increasing role in the development of standards” (EC, 2004). Today, various forms of co-operation between SSBs may be found.

In the realm of SDOs, ‘horizontal’ co-operation between the international SDOs (ITU, ISO, IEC) is regulated by a dedicated guide for co-operation between ITU-T and JTC1 (ITU, 2001). However, the document also makes it very plain that “By far, the vast majority of the work program of the ITU-T and the work program of JTC 1 is carried out separately with little, if any, need for cooperation between the organizations”. Similarly, the CEN/CENELEC/ETSI Joint Presidents’ Group (JPG) co-ordinates the standardization policies of the ESOs based on a basic co-operation agreement (CEN, 2001). Moreover, Directive 98/34/EC (EC, 1998) mandates that conflicting standards be withdrawn. This is managed internally by each ESO, between the three bodies (through cross-representation at General Assemblies and co-ordination bodies), and ‘vertically’ with their members, the NSOs.

‘Vertical’ co-operation between ESOs and the international bodies is governed by individual documents. Here, the need for co-operation and co-ordination is primarily sector-specific. The ‘Vienna Agreement’ (ISO, 2001) provides the rules for co-operation between CEN and ISO; analogously, the ‘Dresden Agreement’ [9] governs relations between IEC and CENELEC. Somewhat surprisingly, only a rather more informal Memorandum of Understanding (MoU) exists for the co-operation between ETSI and the ITU [10]. On the other hand, and also somewhat unexpectedly, a dedicated agreement guides the relations between ETSI and IEC [11]. In general, the ‘vertical’ agreements and MoUs (i.e., those between ESOs and the international bodies) define various levels of co-operation and co-ordination, albeit in comparatively vague terms.
Nonetheless, co-operation between CEN and ISO, and between CENELEC and IEC, has been very successful in many cases, primarily through joint working groups and co-located meetings. In contrast, the documents governing the respective ‘horizontal’ co-operations are far more rigorous. This holds particularly for the European Directive that regulates the relations between the three ESOs. Figure 3 summarizes the existing formal relations between the international and the European SDOs.

[9] http://www.iec.ch/about/partners/agreements/cenelec-e.htm.
[10] http://www.itu.int/ITU-T/tsb-director/mou/mou_itu_etsi.html.
[11] http://www.iec.ch/about/partners/agreements/etsi-e.htm.


Figure 3: Co-operation and co-ordination agreements between European and international SDOs

ETSI Partnership Projects [12] represent a different approach to co-ordination. Covering both SDOs and consortia, such projects co-ordinate a group of regional SDOs and industry consortia working towards a common objective. The ‘3rd Generation Partnership Project’ (3GPP) is the most prominent example.

In the e-business sector, a specific MoU (ITU, 2000) exists between ISO, IEC (the ‘parent’ organizations of JTC1), ITU, and UN/ECE [13]. In addition, a number of organizations have been recognized as participating international user groups. The objective of the MoU is to encourage interoperability. To this end, it aims to minimize the risk of conflicting approaches to standardization, to avoid duplication of efforts, to provide a clear roadmap for users, and to ensure inter-sectoral coherence. Most notably, its ‘division of responsibilities’ identifies a number of key tasks and assigns a lead organization (one of the four signatories) to each of them.

Overall, the co-ordination of the work of the SDOs appears to be reasonably well organized [14]. This does not necessarily hold for the co-ordination between SDOs (and ESOs in particular) and standards consortia. Numerous co-operations do exist; however, the current situation can best be described as piecemeal, as there is no overarching framework to organize the individual co-operations.

An initiative taken by the three ESOs is another promising development. The ICT Standards Board (ICTSB) aims to co-ordinate specification activities in the field of ICT. In addition to the ESOs, the ICTSB membership comprises major standards consortia active in the e-business domain, including, for example, ECBS (the European Committee for Banking Standards), ECMA International (Standardizing Information and Communication Systems), OASIS, the Object Management Group, RosettaNet, The Open Group, and the World Wide Web

[12] "Where appropriate, ETSI will base its activities on Partnership Projects committed to basic principles such as openness, clear Intellectual Property Rights (IPR) policy and financial co-responsibility, to be established with partners of any kind (global and regional, Standards Development Organizations (SDOs) and Fora, etc.)" http://www.etsi.org/etsi_galaxy/worldwide/partnership/partnership_a.htm.
[13] The United Nations Economic Commission for Europe.
[14] There have been exceptions, though, which need to be avoided in the future. For example, the IEEE 802.11a/b/g activities and ETSI’s HIPERLAN/2 covered the same ground and were in direct competition (ETSI ‘lost’).


Consortium. Its approach is quite similar to the one adopted by the MoU on e-business standardization, albeit broader in scope.

Another relevant co-ordination mechanism is that of ‘Publicly Available Specifications’ (PAS). The ISO directives state that “... constitutional characteristics of the [PAS-submitting] organization are supposed to reflect the openness of the organization and the PAS development process” (JTC1, 2004). The PAS procedure is a means for JTC1 to transpose a specification more rapidly into an international standard. The specification starts out as a Draft International Standard (DIS) which, if approved by the JTC1 members, immediately acquires the status of an International Standard (IS) (Egyedi, 2000). This mechanism has primarily been designed to enable JTC1 to transpose specifications that originated from consortia into international standards. In this capacity, it also serves as a mechanism that at least contributes to the co-ordination of work done within consortia and within the world of formal SDOs.

With respect to the co-ordination between individual consortia, the situation is even worse. Here as well, co-operation occurs at the level of working groups (if at all) rather than at SSB level. In most cases, however, the world of standards consortia experiences more competition than co-operation. There is direct competition between consortia covering similar ground, for instance between RosettaNet and ebXML, and between the Semantic Web Services Initiative (SWSI) and the W3C.

Categorising SSBs and Standards Users [15]

The high complexity of the ICT standardization landscape implies that organizations wishing to become active in standards setting (for whatever reason) need to consider their options very carefully. For one, the pros and cons of joining the standardization bandwagon versus trying to push a proprietary solution need to be taken into account. Standards-based products or services may imply price wars and lower revenues, but may also open new markets and widen the customer base. Offering a proprietary solution may yield (or rather, keep) a loyal customer base, but may also result in technological lock-in and, eventually, marginalization.

Once a firm has decided to go for a standard, it normally wants to make sure that the ‘right’ standard emerges. Yet what exactly characterizes the ‘right’, or at least a ‘good’, standard is far from clear. Indeed, different companies may well have very different views here, largely depending on factors such as their respective technological base, corporate strategies, and business models. These determine the level of involvement in standards setting (an organization wishing to create a new market in a certain domain is likely to adopt a different approach to standards setting than a company which only needs to gather advance intelligence for its business), and also the best platform for doing so (that is, the selected standards setting body’s characteristics should be compatible with the company’s goals).

Standardization may thus be seen as an interface between technical and non-technical (e.g., economic, organizational, and even social) factors. Standards are not only rooted in technical deliberations; they also result from a process of social interactions between the stakeholders and, probably most notably, reflect the economic interests of the major players.

[15] Adapted from [Jakobs, 2007].


Categorising SSBs

SSBs can be categorized according to very different criteria. The most popular, albeit not particularly helpful, distinction is between formal SDOs and consortia. Typically, the former are said to be slow, compromise-laden, and in most cases unable to deliver on time what the market really needs. In fact, the formation of consortia was originally seen as one way of avoiding the allegedly cumbersome processes of the SDOs, and of delivering much-needed standards on time and on budget. Consortia have been widely perceived as being more adaptable to a changing environment, able to enlist highly motivated and thus effective staff, and to have leaner and more efficient processes. Accordingly, while attributes associated with SDOs include, for example, ‘slow’, ‘consensus’, and ‘compromise-laden’, consortia are typically associated with ‘speed’, ‘short time to market’, and ‘meets real market needs’.

However, it is safe to say that this classification, including the over-simplifying associated attributes, is not particularly helpful for organizations that want to get a better idea of what the market for standards has to offer. This holds all the more as an organization’s requirements on an SSB very much depend on a combination of factors specific to this particular organization. Accordingly, a more flexible approach towards classification was adopted. Rather than pre-defining certain categories, a set of attributes has been identified that can be applied to describe SSBs. This description can then be matched against an organization’s requirements on SSBs, thus allowing companies to identify those SSBs that best meet their specific needs. The attributes for the description of a Standards Setting Body fall into four categories (adapted from, and based upon, [Updegrove, 2005]):

• General
• Membership
• Standards setting process
• Output

The attributes associated with each of these categories will be discussed below.
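The matching of SSB attribute descriptions against an organization’s requirements is described here only in prose; it can be sketched in a few lines of Python. All attribute names, ratings, and weights below are invented purely for illustration – the paper prescribes no concrete scoring scheme:

```python
# Illustrative sketch only: attribute names, ratings (0-5), and weights
# are hypothetical; the paper does not define any concrete scoring scheme.

# Each SSB is described by attributes drawn from the four categories
# (general, membership, process, output), rated on a 0-5 scale.
ssb_profiles = {
    "FormalSDO":  {"openness": 5, "speed": 2, "consensus": 5, "ipr_clarity": 4},
    "Consortium": {"openness": 3, "speed": 5, "consensus": 2, "ipr_clarity": 3},
}

# An organization weights each attribute according to its own goals
# (e.g., a 'Leader' chasing a window of opportunity values speed highly).
requirements = {"openness": 0.2, "speed": 0.5, "consensus": 0.1, "ipr_clarity": 0.2}

def match_score(profile, weights):
    """Weighted sum of attribute ratings; a higher score means a better fit."""
    return sum(weights[attr] * rating for attr, rating in profile.items())

# Rank the SSBs by how well they match this organization's requirements.
ranked = sorted(ssb_profiles,
                key=lambda name: match_score(ssb_profiles[name], requirements),
                reverse=True)
print(ranked[0])  # with these speed-heavy weights, the consortium ranks first
```

A real assessment would of course involve many more attributes (IPR policy details, liaisons, membership composition) and a good deal of qualitative judgment; the point is merely that an explicit attribute profile makes the comparison systematic and repeatable.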
‘General’ Attributes

These attributes serve to provide some high-level information about the working environment an SSB has defined for itself. The form of governance chosen, for instance, provides information about which body, and who, makes the ultimate decisions, which in turn may help reveal the level of transparency of the SSB’s decision-making process. This is also of interest to those who wish to exert a certain level of influence. Finance and staffing are important for an evaluation of an SSB’s ability to survive. They are also valuable indicators of the commitment of the SSB’s (leading) members: if these are prepared to invest (heavily) in its activities, they are also likely to try and make sure that the objectives are met.


The IPR (Intellectual Property Rights) policy adopted may have a significant impact on the attractiveness of an SSB to holders of relevant IPR. An SSB needs to find a reasonable balance here: the policy must neither deter IPR holders, who may be afraid of losing valuable assets, nor potential users, who may be afraid of implementing a standard with high licensing fees attached to it. Thus, this policy may also have implications for the level of openness envisaged by the SSB.

The latter also holds for the number and types of an SSB’s liaisons. They are a good indicator of an SSB’s openness towards relevant work done elsewhere. Moreover, liaisons are one means of co-ordination (see above), thus at least somewhat reducing the risk of standardising on a technology that is at odds with other standards.

The level of competition an SSB faces indicates one aspect of the risk associated with going for its standards, with a high level suggesting a high risk of eventually being stranded with a losing technology. Conversely, a ‘monopoly’ situation may indicate a reasonably safe bet. Along similar lines, a good reputation of an SSB (albeit possibly somewhat hard to quantify) may suggest higher chances that its output will succeed in the market (see chapter 5 for a more detailed discussion relating to this aspect).

‘Membership’ Attributes

Information on the membership base of an SSB is relevant with respect to the level of its openness, and to its decision-making process (both formal and informal). A small number of hand-picked members, for instance, or membership levels with very different associated fees and rights, suggest a rather closed group of decision-makers (possibly despite a huge overall membership base). Likewise, the membership base may reveal an SSB’s support of the needs of a specific clientele (e.g., large manufacturers). The overall number of members serves as a very rough first indication of the prospects of an SSB’s output.
A broad membership base may provide valuable support for a standard. More important than the number of members, however, is the ‘quality’ of the membership. That is, an SSB’s chances of being successful in the market are much better if large potential users and major vendors/manufacturers or service providers are among its members, and thus likely to support its output. In addition, the level of membership of these companies is of interest: it indicates whether they are only interested in, e.g., intelligence gathering, or whether they want to play an active role in the standardization process, and in the SSB in general. Who actually works actively in an SSB is probably even more important. A company’s active participation in an SSB’s standards setting process is a very good indicator of that company’s support of the SSB’s standards setting activities.


Finally, the individual member representatives may be supposed to act as corporate representatives, or in an individual capacity. In the latter case, the points listed above may become slightly less relevant, as it is not necessarily ensured that WG members actually represent the corporate goals of their respective employers.

‘Standards Setting Process’ Attributes

An SSB’s standards setting process not only reflects its ability to quickly adapt to a changing environment and newly emerging requirements, to meet a window of opportunity, or to support real-world implementations. It also shows the level of ‘democracy’ considered desirable by the SSB, and again, whether or not certain stakeholders are more equal than others. A high level of ‘democracy’, in turn, may be attractive for some stakeholders, but a deterrent for others.

‘Time’ is a crucial factor for many standards setting initiatives. That is, in most cases standardization should be at least in sync with the technical development, maybe even ahead of it. This does not necessarily hold for infrastructural technologies (such as ISDN), where getting everything right the first time is more important than speed (see [Sherif, 2003]). In any case, lagging behind for too long will make a standard irrelevant for most purposes. In fact, ‘shorter time to market’ has always been one of the major arguments in favour of consortia. Also, meeting a window of opportunity is a crucial success factor for a potential standard. Accordingly, the time it takes from the submission of a proposal to form a working group on a specific topic until the final acceptance of the standard is an important factor. This time span, in turn, comprises three elements:

• the time it takes to establish a working group,
• the time it takes this WG to do the work, and
• the time for the final ballot.

Obviously, this depends very much on, for example, the level of consensus sought, and on the decision mechanisms adopted by the respective SSB. That is, there are other aspects of an SSB’s standards setting process that may be of interest to potential proposers, that may have a negative impact on a process’ duration, and that need to be addressed as well. In particular, these include the degree of openness of a standards setting process, its transparency, the required level of consensus, and the observation of due process. Basically, these attributes describe the level of ‘democracy’ observed by a standards setting process. Are the elements of the process, the decisions taken, and the reasons for these decisions well documented and available? Does everyone have the right to speak, and to be listened to? Is there a way to appeal against a decision, and how does it work? Which level of consensus is required (e.g., at working group level, at membership level)? In many cases, it will be necessary to balance the requirement for speed against the need for a broad consensus.

In many instances, a standards setting process should not stop once a standard has been described on paper. Other aspects may be at least as important as the base standard. Most prominently, these include the availability of interoperable implementations of a standard, and



proof of an implementation’s conformance with the standard. Whether or not an SSB’s process requires the former, or whether the SSB provides for the latter, may well be important aspects to be considered.

‘Output’ Attributes

Finally, the types of deliverables produced also give an indication of an SSB’s flexibility. For instance, full-blown formal standards indicate a lengthy, democratic, consensus-based process, whereas technical reports or similar types of deliverables suggest a faster, more adaptable process with a lower level of consensus. Information about the number of implementations shows the relative ‘importance’ of an SSB, as does, to a certain degree, the fact that it is an accepted PAS submitter to ISO. The latter also indicates an SSB’s willingness to meet the associated requirements on its process. A standard that is maintained, and possibly developed, over time suggests that it is envisaged to be long-lived, and also says something about the SSB’s willingness to adapt its deliverables to changing environments.

In order to improve a standard’s chances of success in the market, it will help if it originated from a well-accepted source. The number of implementations of other standards from an SSB may serve as one indicator of this SSB’s ‘credibility’. Also, the free availability of a standard’s specification may help disseminate it more widely.

In some instances, especially for more long-term planning, it may be of interest whether or not an SSB maintains its standards, or whether it has adopted a ‘fire and forget’ approach. A standard’s maintenance will need to cover, for example, the addition of technical corrigenda, of addenda covering additional functionality, and maybe eventually the release of a follow-up version of the standard. In each of these cases, backward-compatibility has to be ensured. A well-managed maintenance process is extremely helpful if longevity and adaptability of a standard are of concern.
Along similar lines, an SSB should make sure that a new standard does not contradict other, established ones. At the least, it should have a mechanism in place to ensure the consistency of its own standards; ideally, this should extend to all standards (although this will be next to impossible to achieve). Last, but not least, an SSB might want to consider the impact a standard might have. While hard to do, this might be a worthwhile exercise that may well save serious money which might otherwise be wasted on a standard with little or no chance of success in the market.

Categorising Users of Standards

For a classification of standards users, we need to look at their respective motivations for active participation in the standards setting process. In most cases, the respective levels of interest of companies wishing to get involved in a new standards setting activity will differ widely. For some, the nature of a standard, or even the fact that a new standard will materialize at all, may be a matter of life or death. For others, an emerging new standard may be of rather more academic interest.


Still at a fairly general level, prospective participants in a standardization activity may be subdivided into three categories: ‘Leader’, ‘Adopter’, and ‘Observer’ [16]. The motivation to actively participate in standards setting, and for joining (or maybe even establishing) an SSB, will be very different for members of each individual category, and may be summarized as follows:

• Leaders: These are companies for which participation in a certain standards setting activity is critical. They may even create a new consortium to establish a platform for the standardization work they consider crucial. They are prepared to make a large investment in such an activity. For these companies, the strategic price of not participating in a given standards effort can far outweigh its costs. ‘Leaders’ aim to control the strategy and direction of a consortium, rather than to merely participate in its activities. Large vendors, manufacturers, and service providers are typical representatives of this class.

• Adopters: Such companies are less interested in influencing the strategic direction and goals of the consortium. Adopters are more interested in participation than in influence (although they may want to influence individual standards). Large users, and SME vendors and manufacturers, are typically found here.

• Observers: Such companies’ (and individuals’) main motivation for participation is intelligence gathering; they do not want to invest any significant resources in the effort. Typically, this group comprises, for instance, academics, consultants, and system integrators.

Leaders

When deciding between joining an existing SDO or consortium (the latter preferably as a founding member; in most cases founding members have a greater say concerning the goals and strategies of a consortium) and founding a new one, Leaders specifically need to analyse an SSB’s governance: does it provide for the level of influence they want to exercise? Or is a strong group with incompatible goals already well established, and likely to block any new activities? The IPR policy is also of crucial importance: with too lenient a policy, many important players may be hesitant to join; too restrictive a policy may prevent users from adopting any standards of this SSB. In addition, Leaders will need to carefully analyse several characteristics of an SSB they are considering joining, and match them to their strategic goals. The most important of these characteristics are summarized in Table 1 below.

[16] Adapted from, and based upon, [Updegrove, 2005].


Table 1: Leaders’ criteria

Strategic goal: To create a market. Most important SSB characteristics:

• Governance: Does it provide for strong influence of interested players? Or is it rather more ‘egalitarian’?
• Finance: Are finances sound? Will the SSB have the stamina to survive the process? Does it depend heavily on individual entities/contributors?
• IPR policy: Is the IPR policy adequate? Will it eventually put off users who are afraid of high licensing fees? Will it deter holders of important IPR from joining?
• Reputation: Is the SSB well respected in the area in question? Related to that: are its standards widely implemented?
• Competition: Are there competing SSBs? Are competitors likely to emerge, or are all relevant players members?
• Membership levels: Does the highest membership level available guarantee the necessary level of influence? Who else is at this level? Are leading users represented in the ‘upper’ levels?
• Key players involved: Who are the active players, and which roles do their representatives assume (individual capacity / company rep)? Are the ‘right’ companies represented? Are all relevant stakeholders represented? Are leading users on board? Are any key players missing? Is the combined market power adequate?
• Timing: Will I be able to meet a window of opportunity?

Strategic goal: To create a (successful) standard. Most important SSB characteristics:

• Governance: Does it provide for strong influence of interested players? Or is it rather more ‘egalitarian’?
• Finance: Are finances sound? Will the SSB have the stamina to survive the process? Does it depend heavily on individual entities/contributors?
• IPR policy: Is the IPR available inside the SSB adequate, or is licensing of third-party IPR necessary?
• Reputation: Is the SSB well respected in the area in question?
• Membership: Are there potential allies/opponents? Is adequate technical expertise available, at both corporate and individual level?
• Key players involved: Is the combined market power adequate? Are relevant stakeholders represented? Are important stakeholders absent?
• Timing: How long will it take to develop a standard? Will the window of opportunity be met?
• Process characteristics: Can the process be used against me, e.g., to delay the standard? For how long? What are the decision mechanisms?
• Products: Does the SSB offer an appropriate type of deliverable?
• Dissemination: Will the specifications (and possibly reference implementations) be available for free?

In addition to the ‘positive’ goals identified above, the analogous ‘negative’ goals may also be observed. That is, preventing the creation of a new market, or of a successful standard, may also be a strategic goal of an organization. In both cases, the considerations concerning the



important characteristics of an SSB remain the same. The same applies to the considerations below.

Adopters

Most companies will be in this category. Their goals will be rather more tactical than strategic. Accordingly, they will aim at technically influencing the actual standard rather than the market, and would like the new standard to be in line with their own developments. In addition, they will want to gather specific intelligence early on, and maybe adapt their developments accordingly. Another motivation for Adopters to actively participate in standards setting may be the desire to share development costs by moving part of this work into the standards body (see also Table 2). Given the above goals, companies in this group tend to go for full rights of participation in all technical activities, but may be less interested in influencing the strategic direction of the efforts and goals of the SSB.

Table 2: Adopters’ criteria

Strategic goal: To influence standard development. Most important SSB characteristics:

• Governance: Does it provide for strong influence of interested players? Or is it rather more ‘egalitarian’?
• Membership: Is a level available that provides for adequate influence? Who else is at this level? Who are the ‘active’ members?
• Key players involved: Are the important players on board? Who are potential strong opponents or allies?
• Individuals’ capacity: Do I need to know the individual reps and their views, and the roles they are likely to assume?
• Required level of consensus: Is it possible to exploit the consensus requirement in order to delay the process or to cripple the outcome?

Strategic goal: To share development costs. Most important SSB characteristic:

• Membership: Are enough (important) members with similar interests on board, at an adequate membership level (to indicate sufficient interest)?

Strategic goal: To gather specific early intelligence. Most important SSB characteristic:

• Membership: Is a level available that offers a good RoI, i.e. one that gives access to all relevant information without costing a fortune?

Observers

Many companies and individuals will have a need to know what an SSB is working on, but will not be interested (or will not have the means) to actively participate in any form. That is, their main interest lies in the gathering of general knowledge (Table 3; important, for instance, for consultants).


Table 3: Observers’ criteria

Strategic goal: To gather general (early) intelligence. Most important SSB characteristic:

• Membership: Is a level available that offers a good RoI, i.e. one that gives access to all relevant information without costing a fortune?

Summary and Some Final Remarks

In both the real and the virtual worlds, standards are a sine qua non. Yet, for a variety of reasons, the standards setting environment in the ICT domain is extremely complex. As a result, those who wish to actively contribute to standards setting face the problem of identifying those SSBs that best meet their technical and/or business requirements. An organisation’s decision about which SSB(s) to select as the platform of choice for its contributions to the standardisation process needs to be based on a variety of criteria. Some of these criteria will be related to the SSB’s characteristics. Others will be more closely associated with the potential standards setter’s visions, goals, and business models. This paper has tried to provide some guidelines to those who wish to go down that potentially thorny, and certainly costly and time-consuming, yet ultimately beneficial path.

On a brighter note, in some instances the process of identifying the most appropriate SSB is fairly straightforward. If you want to develop new protocols for the Internet, the IETF will almost certainly be the SSB of choice. And if you want to set new standards for local area networks, you will in all likelihood contribute to one of the various IEEE 802 working groups. Unfortunately, things are not that simple in the case of standards for Virtual Worlds. Here, the identification of the most suitable SSB is hampered by the fact that currently rather few SSBs are addressing VW-specific problems. ISO is one of the exceptions; a number of its standards, though not necessarily specifically designed for Virtual Worlds, have been useful for VWs [17]. Other major SSBs that have developed relevant standards include the IETF and the W3C. My guess is that the development of standards specifically for VWs will require dedicated consortia.
While many ‘real-world standards’ may be applied to VWs, some problems (like, for example, teleporting between worlds, ownership of objects, and identity) will require very specific standards without any equivalent in the real world. Perhaps such standards setting activities could – at least partly – be located in a Virtual World.

[17] Including, for instance, VRML (ISO/IEC 14772), X3D (ISO/IEC 19775-7), PLIB (ISO/TS 10303-1291), and a number of the MPEG series of standards.



Journal of Virtual Worlds Research- Real Standards for VWs 21

Bibliography

Besen, F.M. (1995). The standards process in telecommunication and information technology. In: Hawkins, R.W. et al. (eds.), Standards, Innovation and Competitiveness, pp. 136-146. Cheltenham, UK; Northampton, US: Edward Elgar Publishers.

CEN (2001). Basic co-operation agreement between CEN, CENELEC and ETSI. Retrieved 23 September 2009 from: http://www.cen.eu/boss/supporting/reference+documents/basic+cooperation+agreement++cen+clc+etsi.asp

EC (1998). Directive 98/34/EC – Procedure for the provision of information in the field of technical standards and regulations. Retrieved 23 September 2009 from: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:1998:204:0037:0048:EN:PDF

EC (2004). The role of European standardisation in the framework of European policies and legislation. Communication from the Commission to the European Parliament and the Council, COM(2004) 674. Retrieved 23 September 2009 from: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2004:0674:FIN:EN:PDF

EC (2009). Modernising ICT Standardisation in the EU – The Way Forward. COM(2009) 324 final. Retrieved 23 September 2009 from: http://ec.europa.eu/enterprise/sectors/ict/files/whitepaper_en.pdf

Egyedi, T. (2000). Compatibility strategies of licensing, open sourcing and standardisation: The case of Java. Proc. 5th Helsinki Workshop on Standardization and Networks. Retrieved 23 September 2009 from: http://www.vatt.fi/file/vatt_publication_pdf/k243.pdf

ICTSB (2005). Critical issues in ICT Standardization. Retrieved 23 September 2009 from: http://www.ictsb.org/Working_Groups/ICTSFG/Documents/ICTSFG_report_2005-0427.pdf

ISO (2001). Agreement on Technical Co-operation Between ISO and CEN (Vienna Agreement). Retrieved 23 September 2009 from: http://publicaa.ansi.org/sites/apdl/Documents/.../ISOCEN%20VA.pdf

Jakobs, K. (2006). ICT Standards Research – Quo Vadis? (invited paper). Homo Oeconomicus 23(1), pp. 79-107.

Jakobs, K. (2007). ICT Standards Development – Finding the Best Platform. In: Proc. Interoperability for Enterprise Software and Applications, I-ESA'06, pp. 543-552. Berlin, Heidelberg, Germany: Springer Publishers.

Jakobs, K. (2008). ICT Standardisation – Co-ordinating the Diversity. In: Proc. 'Innovations in NGN – Future Network and Services. An ITU-T Kaleidoscope Event', pp. 119-126. Piscataway, NJ, USA: IEEE Press.

Jakobs, K. (2009). Perceived Relation Between ICT Standards' Sources and Their Success in the Market. In: Jakobs, K. (ed.), Information and Communication Technology Standardization for E-Business: Integrating Supply and Demand Factors, pp. 65-80. Hershey, PA: IGI Global.



JTC1 (2004) (eds.). ISO/IEC Directives, Procedures for the technical work of ISO/IEC JTC 1 on Information Technology (5th Edition). Retrieved 28 September 2009 from: http://isotc.iso.org/livelink/livelink/3959538/Jtc1_Directives.pdf?func=doc.Fetch&nodeid=3959538

Sherif, M.H. (2003). When Is Standardization Slow? International Journal on IT Standards and Standardization Research, vol. 1, no. 1, pp. 19-32.

Swann, P. (1990). Standards and the Growth of a Software Network. In: Berg, J.L.; Schummy, H. (eds.), An Analysis of the Information Technology Standardization Process, pp. 383-394. Amsterdam, New York, Oxford, Tokyo: North-Holland.

Updegrove, A. (2005). Evaluating Whether to Join a Consortium. Retrieved 28 September 2009 from: http://www.gesmer.com/publications/article.php?ID=11

Werle, R. (2001). Institutional aspects of standardization: Jurisdictional conflicts and the choice of standardization organizations. Journal of European Public Policy, vol. 8, no. 3, pp. 392-410.



Volume 2, Number 3 Technology, Economy, and Standards October 2009

Virtual World Interoperability: Let Use Cases Drive Design
By Jon Watte, Forterra Systems

Abstract

This article examines the history of virtual world interoperability as evidenced through early systems like DIS and HLA, current systems such as Second Life / OpenSim teleport and OLIVE simulation interoperability, and the future, interconnected metaverse. The article argues that “serious” virtual worlds will be the initial market that drives true virtual world interoperability because of their particular needs. Based on this claim, a comprehensive approach to standards-based virtual world interoperability is described.

Keywords: virtual worlds; interoperability; metaverse.

This work is copyrighted under the Creative Commons Attribution-No Derivative Works 3.0 United States License by the Journal of Virtual Worlds Research.


Journal of Virtual Worlds Research- VW Interoperability 4

Virtual World Interoperability: Let Use Cases Drive Design
By Jon Watte, Forterra Systems

Virtual worlds are slowly creeping into our daily lives. While some early adopters have been using them for entertainment, research and training over the last 20 years, virtual trade shows and online conferencing with user avatars are now putting them front and center on the desktops of workers around the world. However, while a "walled garden" virtual world may be useful in and of itself (much like a cell phone that can only call customers of the same carrier), the real usability explosion will come when the different virtual worlds start talking to each other, just as cell phones can call any phone number in the world, no matter who the destination carrier or operator is. In fact, telephony has grown into a large infrastructure used for conference calls, IP telephony, telefax, and even video calls in some parts of the world. This growth could not have happened without interoperability between systems, operators and technologies; that interoperability allowed the main feature of the telephone (carrying a band-limited signal between two endpoints) to spread everywhere. By comparison, interoperability between virtual worlds that would only allow, for example, the same virtual currency to be used in different places would not enable the same level of widespread use; the meat of a virtual world is its ability to support spatially based interaction between users, and between users and the simulated world.

Virtual Worlds in Context

It is important to be clear on the context within which a given argument is made. Without understanding and making clear the underlying assumptions and history of an argument, the argument can easily be misunderstood or simply not appear relevant. In order to mitigate that problem, I will describe the context of virtual worlds used in this article. I will start by narrowing down the definition of the kind of virtual world I want to discuss.
In discussion, social web sites such as Facebook or LinkedIn can arguably be classified as virtual worlds. After all, they provide interactivity, a meeting place for users, persistence of user-initiated changes, and a rule set under which interactions are made – all of which are traits seen in most virtual worlds in use today. However, I argue that broadening the definition of virtual worlds to include 2-D web sites like Facebook is not meaningful, because the mode of interaction is very different from a 3-D virtual world like AlphaWorld, Project Entropia, or There.com. Any attempt to find commonality between these worlds will fall back to simple, web-based, transactional interactions, for which standards already exist or are at least emerging (technologies from EDI to OpenID to SOAP fall into this category). Instead, to separate virtual worlds from web-based social spaces, I will focus on virtual worlds that include real-time, 3-D, physically based interaction between users. As any virtual world user will tell you, this real-time, 3-D, physically based interactivity is a major part of what makes a virtual world special. Human beings have evolved to have acute spatial awareness, and relate to objects in the environment in 3-D. VR research [i] from the 1980s and 1990s shows that a physically based 3-D virtual world draws upon this awareness in a way that flat services cannot, and thus delivers more immersion and a sense of presence.



Another distinguishing factor of virtual worlds is the ability of users to modify the environment in a persistent way. Unlike 3-D games like World of Warcraft, Counterstrike, or EVE Online, a virtual world allows users to make permanent modifications to the environment and objects in the world, and generally to introduce artifacts that change the simulation of the world more or less permanently. For example, in Second Life, a user can create a new object and attach a script that flings any user who stands on the object into the air – in effect introducing a user catapult. Because of this, the main attraction of a virtual world for entertainment is the content that the users can create themselves – be it a virtual mansion, night club, or Rube Goldberg-style contraption. By contrast, a 3-D game, even though it may feature thousands of users in a physically simulated 3-D world, does not generally allow persistent modification of the world by players.

Within this paper, I will separate two sub-categories of virtual worlds, the usage of which differs sufficiently to warrant such separation. In an “entertainment” virtual world, users attend the world in order to enjoy themselves. The entertainment virtual world is a destination or mode for an experience, much like a movie theater is a destination for an experience, or a phone call to a friend is a mode for an experience. Meanwhile, “serious” virtual worlds are made to achieve specific goals related to training, education, collaboration, or other day-to-day work-based interaction. In this case, it is not the experience that is the main take-away; it is the outcome of the collaboration (lessons learned, meeting deliverables, etc.). From a market point of view, an entertainment virtual world may compete with a real-life bar or night club, or perhaps watching TV, whereas a “serious” virtual world competes with a classroom, a conference call, or an in-person meeting.
One formulation of the difference between 3-D virtual worlds and other online interactive or semi-interactive technologies is the concept of “3D3C,” although in that formulation the third “C” (in-world Commerce) is more a requirement for current entertainment worlds than for current business worlds, because of the different usage modes. The two other “C”s map well to both entertainment worlds and serious worlds: Community is the users who interact, and Creation is the interaction with the environment and the actual work being done.

Previous Virtual World Interoperability

To better understand where we want to go, it is useful to understand where we have been and what we have learned so far. This necessitates a brief overview of the capabilities and technologies used for virtual world interoperability so far. For the past five years, my work at Forterra Systems has involved interoperating between our enterprise virtual world platform OLIVE and a number of other systems. At the same time, our licensee Makena Technologies operates the entertainment virtual world There.com, giving us a good view of the needs and desires of entertainment users and operators. Based on this experience, as well as on following the market in general, I have concluded that entertainment virtual worlds do not have a huge demand for interoperability from the end users. This is important because, in the end, if there are no users willing to drive and fund interoperability work, such work is unlikely to be successful. To put it another way: when asked “how much would you pay to be able to teleport from There.com to Second Life and back again, without switching client applications?”, the overwhelming majority of users would answer “not much.”



By contrast, all of the enterprise virtual world integrations we have made so far have incorporated some form of interoperability. That interoperability may be simple, such as authenticating users against an existing LDAP database or providing the ability to call into and out of the public telephone network (typically using a SIP gateway), or complex, such as the ability to plug in a third-party physiology model to simulate the health of avatars when running exercises for medical training. When enterprise customers are asked how much they are willing to pay for interoperability, the answer is generally “it’s a crucial requirement.”

From this experience, I have learned that the main area of interoperability need that is underserved for virtual worlds is the interoperability of entities, where “entities” are defined as objects that generate forces or interactions in the world – avatars, vehicles, communications equipment, etc. By contrast, non-entity objects in the world are “dumb” objects, such as rocks, trees, and buildings. While a rock may fall and tumble based on gravity and collision, it does not introduce any behaviors of its own into the world.

For entity interoperability, we have had great success using the DIS protocol (IEEE 1278 [ii]). This protocol grew out of the work that the United States Department of Defense (DOD) did in the 1970s and 1980s on military simulation interoperability. The need, at the time, was to couple different simulators (for systems like army tanks, airplanes, ships, and satellites) together, so that the operator of a flight simulator could see friendly and enemy tanks on the ground, and even interact with them (mainly through weapons systems and sensors). In this model, each simulated system (each individual tank or airplane) was its own simulator, receiving telemetry from all the other simulators and using dead reckoning to interpolate the position of those entities between updates.
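This per-entity extrapolation can be sketched in a few lines (a generic first-order dead-reckoning illustration; the function name and example vectors are mine, not taken from the DIS standard):

```python
def dead_reckon(position, velocity, dt):
    """First-order dead reckoning: extrapolate the last known position
    forward by dt seconds, assuming velocity has not changed since."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

# Last update received 100 ms ago: entity reported at (10.0, 0.0, 5.0) m,
# moving at (2.0, 0.0, 0.0) m/s.
estimate = dead_reckon((10.0, 0.0, 5.0), (2.0, 0.0, 0.0), 0.100)
# estimate is approximately (10.2, 0.0, 5.0); if the entity actually
# turned after the last update, the guess is spatially wrong until the
# next update arrives.
```

DIS also specifies higher-order dead-reckoning variants that add acceleration and rotation terms, but the trade-off is the same: temporal consistency in exchange for some spatial error.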
With dead reckoning, periodic updates of the state of an entity are extrapolated forward by the receiving end to estimate how an object is likely to evolve over time. For example, if I know that you were at a certain position 100 milliseconds ago, and I knew your velocity at that time, I can make a pretty good guess at your current position by adding that velocity, times 100 milliseconds, to the old position. Dead reckoning allows objects to be displayed in a consistent time frame of reference, but trades off accuracy: from the time you make a turn until the time a network message telling me you made that turn reaches me, I will still assume that you are moving forward. The alternative is to display objects using past state, and to update the state of objects only as new updates are received. In highly kinetic activities, such time delay may be much less desirable than the spatial inaccuracy introduced by the “guessing” of dead reckoning [iii].

As technology progressed and computer capability increased, a kind of system known as Semi-Autonomous Forces (SAF) gained in prominence. This kind of system uses algorithms to simulate the behavior of entities at various scales, from an individual dismounted soldier, through platforms like vehicles and ships, all the way up to aggregate entities like battalions. DIS was modified to support the introduction of SAF into a simulation, so that some entities would be driven by user-operated simulators, while other entities would exist only as virtual entities inside the SAF constructive simulation. At the same time, real-time telemetry, made possible through better instrumentation, GPS systems and other technological advances, could be linked into a simulation, providing a virtual view of real-world entities such as airplanes and vehicles.
When the simulated entities are fed back into the real world entities’ display systems, such as heads-up displays in a cockpit, the full integration of Live, Virtual, and Constructive




simulation is achieved. All of this has been done with the DIS protocol, which has proven to be very robust and a good vehicle for interoperability between very different kinds of systems.

In the 1990s, the DOD started building a new simulation interoperability standard known as the High-Level Architecture (HLA), which was later standardized as IEEE 1516 [iv]. Unfortunately, this standard was more concerned with things like supporting constructive and event-based simulation at a non-real-time pace, and thus with vendor-specific solutions to the problem of distributed time management (advancing each federate no further than the GALT [v], the Greatest Available Logical Time), rather than with any goal of “plug-and-play” interoperability between disparate systems. In the end, HLA is an API specification, not a wire protocol, and thus two simulators that want to interoperate have to use the same API implementation. API implementations are commercially available from vendors like MAK Technologies, Pitch, or the large system integrators. Additionally, HLA allows each simulation to define its own object model, using a text-based format describing the FOM (Federation Object Model) to use for the simulation. All in all, this means that hooking up two separate simulators with HLA requires significantly more work than hooking them up using DIS, because DIS is a wire protocol with a well-defined object model, whereas HLA requires re-linking (and in some cases re-compiling) as well as FOM mapping to work right. For those of us mostly interested in real-time interoperability, it is generally understood among many practitioners that HLA does not meet interoperability requirements as well as DIS does in practice. While we at Forterra have made sure that our system can interoperate using HLA, no customer of ours has yet actually used that particular technology.
Since 2005, the On-Line Interactive Virtual Environment (OLIVE) from Forterra Systems has been able to participate in a DIS simulation, exchanging vehicle, avatar and fire/explosion data with live, virtual and constructive simulators inside the DOD. It is even possible to join two separate OLIVE systems (or other virtual worlds using DIS, if they were available) into the same simulation, achieving a high degree of interactive interoperability between different virtual worlds. This positive experience suggests a fruitful way forward for future virtual world interoperability, which I will discuss below.

Almost every installation of OLIVE now comes with some sort of interoperability, and the main form requested is one where multiple systems are merged together to form a “super-system” that integrates the capabilities of all systems into a richer capability, affording users the benefits of all the systems that are integrated. Organizations as diverse as Accenture Consulting, InWorld Solutions, and ACS are finding that virtual worlds are often more effective than traditional means of meeting and collaborating (such as conference calls or video conferencing), and can often deliver something close to the experience of an in-person meeting at a fraction of the cost. Often, the cost is even lower than that of a phone conference!

Teleporting Between Worlds: A Detour

In 2008, members of the OpenSim open source virtual world project showed a demo in which they teleported avatars from a Second Life simulator instance to an OpenSim simulator instance. Unfortunately, the assets involved in representing the avatars were not available at the destination, so all the avatars ended up with the default look. Before transportation, users of the OpenSim simulator could not see the users that were in the Second Life simulator; after transportation, users of the Second Life simulator could not see the users that moved to the



OpenSim simulator. Further, the client from within which teleportation was done used the Second Life client/server protocol, and the source and destination servers both used the Second Life scripting and geometry system. The main thing transported between the two systems was the identity, using an identity authentication system similar to the available OpenID protocol, and a hand-off between servers in which one client was instructed to disconnect from one server and connect to another. While an interesting experiment, the value of the capability is currently low. Interoperability that demands that all parties use the same simulation, networking and rendering technology at a low level is no more interoperability than cell phones that can only call other phones using the same wireless technology. Further, even had the teleport included the details of the avatars (look, behavior, and other details), it is unclear what the added value is worth compared to the users just logging out of the Second Life client and logging on with the OpenSim client. Interoperability, in that guise, is a convenience that saves the user some hassle, but does not deliver any new capabilities compared to a “parallel” or “side-by-side” situation. This is in stark contrast to the very real, additional capabilities that protocols like DIS have delivered to virtual simulations for 20 years or more.

Thus, it is reasonable to believe that more value will be delivered to users if interoperability involving multiple systems at the same time is achieved than if simple “browser” interoperability is achieved, because in the “browser” model only a single virtual world can participate at a time. Leaving one world means leaving all the capabilities of that world behind and taking on different capabilities in the new world. It would be more desirable if you could merge the two worlds, in effect providing some form of union of the capabilities of both.
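The identity-only hand-off described above can be made concrete with a small sketch. All class and method names here are hypothetical illustrations, not the actual Second Life or OpenSim APIs: the point is only that the identity assertion crosses the world boundary while the avatar's assets do not.

```python
class Server:
    """Hypothetical world server in an identity hand-off (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.tokens = {}   # user -> identity token the destination expects
        self.assets = {}   # user -> avatar appearance stored on THIS grid

    def assert_identity(self, user):
        # OpenID-style: the home server vouches for the user's identity.
        return f"token:{user}@{self.name}"

    def expect(self, user, token):
        # Destination pre-authorizes the incoming user.
        self.tokens[user] = token

    def connect(self, user, token):
        if self.tokens.get(user) != token:
            raise PermissionError("identity assertion failed")
        # Identity crossed the boundary, but assets did not:
        return self.assets.get(user, "default avatar")

home = Server("secondlife")
dest = Server("opensim")
home.assets["alice"] = "custom avatar"

# Hand-off: assert identity, pre-authorize at the destination,
# then disconnect from the source and reconnect there.
tok = home.assert_identity("alice")
dest.expect("alice", tok)
print(dest.connect("alice", tok))  # prints "default avatar"
```

Because only the identity travels, the destination falls back to the default look, which is exactly what the 2008 demo showed.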
Use Cases: A Way to Focus

Given that entertainment use of virtual worlds is largely focused on experiences, usually created by other users, the benefits of interoperability between different virtual worlds in that area seem diffusely understood at best. Meanwhile, there is a clear need for interoperability in the world of enterprise and “serious” virtual worlds, where merging systems together creates clear benefit for business users. Thus, it stands to reason that one driver of virtual world interoperability will be just these business users, trying to merge systems together to create a better tool for getting their core job done. Against this background, I have extracted five separate use cases, which I describe in some detail below. It is my hope that these use cases will contribute some focus to the global discussion of virtual world interoperability, and provide food for thought when standards bodies like IEEE, IETF, or MPEG start considering the needs of virtual worlds.

Use Case 1: Friend Invite

1. User A uses virtual world system A that complies with simulation interoperability standards.
2. User B uses virtual world system B that complies with simulation interoperability standards.
3. User A wants user B to visit him/her in system/world A, and gets a suitable URL from his/her system (A), which he/she sends to user B using any transport (mail, IM, integrated communication, carrier pigeon, and more).
4. User B clicks/activates this link in a browser, e-mail client, or similar.




5. After a brief "loading" screen, user B sees user A in user A's environment, including a representative form of any simulated object in that environment.
6. User B can interact at some level with the objects from user A.
7. Objects that user B takes out of inventory show up in some representative form for both user A and user B.
8. User A can interact at some level with any objects that user B brings out of inventory.

The benefit of this use case is that users of different virtual worlds can invite and communicate with each other using the virtual world metaphor, regardless of the particular virtual world technology used for their "home base" virtual world, presumably purchased and supplied by their employer or by one of many third-party virtual world service providers.

Use Case 2: Collaborative Training

1. Company A operates a chemical plant in city B. Company A uses virtual world system A to do simulation/training/command-and-control of its plant.
2. City B has an emergency response organization that uses virtual world system B for training and scenario planning.
3. At a defined time, company A and city B agree to connect their worlds for a defined duration to conduct a training exercise related to a fire in the chemical plant.
4. At the defined time, a representation of the detailed model/simulation of the chemical plant shows up at the right address in the virtual world for the city workers.
5. At the defined time, city workers (ambulances, fire trucks, and others) become visible to the chemical plant workers.
6. Interactions between users of the systems include conversations (voice, simulated radio, PSTN).
7. Interactions between users of the systems include a display of the fire as it propagates based on company A's simulation models.
8. Interactions between users of the systems include the ability for firefighters to pour water (or other agents) onto the fire, and have the simulation respond.
9. Interactions between users of the systems include the ability for city workers to load a chemical plant worker into a city ambulance.
10. At the pre-determined time, the interoperability ends; the city disappears from the company plant model, and the company plant disappears from the city model.
11. Session record/review capability used by the city in virtual world B includes all communications and interactions made in the system, including those internal to company/world A.

The benefit of this use case, in addition to that of the Friend Invite use case, is that interoperability can be limited in time and (virtual) space to protect potentially sensitive information. Additionally, this use case shows the benefit of defining interactions between objects operated by one system and objects operated by another system, leading to synergistic simulation similar to that evidenced by the DIS protocol, but applicable to a broader, non-military audience.
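The time-boxed nature of such an exercise could be modeled as a federation session with an explicit validity window. This is a hypothetical sketch (the `FederationSession` class and its methods are my own invention, not an existing API), showing how remote entities stop being visible once the agreed window closes:

```python
from datetime import datetime, timedelta

class FederationSession:
    """Hypothetical time-limited link between two virtual worlds.
    Outside the agreed window, neither side's entities are visible
    to the other, limiting exposure of sensitive information."""
    def __init__(self, world_a, world_b, start, duration):
        self.worlds = (world_a, world_b)
        self.start = start
        self.end = start + duration

    def is_active(self, now):
        return self.start <= now < self.end

    def visible_entities(self, remote_entities, now):
        # Remote entities are shared only while the session is active.
        return remote_entities if self.is_active(now) else []

# A two-hour joint fire exercise between the plant's world and the city's world.
exercise = FederationSession("plant-world", "city-world",
                             start=datetime(2009, 10, 1, 9, 0),
                             duration=timedelta(hours=2))
print(exercise.visible_entities(["fire-truck-1"], datetime(2009, 10, 1, 10, 0)))  # ['fire-truck-1']
print(exercise.visible_entities(["fire-truck-1"], datetime(2009, 10, 1, 12, 0)))  # []
```

A real implementation would also have to cover the session recording and cross-world interaction rules listed in the use case; the window logic is only the simplest piece.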



Use Case 3: Scene Transfer

1. A user of virtual world A has prototyped an interesting environment.
2. The user decides to donate that prototype to an organization that uses virtual world system B.
3. The user &q