
IT’S ALL A GAME – READY TO PLAY ON THE CAMPUS OF THE FUTURE? Thomas Mengel

WHY WE HAVE TO START WORKING ON AGI GOVERNANCE NOW

By Jerome Clayton Glenn

CEO, The Millennium Project

AN international assessment is needed of how to govern the potential transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI). If the initial conditions of AGI are not “right,” it could evolve into the kind of Artificial Super Intelligence (ASI)* that Stephen Hawking, Elon Musk, and Bill Gates have warned the public could threaten the future of humanity.

There are many excellent centers studying the values and ethical issues of ANI, but not potential global governance models for the transition to AGI. The distinctions among ANI, AGI, and ASI are usually missing in these studies. Even the comprehensive and detailed report of the U.S. National Security Commission on Artificial Intelligence makes little mention of these distinctions.

Anticipating what AGI could become

Current work on AI governance is designed to catch up with the artificial narrow intelligence proliferating worldwide today. Meanwhile, investment in AGI development is forecast to reach $50 billion by 2023. Expert judgments about when AGI will be possible vary; some working to develop AGI believe it could arrive in as little as ten years.

It is likely to take ten years to: 1) negotiate international or global agreements on the transition from ANI to AGI; 2) design the governance system; and 3) begin implementation. Hence, it would be wise to begin exploring potential governance approaches and their likely effectiveness now. We need to jump ahead and anticipate governance requirements for what AGI could become. Beginning now to explore and assess rules for the governance of AGI will not stifle its development, since such rules would not be in place for at least ten years. (Consider how long it is taking to create a global governance system for climate change.)

“The governance of AI is the most important issue facing humanity today and especially in the coming decades.”

– Allan Dafoe, Future of Humanity Institute, University of Oxford

* Artificial Super Intelligence (ASI) – used here for consistency with ANI and AGI – corresponds to the “superintelligence” popularized by Nick Bostrom. ANI is the kind of AI we have today: each software application has a single specific purpose. AGI is similar to human capacity for novel problem-solving, with goals set by humans. ASI would be like AGI, except that it may emerge from AGI (or arise sui generis) and sets its own goals, independent of human awareness or understanding.

Among the questions such an assessment should address:

• What has to be governed in the transition from ANI to AGI? What are the priorities?
• What initial conditions of AGI will be necessary to ensure that the potential emergence of ASI is beneficial to humanity?
• How can the international cooperation necessary to build governance be managed while nations and corporations are in an intellectual “arms race” for global leadership? (The IAEA and nuclear weapon treaties did create governance systems during the Cold War under similar dynamics.)
• Relatedly: How can a governance system prevent an AI arms race and its escalation from going faster than expected, getting out of control, and leading to war, be it kinetic, algorithmic, cyber, or information warfare?
• How can governance prevent increased centralization of power by AI leader(s), and by AI systems themselves, crowding out others?
• If the IAEA, ITU, WTO, and other international governance bodies were created today, how would officials of such agencies create them differently, considering ANI and AGI governance issues?
• Drawing on the work already done on norms, principles, and values by the Global Partnership on Artificial Intelligence and others, what will be the most acceptable combination or hierarchy of values as the basis for an international governance system?
• How can a governance model help assure that an AGI is aligned with acceptable global values?
• How can a governance model correct undesirable actions unanticipated in utility functions?

• How can AGI algorithm audit standards be developed and enforced?
• How can the use of ANI-to-AGI by organized crime and terrorism be reduced or prevented?
• To what degree do thought leaders and primary stakeholders agree about the framing of governance issues?
• Should an international governance trial, test, or experiment first be constructed around a single focus (e.g., health or climate change), with the rules and standards learned from that experience then extended to broader governance of the transition from ANI to AGI?
• Should AGI have rights if it asks for them? And might this make its potential evolution into Artificial Super Intelligence (ASI) more acceptable to humanity?
• Since blockchain is used by some for decentralized AI development, how could it be, and should it be, included in a governance system?
• How can governance be flexible enough to respond to new issues unknown at the time the governance system is created?

Such questions will be reviewed and edited by the steering committee before being used in the interviews conducted during step 1. (See the Appendix for other organizations with which we intend to collaborate, building on their research and analysis of norms, principles, values, standards, rules, audits, and international conferences and potential negotiations.)

Initial examples of the kinds of international and global governance models that might be explored in the scenarios (pending feedback from the first two steps) include:

• Models similar to the IAEA, ITU, and/or WTO, with enforcement powers
• A TransInstitution (self-selected institutions and individuals from government, business, academia, NGOs, and UN organizations)
• An IPCC-like model in concert with international treaties
• An International S&T Organization (ISTO) as an online, real-time global collective intelligence system; governance by information power (MP/Office of Science, DOE study)
• GGCC (Global Governance Coordinating Committees): flexible but enforced by national sanctions, ad hoc legal rulings in different countries, and insurance premiums, acting like a decentralized, multi-polar monitoring system
• ISO standards affecting international purchases
• Putting different parts of AGI governance under different bodies such as the ITU, WTO, and WIPO

All models should be designed and managed as complex adaptive systems.

“TEACHING SCIENCE FICTION AS A LENS ON THE FUTURE”

By Thomas Lombardo

Center for Future Consciousness

BEGINNING in September 2020, I launched a webinar series on Zoom on the evolutionary history of science fiction. Together with Tery Spataro, my administrative assistant at the Center for Future Consciousness, I have produced sixteen webinars thus far. Each webinar consists of an extensive slide show interspersed with discussion periods. All of the videos for these webinars are available for viewing on the Center for Future Consciousness Video School at: https://cfc-school.thinkific.com/. With still more webinars to come, the series has so far covered ancient times up through the mid-1960s. The webinars are based on my multi-volume book series Science Fiction: The Evolutionary Mythology of the Future.

The participating audience, which averages fifteen to twenty people per webinar, is a highly diverse and global group with individuals representing such areas as futures studies, science fiction, arts and humanities, science and engineering, philosophy, consciousness studies, literature, Eastern studies, and psychology. With roughly a dozen or more regular attendees, who have grown to know each other through the series, there is a good deal of active, ever-evolving dialogue among the participants as they have followed the rich and extensive history of science fiction outlined through the webinars.

Although the webinars have served a variety of educational purposes, including examining how science fiction evolved as a consequence of trends and lines of thinking in human society, and how science fiction influenced the growth of human consciousness through history, what I intend to highlight in this essay is how the “teaching of science fiction”—notably the evolving history of science fiction—in this webinar series has provided an enlightening “lens on the future.” Keep in mind though that this is just a sketch.

As one important revelation, which surprised many of the webinar attendees, thinking and writing about the future has a long history, extending back thousands of years. As shown through the imaginative narratives of past centuries, humans have been speculating about the possibilities of the future since at least the time of the ancient Greeks. As the philosopher George Santayana stated, “Those who cannot remember the past are condemned to repeat it,” and a review of the evolving history of science fiction reveals a rich tapestry of diverse ideas on the future with which futurists should be acquainted. Fundamental themes and issues in futurist thinking, such as the impact of technology and science on society, the nature of utopian and dystopian futures, the future evolution of humans, and the possibility and nature of progress, have a long history in science fiction. Attendees in the webinar series frequently found it both fascinating and educational to learn about and discuss among themselves the history of futures thinking as contained in the history of science fiction.

Connected to this first point, the deep and rich heritage of futurist thinking in science fiction reveals a great diversity of viewpoints on both possible and preferable futures. Teaching (the history of) science fiction expands the range or breadth of imagination regarding possible and preferable futures. One special strength of science fiction is that it does not present just one accepted or dominant perspective on the future but diverse narrative visions reflecting the diverse writers who write within the genre. Science fiction is a pluralistic arena of futurist thought.

Although there are no formal assignments in the webinar series, quite a few of the attendees, their interest provoked by my overviews of classic texts and novels, have been purchasing many of the books cited in the webinars. Without assigning readings, participants appear to be doing a lot of reading, or at the very least creating ever-growing to-do lists of books to read in the near future. Several people have been attending the series to inform and inspire them in their own writing and research.

By looking at both the deep history and the diversity of stories and points of view, attendees have regularly been afforded the opportunity to compare and discuss different futurist perspectives. They can also routinely ask: how much has our thinking really changed over the centuries and millennia? One attendee, a futurist from the Netherlands, has often commented that by the end of each webinar his mind was filled with questions and points to ponder.

Although hundreds of writers have been covered in the series so far, two writers to whom I have devoted entire webinars are H. G. Wells and Olaf Stapledon. Aside from their immense influence on the development of science fiction, Wells and Stapledon are important, especially for futurists, since both incorporated into their fictional writings a great deal of non-fictional futurist thinking. They examined historical and contemporary trends as well as the issues and challenges of their time, and speculated on a variety of possible, probable, and preferable futures for humanity. Studying and discussing Wells and Stapledon in depth has afforded attendees the opportunity to see how two highly imaginative and educated minds within our recent history synthesized futures studies thinking with science fiction narratives.

As one more “future consciousness raising” feature of this webinar series, science fiction contains, and continues to evolve, a highly stimulating visual dimension. The imagery of science fiction, encompassing art, cinema, graphics, photos, and book and magazine covers, all profusely included in the slide shows, provides attendees a powerful and rich mind-space for contemplating and imagining the future. A picture is worth a thousand words, and the history of science fiction is filled with thousands of images.

All in all, the webinar series thus far has brought together and connected a great group of individuals for active and expansive discussions on history, science, philosophy, and the possibilities of the future. Science fiction cuts across various disciplines and this webinar series has drawn together a highly interdisciplinary and fascinating ensemble of educated people. For those who have attended, the series has raised and enriched their consciousness on the complex and extensive history of futurist thought, as embodied within the narratives and speculations of writers and thinkers of the past.

There are many stories about many different futures contained in the history of science fiction and the readers are invited to view online the recorded presentations on this ongoing saga of imagination, speculation, and thoughtful reflection at the Center for Future Consciousness Video School at https://cfc-school.thinkific.com/.

TECHNICAL NOTES

DECENTERED FUTURE

By Ralph Mercer

THIS article aims to introduce the concept of ‘Decentred Futures’ as a means to explore posthumanism as a philosophy and practice for examining the future. Future technology notes will explore in more depth the futurist’s relationship with technology, and the assumptions, methods, and beliefs that, as a habitus, presently frame our perception of possible futures.

As a futures concept, philosophical posthumanism focuses on decentring the human from the discourse and can be described as a post-humanism (not post-human), post-anthropocentrism, and post-dualism approach. The ‘post’ of post-humanism does not advocate moving beyond the human species in some biological or evolutionary manner. Instead, it offers a new vantage point from which to understand what or who has been omitted from an anthropocentric worldview.

Posthumanism becomes a futures tool to peel back the layers of our hierarchical legacies that betray our assumptions and biases about our technological partner. Technology and human futures are entangled in the evolution of human social and cognitive development, pre-dating the creation of futures studies, literacies, and the accepted boundaries that separate them. The boundaries separating the human from the material world are often a result of humanity’s need to make sense of the world and to orient ourselves in our professions. This human-centred belief seeks to create knowledge about the world to fit human needs; the tools of knowledge creation are our technologies, sciences, and metaphors that re-enforce the primacy of the human over all other species. As a result, the discourse around technology is often full of tension and conflict, using metaphors of fear and hope to create stories that directly impact the futures we can imagine or accept.

The assumptions and beliefs that construct boundaries give rhythm and stability to the practices of an identified group or profession in what could be described as a habitus, harmonizing their behaviours to some extent. This rhythm and vocabulary allow the individual to have a ‘sense of the game’: an intuitive understanding of the socially accepted rules of behaviour, acting and talking based on one’s position in a field of work.

‘Decentered Futures’ is a rejection of the ‘rules of the game’ and of the separation of the human and technology, focusing on elevating the non-human, less-than-human, and more-than-human to equal importance when creating stories about the future. Looking at futures from this different perspective opens possibilities and exposes the often ignored, alternative, and disavowed voices in our images of the future.

Planetary conditions require that we urgently rethink our beliefs and interrogate our assumptions, pausing and unlearning, and realizing that the futures we disseminate through our stories and literature can be re-written and re-told. In a decentred future, humans become entangled parts of the planetary networks, not the central character. It is no longer possible or desirable to separate human agency and identity from our social and technological environments.

The title of this magazine, “Human Futures,” is a small example of the privilege and exceptionalism assigned to being human. Such metaphors become a process of re-enforcing images of the future that normalize the belief in the human as the center of the story, while resisting images that would disrupt those specific ways of thinking, talking, and acting. For many, the concept of decentered futures may be controversial. For some, it may be seen as an erosion of the very essence of what they believe it means to be human.

Futurists and the field of futures studies change how people think the world can be; however, this requires futurists to speak with varied, diverse, and divergent voices. Without the constant exploration of new futures concepts, current beliefs and assumptions will constrain our writings and practices, becoming a force that replicates the past and present in the new tomorrow.

Posthumanism offers a means to understand and challenge the assumptions, beliefs, and practices that create the habitus binding the future to a prescriptive path. Decentered Futures accepts that we are entangled in the world, not its privileged center, and encourages the messy questions about the human place in the images and stories of the future.

Notes:

1 Decentred Futures draws on Francesca Ferrando’s book Philosophical Posthumanism (2019) and Karen Barad’s book Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (2007).

2 Habitus as used in this article is based on Pierre Bourdieu’s Outline of a Theory of Practice (1977) and is presented here as a socially constructed way of knowing that privileges certain modes of thinking about the future.

Future Technical Notes:

2nd article: human-technology entanglement
3rd article: futures habitus
4th article: decentred futures possibilities

THE OCEAN OF FORESIGHT AND AGILITY: MANATEE, SCHOOL OF FISH, WHALE AND DOLPHIN

By Tyler Mongan