
The end of humanity?


Henry Parker, Features Editor, reviews experts warning of AI extinction

For most people, AI has only ever been a staple of science fiction, an unlikely pipe dream or a problem for future generations to grapple with. But in only a year, with the rapid development of software like ChatGPT and Stable Diffusion, it has suddenly become an immediate challenge of the present day, and something that the wider public has become increasingly concerned about.

But it isn’t just general users who are raising the alarm about the potential uses and misuses of AI; experts in the field have been voicing concerns of their own. A statement has been released by the Center for AI Safety (CAIS) that aims to “open up discussion” about “some of AI’s most severe risks”.

The full statement read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

What is significant about the request of the signatories is just how simple and specific an ask it is, just how many signatories there are, and how high their profiles are in the industry. Standout names include Sam Altman, CEO of OpenAI, the publisher of ChatGPT; Bill Gates, known mostly as co-founder of Microsoft; as well as some of the so-called ‘godfathers of AI’, Geoffrey Hinton and Yoshua Bengio. The language of “extinction” is rather apocalyptic, with Bengio likening himself to one of the creators of the atomic bomb, having ushered in a new technology that can, and will, do real damage to people and the planet.

Most experts seem to agree that there should be caution and regulation when developing new AI technologies, but there isn’t a consensus on the existential threat suggested by the statement. For example, Nello Cristianini, Professor of Artificial Intelligence at the University of Bath, praised its “well-intentioned” nature, but criticised it as a “vague statement” that failed to describe the nature of this extinction event. It is worth noting that although the comparison to pandemics or nuclear war is meant to show the severity of the issue at hand, there are now 8 billion humans on Earth, and we have yet to be wiped out, even by the most devastating events of our history.

Even as more and more world leaders echo the concerns raised by the statement, President Biden’s administration has launched new investments of US$140 million into the new National AI Research Institutes, and the UK Government is looking to spend up to £3.5 billion over the coming decade to become an AI superpower by 2030. In both cases the need for safety as a first priority is emphasised, but there is still a competitive instinct behind the government action and the business of innovation.

THE UK GOVERNMENT IS LOOKING TO SPEND UP TO £3.5 BILLION [...] TO BECOME AN AI SUPERPOWER

Investments in, and valuations of, companies in the AI field are growing year-on-year and showing no signs of stopping, even as calls for stricter and more thoughtful regulation pile up. As of today, there is no intergovernmental organisation with the authority to ensure that AI is not used for malicious ends in the way that the International Atomic Energy Agency (IAEA), working in conjunction with the United Nations, monitors the global use of nuclear energy. Such a body would address some of the fears laid out by the statement.

Whilst the people heavily involved in bringing this new technology into greater public use may hope that it can be a tool for unlocking greater human potential, until the global community cooperates and puts safeguards in place against a catastrophic event, all they can do is use their prominent voices to spell out the potential risks.
