Artificial intelligence (AI) now inevitably leads to catastrophe

February 2, 2025

In the science fiction (SF) novel The Second Apocalypse(1) (2017), I described the salvation of civilization by robots after humanity had wiped itself out through its own stupidity. That salvation, even with the potential for a new creation, was thanks to artificial intelligence. Many SF stories revolve around the theme of robots seizing power. At the time, that did not occur to me: in my story, the development of robots led to a symbiotic human-robot society, which collapsed through human blundering. So I do not belong among the original AI doomsayers.

That has now changed, owing to the rapid development of AI. The technology is advancing faster than reflection on its implications.

One of the best-known SF authors, Isaac Asimov (1920-1992)(2), a biochemist who wrote many SF novels, often speculated about the conditions AI systems should meet in order to avoid disasters. The most famous of these is the prohibition on killing or harming humans, combined with the obligation to prevent such harm. But even this seemingly simple and obvious condition is paradoxical. It gives no answer to the question of what to do in a situation where saving one person is only possible by harming or even killing another. Even an obvious algorithm, such as weighing life and death by numbers, raises serious ethical problems.

AI systems, which are now capable of astonishing feats of knowledge, are still limited in their ability to act independently. But even without committing murder themselves, they are already dangerous. What would happen if a ruler put this question to a modern AI system: "What should I do to protect my people - 20 million - against starvation due to an energy shortage in the next five years?" And the system responds: "With the currently available knowledge, the optimal solution is: kill the 8 million people in your neighbouring country and use the oil in the ground there until your own nuclear plants can take over the energy supply."

I believe it is high time, and maybe already too late, to integrate ethical considerations into AI systems or to curb AI development.

The problem I raise here is so enormous that I shall set it aside for my own peace of mind. However, there is another consequence of AI that I foresee, intuitively or perhaps as a rational scenario, emerging on a grand scale in the coming years: the unstoppable growth of universal trust in AI assistance.

I wrote a fairy tale recently(4). When it was finished, I realized I have many English-speaking friends who would also enjoy it. Smiling, I thought of AI. Why not be lazy for once? So I looked up ChatGPT on the internet, copied my fairy tale, and politely asked if it would translate it for me. My screen offered a box to paste the text. Click, and the first lines appeared. The rest of the screen was too small. I scrolled down, and lo and behold, my entire original text was there. What now? Somewhere at the bottom there was a small icon with a downward arrow. I clicked on it, and suddenly there was an English-language 'fairytale.' I read it from start to finish in complete astonishment and... discovered no errors! Filled with admiration for the people who put this together, I went to bed. But I couldn't sleep. What a beautiful, terrible invention! Why still go to school and learn languages? Math? Geography? There is so much human knowledge available on the internet that it exceeds, in bytes, everything our brains can process. And the AI system can access it instantly and draw logical conclusions from it.

But what does it do with all the nonsense on the web? What about contradictory documents, what about lies? I was shocked. I imagined people on the streets, at the beach, on café terraces, yes, even my visiting friends at home, and I with them, engaged in lively conversation, their mobile phones or iPads at hand, typing on the screen to back up their arguments. People who convince their conversation partners with arguments and images found on the web. Look, AI shows or says it! What a system to shape the world in your favour! A few filters preventing unwanted news, only reliable sources, no undesirable facts or data, no unwanted conclusions; a single "yes" where a "no" once stood, but most often correctly quoted, with your favourite conclusions. Correct for the creators and owners of the system, with no unwanted objections. Such a perfect, infallible system for shaping the world has never been available to popes, prophets, kings, dictators, or priests. A world order that humanity itself desires! You don't need to pay an army for it! Better control than police, armies, newspapers, priests, or slave hunters.

A source of all knowledge that teaches the people what they should think. And a people that loves to invoke that convincing knowledge in understandable, personalized language and terms. A treasure trove of knowledge right at hand and indispensable. And the best part: exactly the knowledge that the system technicians and the system owners want to impart to the people, with no bothersome critics or contrarians.

Oh yes, enough controversies about matters they can argue over, because that keeps them divided and rudderless. But not on issues the system owners find uncomfortable, such as the unlimited protection of their property. No matter how large that property becomes, even if eventually no one else can own anything beside it.

Fear gripped my heart. There is no escaping it. My wife, with whom I shared this, immediately went searching on... the web. Through links that appeared, wanted or unwanted, on the screen, she found a message from an unfamiliar source.

The author Mark Hofman, editor of the website(3), wrote under the headline "Security Researcher Leaves ChatGPT Due to Fear of AI Speed": "Steven Adler has resigned from his position as a security researcher at OpenAI. His departure is noteworthy, not only because he is one of many who left OpenAI last year, but mainly because of his concerns about the development of ChatGPT and AI in general."

So I am not the only one who is concerned.

C. (Kees) le Pair, Cha-am, February 2, 2025.

Notes:

(1) C. le Pair: The Second Apocalypse.
(2) Isaac Asimov.
(3) Mark Hofman.
(4) C. le Pair: Pim's discovery journey to infinity.
