
WHEN AI ECLIPSES HUMANITY
from April 2023
THE SINGULARITY, the time when superintelligent computers surpass human understanding and shed human control, may soon be upon us
by Ed McKinley
Suppose a medical research team asks a chatbot to develop a vaccine to eradicate every variant of COVID-19 in humans. It’s a perfectly reasonable request that could go shockingly wrong.
The machine might formulate a drug that renders recipients infertile, thereby reducing the human population to zero and eliminating the virus along with its hosts. That perfectly logical but chillingly cold solution achieves the goal, but only at the cost of pushing our species to the brink of extinction.
Perhaps the example seems extreme, but it’s far from absurd.
“This is exactly how a pure optimization process solves problems,” warns Roman Yampolskiy, a University of Louisville professor of computer science who’s written extensively on the subject. “People can fix that, but there are infinite similar possibilities.”
What’s more, the smarter AI gets, the more dangerous it becomes, Yampolskiy says.
Knowing that, how concerned is he about the threat inherent in artificial intelligence? “I’ve devoted my life to it,” Yampolskiy tells Luckbox in a flat tone of voice. “I don’t see anything more important.”
But his life’s pursuit must get lonely. Despite doomsday warnings from generations of artists, mathematicians, engineers and entrepreneurs (see sidebar “You’ve Been Warned: The Dangers of AI”), hardly anyone seems willing to stand in the way of the explosive expansion of AI.
Of the hundreds of thousands of AI researchers in the world, perhaps 100 work full time on AI safety with another 200 or so delving into related areas such as ethics or algorithmic justice, Yampolskiy notes. “I’m guessing here, but I don’t think it’s much bigger than that,” he says of his estimates.
Moreover, many of the scientists devoted to AI safety aren’t ensconced in academia—instead they’re working for big public companies like Alphabet (GOOGL), which owns the DeepMind computer labs, and smaller ones like privately held OpenAI, which produces the ChatGPT chatbot that’s making headlines daily.
Public or private, companies have a vested interest in developing and selling AI and don’t want to forfeit competitive advantage by slowing the technology’s progress, Yampolskiy notes.
Heaven or Hell?
No one knows what will happen when artificial intelligence reaches the singularity—the point where it’s too smart for humans to control.
Whatever their motivations, financial or scientific, researchers tend not to consider the worst-case result of AI, he maintains. He calls it “the possibility of impossibility.” It’s the idea that no matter what scientists do, they can’t stop AI from wreaking havoc on humankind.
Mounting danger
Artificial intelligence has been with us for some time now, beginning perhaps in 1936 with a paper in which Alan Turing described a machine with memory, computing power and the ability to scan symbols.
AI has apparently reached the latter part of the first of three stages. In this first stage, known simply as AI, machines can mimic certain human thought processes. Soon, the technology may enter the second phase, called AGI for artificial general intelligence, where it can equal human mental capacity. After that comes the singularity, artificial superintelligence or ASI, where machines become so smart that humans can’t control them.
Computers in the AI phase work out problems and serve up information with blinding speed. They may beat a human at chess, but they can’t carry on a convincingly human conversation.
Even in this current AI phase, computers pose