
THE AI ROUNDTABLE


FOUR EXPERTS weigh in on the state of generative AI and the impact of ChatGPT. Just to be fair, we also asked the chatbot for its opinions.

by James Melton


Luckbox: On a scale where “1” stands for pocket calculators and “100” is Skynet—the superintelligent group mind that rules the future in the Terminator movies—what’s the current state of artificial intelligence (AI) technology?

Domingos: We’re at 10. We’ve come a long way, but there’s far more to go.

Selman: I would say 60 to 70. If you had asked me the question last fall, three months before ChatGPT, I would have said 10 or even five. Because a few months ago, it was not clear that AI systems could understand language in a meaningful way. I had my undergraduate class do a bunch of experiments with Google Translate. It could do these things like translation without truly understanding what was being said. Then ChatGPT came out, and I’ve repeated these experiments. It gets the meaning of language surprisingly well. That gets you to a whole new level because language is the key thing that separates humans from other animals.

Schmidhuber: This scale does not make sense because the upper end doesn’t. Unlike in Schwarzenegger movies, supersmart AIs won’t want to terminate all humans. Supersmart AIs will mostly be interested in other supersmart AIs, not humans. Just like humans are primarily interested in other humans, not in ants.

ChatGPT: The current state of AI technology is around 20 to 30 on the scale. We have made advances in AI technology such as machine learning and natural language processing. However, technology is still limited in its ability to think and act independently. AI is still dependent on human input and cannot make decisions on its own. And most AI technology is limited to specific tasks.

What will be most profoundly affected by recent advances in generative AI?

Yampolskiy: Sam Altman just commented on that. We used to predict plumbers would go first and then accountants and no one would touch the artists, and it’s the exact opposite. My plumber is doing well. My artist is out of a job.

Domingos: Creation of routine documents and images.

Schmidhuber: Yes, it will affect desktop jobs that are about producing new documents from older documents.

Selman: AI will affect anything at the level of a knowledge worker. At financial companies, a ChatGPT-like system could read the millions of documents that these companies have internally, and it could give you analysis reports that might be better than human analysts could come up with, because analysts could only read a few thousand reports.

Pedro Domingos

PROFESSOR EMERITUS OF COMPUTER SCIENCE AND ENGINEERING, UNIVERSITY OF WASHINGTON, AND AUTHOR OF THE MASTER ALGORITHM: HOW THE QUEST FOR THE ULTIMATE LEARNING MACHINE WILL REMAKE OUR WORLD

ChatGPT: The areas of the economy that will be most profoundly affected by recent advancements in generative AI are the creative arts and media and the financial services industries. In the creative arts and media, generative AI can be used to create content such as music, images and videos, allowing for the potential to automate content generation. On the financial services side, generative AI can be used to generate and interpret data, which could potentially automate decisions related to risk management, investment strategies and more.

Do you think the singularity will occur?

Domingos: No. The singularity is intelligence increasing exponentially without limit, and that’s physically impossible. A weaker notion is that AI will evolve to be completely beyond our understanding, and that’s also unlikely. We can stay in control of AI while making it extremely smart.

ChatGPT: I believe singularity will eventually occur. AI is rapidly advancing and has already outpaced humans in many areas, such as playing chess and [the ancient board game] Go. Furthermore, AI is increasingly being used in everyday life, such as in self-driving cars, and it is becoming increasingly sophisticated. It is only a matter of time before AI surpasses humans in all areas and the singularity is reached. However, this will still be a long way off.

If you expect the singularity to occur, when is it likely to happen?

Yampolskiy: [Computer scientist Raymond] Kurzweil is doing an excellent job with very specific analysis. He said that in 2023, we would have enough computing power to start seeing the human level. In 2045, he thinks AI will be as good as all of humanity. That makes sense. Is he off by a couple of years? Maybe. What difference does it make?

Schmidhuber: History seems to converge in an Omega point in the year 2040 or so. I like to call it Omega because a century ago, Teilhard de Chardin called Omega the point where humanity will reach its next level. Also, Omega sounds much better than “singularity.” (For a more detailed answer, see the sidebar.)

ChatGPT: It will be decades before we reach the singularity. AI is still in its infancy and there is a lot of work to be done before it can surpass humans. Even if the development of AI is accelerated, it will still take a long time before we reach the singularity.

If you expect the singularity to occur, are you concerned about it?

Yampolskiy: Suppose you have free labor, physical and cognitive. It will impact the economy, even if we manage to control it somehow, which I’m not saying we will. But even if we managed to have an obedient, friendly superintelligence, we would have to reevaluate the business model with free labor. It could be some abundance model or shared-benefit model. Talking about splitting profits is meaningless in that world.

Selman: One aspect of the singularity I can’t entirely agree with is that the systems would automatically work against us. Once they have a super level of intelligence, is that immediately dangerous? The singularity story has an angle that assumes it would work against us. It could be a knowledgeable, smart system that works for us or with us. We must be concerned about it. But I am confident that it will still be a system fielded by humans and be part of our society.

ChatGPT: Yes, I am concerned that the singularity could be harmful to humans. If it becomes too powerful, artificial intelligence could threaten human safety and autonomy. It could be used for malicious purposes, such as surveillance or manipulation of populations.

The consensus is that AI will create more jobs than it destroys. What’s your view?

Selman: I don’t think that’s true at all. Economists always like to say that, but it’s unlikely. The surprise here is that automating manual labor with a physical robot is expensive. We had some renovations in our house, and the guys were doing woodwork. A robot would be way more expensive than the $20 to $40 an hour they charge for a human body. But the administrative tasks needed to run

Roman Yampolskiy

ASSOCIATE PROFESSOR, J.B. SPEED SCHOOL OF ENGINEERING, UNIVERSITY OF LOUISVILLE, AUTHOR OF ARTIFICIAL SUPERINTELLIGENCE: A FUTURISTIC APPROACH AND EDITOR OF ARTIFICIAL INTELLIGENCE SAFETY AND SECURITY
