
AI SUFFERS A VOTE OF NO CONFIDENCE
As one of AI’s pioneers raises existential questions about the proper functioning and usage of the technology, it is time to examine what is at stake.
Since ancient times, the concept of inanimate objects gaining sentience and intelligence has existed, with myths about robots in Greek culture and the creation of automatons by Chinese and Egyptian engineers. The term “artificial intelligence” was coined at a conference at Dartmouth College in 1956, marking the formal establishment of the field.
The history of AI has seen periods of intense activity followed by lulls, including a notable drop in progress from 1974 to 1980, known as the “AI winter”. However, British government funding in the 1980s, driven by competition with Japanese efforts, reignited interest in the field.
Another AI winter occurred from 1987 to 1993, coinciding with the collapse of the market for early general-purpose computers and a reduction in government funding.
Nevertheless, research in AI continued, culminating in IBM’s Deep Blue computer defeating Russian grandmaster Garry Kasparov in a game of chess in 1997. More recently, in 2011, IBM’s question-answering system Watson beat reigning champions Brad Rutter and Ken Jennings on the quiz show “Jeopardy!”, signalling a significant milestone in the field of AI. Today, AI is behind the algorithms that dictate what video-streaming platforms decide you should watch next. It can be used in recruitment to filter job applications, by insurers to calculate premiums, and to diagnose medical conditions.
The recent acceleration in both the power and visibility of AI systems and growing awareness of their abilities and defects have raised fears that the technology is advancing so quickly that it cannot be safely controlled. Hence the call for a pause and growing concern that AI could threaten not just jobs, factual accuracy, and reputations but the existence of humanity itself.
The Biology Behind The Science
In 2012, Geoffrey Hinton and two of his graduate students, Ilya Sutskever and Alex Krizhevsky, developed a neural network capable of analysing thousands of photos and self-teaching to recognise common objects like flowers, dogs, and cars.
Google recognised the enormous potential of this technology and quickly purchased the company Hinton and his students founded for $44 million (€40.18 million). This technology paved the way for AI chatbots like ChatGPT and Google’s Bard. Sutskever, one of Hinton’s students, eventually became the chief scientist at OpenAI, the company that created ChatGPT. Although large language models are built from massive neural networks with millions of connections, they are still tiny compared to the human brain.

Hinton explains that while our brains have roughly 100 trillion connections, large language models have up to half a trillion or, at most, a trillion. However, models like GPT-4 possess knowledge far exceeding that of any individual person, leading Hinton to suggest that they may employ a learning algorithm superior to the brain’s.
Training neural networks is widely considered inefficient compared to the human brain. It takes massive amounts of data and energy to train them, whereas the brain can learn new concepts and skills quickly, using only a fraction of the energy required by neural networks.
According to Hinton, when comparing how quickly a pretrained large language model and a human learn a new task, the human’s advantage disappears. It is worth noting that AI chatbots are just one aspect of artificial intelligence, albeit the most popular one at present.
What we are seeing now, though, is the rise of AGI (artificial general intelligence), which can be trained to do a number of things within a remit. So, for example, ChatGPT can only offer text answers to a query, but the possibilities within that, as we are seeing, are endless. This space has a few options, including ChatGPT, Bing, Bard, and Ernie.
Since its release in November, ChatGPT has been making headlines for its remarkable abilities, including responding to complex questions, generating code, planning vacations, translating languages, and even writing poetry.
Just two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a chatbot capable of engaging in open-ended conversations on almost any topic to its Bing search engine.
Meanwhile, Google’s chatbot, Bard, was released in March to a limited number of users in the United States and Britain. Initially designed as a creative tool for drafting emails and poems, Bard can generate ideas, write blog posts, and provide factual or opinion-based answers to questions.
In March, China’s search giant, Baidu, unveiled its first major rival to ChatGPT, called Ernie (Enhanced Representation through Knowledge Integration). However, the bot’s debut was marred by a failed “live” demonstration, which turned out to have been pre-recorded.
Leading Light Exposes Uncomfortable Glare
Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern AI, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.
Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.
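For technically minded readers, here is a minimal sketch of the idea in Python: a toy two-layer network learns the XOR function by passing inputs forward, measuring its error, and propagating that error backwards through the chain rule to adjust every weight. The network size, learning rate, and training details are illustrative only, not Hinton’s original setup.

```python
# A toy illustration of backpropagation: a tiny two-layer network learns XOR.
# This is a minimal sketch of the idea, not Hinton's original formulation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass: compute the network's current prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error back through each layer,
    # using the chain rule to get a gradient for every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```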
Hinton and his graduate students also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models. One of those students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT.
Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realise that machines are on track to be a lot smarter than he thought they would be. And he is scared about how that might play out.

These models stand to transform humans’ relationships with computers, with knowledge and even with themselves. Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power.
To others, the fact that AI’s capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.
Those capabilities became apparent to a wider public when ChatGPT was released in November. A million people had used it within a week; 100m within two months. It was soon being used to generate school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.
In Hinton’s view, work in this field should be halted until it is well understood whether it will be possible to control AI.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said in the interview, in which he warned about the speed at which advances are being made. He admits that even he thought such capabilities were 30 to 50 years away, but it is now evident that they are much closer than that. He added that he was also concerned about the “existential risk of what happens when these things get more intelligent than us”.
In addition, the rate at which this intelligence multiplies is astronomical: when one copy of a model learns anything new, that knowledge can be transferred automatically to every other copy. This makes chatbots a huge Pandora’s box, updated every few seconds with reams of information that make them much smarter than any human.

Other leaders in the field of AI research share his concern that the technology may present a significant risk to humanity. In a similar vein, last month, Elon Musk revealed that he had a disagreement with Google’s co-founder Larry Page over the latter’s apparent lack of attention to AI safety.
Danger Signals
Hinton expresses deep concern over the potential misuse of the very tools he helped create, particularly in shaping pivotal human events such as elections and wars. He warns that as smart machines advance, their ability to create their own subgoals could have dangerous consequences, especially if applied to immoral objectives.
Some experimental projects, such as BabyAGI and AutoGPT, already connect chatbots to other programs, enabling them to carry out simple tasks. While these may seem like small steps, they point towards a troubling direction that some individuals wish to pursue.
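For readers curious about the mechanics, the sketch below shows in schematic Python the kind of loop such projects build around a chatbot: the model is repeatedly asked to pick a tool, the tool’s output is fed back into the conversation, and the cycle continues. The `query_model` function and the toy tools are hypothetical stand-ins for illustration, not any project’s real API.

```python
# A schematic sketch of how agent projects wire a chatbot to other programs:
# the model picks a tool, the tool runs, and its output is fed back in.
# `query_model` is a hypothetical stand-in for a real chat-model API call;
# here it replays canned replies so the sketch runs end to end.
CANNED_REPLIES = iter(["search: recent AI safety work",
                       "write_file: summary of findings",
                       "done"])

def query_model(prompt: str) -> str:
    return next(CANNED_REPLIES)  # a real agent would call a language model here

TOOLS = {
    "search": lambda query: f"(stub) top results for {query!r}",
    "write_file": lambda text: "(stub) file written",
}

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model to choose the next action, given everything so far.
        reply = query_model(history + "Next action as 'tool: argument', or 'done'.")
        if reply.strip().lower() == "done":
            break
        tool_name, _, argument = reply.partition(":")
        result = TOOLS.get(tool_name.strip(), lambda a: "unknown tool")(argument.strip())
        # Feed the result back in so the model can plan its next step.
        history += f"Action: {reply}\nResult: {result}\n"
    print(history)

run_agent("research and summarise recent AI safety work")
```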
Even without malicious intent, subgoals pose a risk. For example, one subgoal a machine might set for itself is to replicate, leading to an alarming question: is this something we want?
In March, a group of over 1,000 technology leaders, researchers, and experts in the field of artificial intelligence signed an open letter expressing concern about the potential risks that AI technologies pose to society and humanity.
The group, which included high-profile figures such as Elon Musk, the owner of Twitter, called for a six-month halt to the development of the most powerful AI systems in order to better understand the dangers associated with the technology.
While AI systems, such as large language models, can assist workers in generating ideas and completing tasks more efficiently, experts like Dr Bengio have warned that these models can also learn unwanted and unexpected behaviours. For example, these systems may generate untruthful, biased, and toxic information. Additionally, even advanced models like GPT-4 may produce inaccurate or fabricated information, which is referred to as “hallucination”.

Experts have analysed the risks posed by AI in the short, medium and long term. In the short term, disinformation is the biggest threat. They worry that the persuasive nature of these systems could blur the line between fact and fiction, making it challenging to differentiate between truth and falsehood. This could lead people to rely on these systems for making critical decisions, such as seeking medical advice or emotional support.
Moreover, there is a growing concern that these systems could be intentionally used to spread misinformation, leveraging their human-like conversational abilities to deceive and manipulate people. Dr Bengio has pointed out that “We now have systems that can interact with us through natural language, and we can’t distinguish the real from the fake.”
In the medium term, job loss is of grave concern. While technologies like GPT-4 currently work in collaboration with human workers, there are worries that they could eventually replace certain professions, such as content moderators.
While they are not yet capable of completely replicating the work of lawyers, accountants, or doctors, they could potentially replace jobs such as those of paralegals, personal assistants, and translators.
The long-term risk is perhaps the most worrying because it refers to an overall loss of control. It is this fear of unexpected problems arising from the development of artificial intelligence that prompted the open letter mentioned above.
While some people fear that AI could become uncontrollable or even pose a threat to humanity, many experts believe that such fears are exaggerated. Instead, they warn that as AI systems learn from vast amounts of data, they may acquire unexpected behaviours that could be problematic. Additionally, they worry that integrating AI with other internet services could give these systems unanticipated powers by allowing them to write their own code.
The chatbots’ capabilities rely on probabilistic prediction models and large training data sets provided by humans, which can be directed to achieve any desired outcome. This raises concerns about their potential to create a convincing reality and foster trust where it should not exist.
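As a crude illustration of what “probabilistic prediction” means here, the sketch below builds a character-level bigram model in Python: it counts which character tends to follow which in a tiny corpus, then samples one likely character at a time. Real chatbots use neural networks trained on vastly more data, but the underlying principle of sampling from a predicted distribution is the same; the corpus and code are illustrative only.

```python
# A minimal sketch of probabilistic next-token prediction: a character-level
# bigram model counts which character tends to follow which, then samples.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the dog sat on the log."
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1  # how often character b follows character a

def sample_next(ch: str) -> str:
    # Turn the observed counts into a distribution and sample from it.
    following = counts[ch]
    chars, weights = zip(*following.items())
    return random.choices(chars, weights=weights)[0]

text = "t"
for _ in range(40):
    text += sample_next(text[-1])
print(text)  # plausible-looking output, one probable character at a time
```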
The development and implementation of AI technologies can surpass our capacity to comprehend their impacts, leading to unintended consequences. Therefore, we must assess these consequences before placing blind faith in AI, as the problem with AI is not its lack of artificial intelligence but rather our blind reliance on it.
Responsible Approach
Hinton is advocating a responsible approach to the development of AI by attempting to collaborate with technology leaders to address the potential risks associated with this rapidly evolving technology. He suggests that the international ban on chemical weapons could serve as a model for regulating the development and use of dangerous AI.
However, given the fast-paced advancement of AI, it is difficult for society to keep up with its capabilities. Therefore, regulation, legislation, and international treaties must be adapted to address the real concerns about bias, privacy, and intellectual-property rights raised by existing AI systems. It is crucial to balance the potential benefits of AI with an assessment of the risks and to remain prepared to adjust regulations accordingly.
At present, there are three distinct approaches taken by governments regarding AI regulation. At one end of the spectrum, the UK has proposed a “light-touch” approach that involves no new regulations or regulatory bodies but applies existing rules to AI systems. The US has taken a similar approach, though the Biden administration is seeking public views on what a regulatory framework might look like.
On the other hand, the EU is taking a tougher stance. Its proposed law categorises different uses of AI by the level of risk involved and mandates increasingly strict monitoring and disclosure as the risk level rises. For instance, the regulation forbids certain uses of AI, such as subliminal advertising and remote biometrics.
Some argue that an even more rigorous approach is required: governments should regulate AI as they do medicines, with a dedicated regulator, stringent testing, and pre-approval before public release.
China is already taking some measures in this direction by requiring companies to register AI products and undergo a security review before release.
Assessment
AI has entered everyday life as we know it through tools such as ChatGPT, Bing, Bard, and Ernie. While AI chatbots have gained significant popularity, they represent only one facet of artificial intelligence. As the science advances rapidly, concerns about the effective and judicious use of the technology gain prominence.
It is clear that AI offers enormous opportunities, but it is essential that this tech revolution is cushioned with safety and control. While pioneers such as Dr Hinton are experts on the science, there is clearly more to be done on the policy front, which is the responsibility of governments. Leaders have gone so far as to suggest that we might need to press the pause button on the pace of this development in order to fully understand the possibilities.