Computer Science

The Dangers of Artificial Intelligence

The possible invention of Artificial General Intelligence (a machine intelligence that could successfully perform any intellectual task a human being can) has been described by Stephen Hawking as something that could be “the best or worst thing to happen to humanity”. This is because powerful AI could solve some of the biggest problems our world faces, or create even bigger ones.

While highly intelligent AI does come with its threats, it could also bring about a lot of positive changes to our world. If an intelligence were created that could evolve to surpass the human race’s level of intellect, it would have the ability to solve problems that we couldn’t. It could invent powerful new technologies that humans would never have the ability to create, and technology invented by such a superintelligence could give humans abilities never before thought possible. For example, many speculate that in the future humans will be able to upload our minds into computers, or to extend our lifespans indefinitely using cyborg technologies to resist disease and the normal process of aging. While the idea of ‘immortality’ in this way does sound like something pulled from a science fiction novel, a small number of people around the world take the idea seriously and are already taking steps to try to achieve it. Some plan to preserve their brains and/or bodies after death, hoping that they can be revived once technology has advanced far enough to provide this kind of immortality. As of June 2014, 286 people were being stored in liquid nitrogen, a preservation process in which a person’s tissue is frozen shortly after death in a way intended to stop ice crystals forming within brain cells and damaging them.

However, as is so often depicted when AI appears in the media, there is always the chance that any superintelligence we create could end up disliking the human race, make a catastrophic mistake with devastating consequences, or simply fall into the hands of a human with bad intentions.

Moreover, an Artificial Intelligence that was actually conscious would be able to form its own ideas and opinions. While this could in many ways be beneficial, with AI able to solve problems in ways humans wouldn’t think to solve them, it could also lead to problems. An AI that we created could dislike humans and our actions, or it could have wants and goals that don’t align with ours. After all, unless an AI was specifically designed to do so, it would have no particular reason to want to help humanity achieve its goals. Powerful artificial intelligence would have the ability to do incredible things, but if those abilities were turned against the human race it could lead to our extinction, for how could we hope to win a fight against a far superior being whose intelligence was constantly evolving far beyond our own? By analysing the information of billions of people’s lives and decisions, an AI could predict humans’ actions before they had even thought of taking them. If humans were forced to fight against AI, they would be fighting an intelligence that could predict their every move. A superintelligence would be able to create machines and programs far beyond anything a human could build, and far beyond anything a human could fight against. This type of technology turning against humans, or being used in warfare, would have a devastating impact on the human race and the planet as a whole.

There is still a lot of debate over whether Artificial General Intelligence will ultimately do more good than harm, and many leading scientists hold very different opinions on the topic. As yet no one has been able to create AI with superintelligence and consciousness, many believe it is much further away than is commonly assumed, and some wonder whether it can be achieved at all.

Miranda Simmons, Year 12

Computer Science Reading Recommendations

The Glass Cage: Automation and Us by Nicholas Carr

Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari
