
3 minute read
Do we need to press pause on AI?
Almost
“The question of whether we should pause development of artificial intelligence (AI) has a straightforward answer in my opinion – and that is no.

“We should not because good applications of AI bring immense advantages to humanity and the planet.
“However, we must put safeguards in place that protect us from inadequate development practices and prevent AI from being exploited by bad actors.
“I’d like to delve into the subject, examining the various types of risks associated with AI technology and its applications.
“Focusing on the risks of AI technology first, we have been conditioned by science fiction and movies to fear self-aware and autonomous machines that take over the world and wipe out the human race.
“In reality, AI technology is currently still in its infancy and works through a great deal of number crunching and pattern matching.
“A chatbot, for example, does exactly that: it has no understanding of the meaning of the words it strings together in reply to your question. It merely matches the words that best fit as an answer in the context it has been trained on.
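To make the pattern-matching idea concrete, here is a toy sketch. Production chatbots use large neural networks rather than simple word counts, but the underlying principle is the same statistical one: pick the continuation that best fits the training data, with no understanding involved. The corpus and model below are purely illustrative.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models are trained on vast text datasets.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word most often follows each word (a simple bigram model).
next_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word - no 'meaning', just counts."""
    counts = next_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it simply follows "the" most often here
```

The model happily produces fluent-looking continuations without any grasp of what a cat or a mat is, which is the point being made above, writ small.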
“The technology is not self-aware – at least, not yet. It is with self-awareness that the sci-fi style threats arise, where the machine becomes cognisant of its power and then acts in its own self-interest.
“As with any system, data quality can affect AI’s analysis and output. The quality can be affected by a number of issues such as the size of the data set used to train the AI, and how representative it is of its potential inputs when it goes live.
“Poor quality data can lead to issues such as some people receiving faster and better services than others – for example, a facial recognition system that confirms men’s identities more accurately than women’s because its training set contained more images of men.
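One simple safeguard against this kind of skew is to audit the composition of the training data before training begins. A minimal sketch, using made-up labels and a made-up 80/20 split purely for illustration:

```python
from collections import Counter

# Hypothetical group labels attached to a facial-image training set.
training_labels = ["man"] * 800 + ["woman"] * 200

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    print(f"{group}: {n} images ({share:.0%})")
    # Flag any group well below an even share as under-represented.
    if share < 0.8 / len(counts):
        print(f"  warning: '{group}' is under-represented")
```

A check like this will not fix a biased dataset, but it surfaces the imbalance early, when rebalancing or collecting more data is still cheap.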
“There may also be poor development, where the intelligent application has simply been badly designed and built and so is limited in its capabilities. These types of problems can be dealt with through best practice in the design, development, and testing of AI.
“Another issue is that AI can learn bad behaviour from its interactions with malicious users. It is therefore necessary to monitor its output and to get feedback from its end-users to ensure its ongoing fitness for purpose.
“AI in the hands of bad actors can go further – for example, deepfake technology used to alter real videos to damage the reputation of high-profile people or spread misinformation. In the wrong hands, AI can be used maliciously for many purposes, from cybercrime to controlling human lives.
“My answer to the question of pausing AI development is still no, but we should create a framework that allows for the responsible development of beneficial AI, putting safeguards in place and developing solutions to mitigate the risks posed by AI in the hands of bad actors.
“The first point has been covered well in a paper by BCS, The Chartered Institute for IT, titled: Helping AI grow up without pressing ‘pause’.

“The second point is also crucial in my opinion. We already have experience of the dangers posed by uncontrolled technological advancement: the huge rise in cybersecurity threats and fraud stemming from the digitalisation of commerce, banking, and other sectors.
“No one foresaw this situation when the shift from brick and mortar operations to online happened.
“While there are many extremely useful and reliable applications of AI already in daily use by many of us, there may be unforeseen consequences of AI too. We should therefore think about the future risks of AI and how to mitigate them.

“I wonder if an international institute could be set up to develop risk mitigation and other solutions? Such an institute could also provide guidance on best practice and offer training courses for the would-be AI developers of the future.
“We must not forget that AI is still in its infancy and that its risks may not materialise.
“However, it is also possible that AI, or its malicious use, will pose a serious threat to humanity in the future. We need to be aware of these potential dangers and take steps to mitigate them while also working to develop AI in a responsible and ethical manner.”