Artificial Intelligence: Psychological Reality or Technological Mimicry?

Daniel Stoljar, Professor of Philosophy, ANU

The Chief Scientist of Australia, Dr Alan Finkel, has proposed a ‘Turing Certificate’ to deal with what he sees as a new and pressing moral challenge: the ethical implications of artificial intelligence (AI).

The reference is to Alan Turing, the UK mathematician who in 1950 published ‘Computing Machinery and Intelligence’, widely regarded as the foundational document of AI. The project of AI, as Finkel characterises it, is 'to produce human intelligence without the blood, tissue, and goo.'

But Finkel’s proposed certificate conflates two different notions of AI. The first raises a moral challenge that is new but not pressing; the second raises a challenge that is pressing but not fundamentally new.

The conflation here is not unique to Finkel; in fact, it is a feature of a general anxiety about AI, present in different forms in both traditional and social media. He is just an eloquent and distinguished example.

What are these two notions of AI?

One notion – let’s call it psychological AI – has its home in the attempt to understand the psychological capacities and achievements of human beings, the capacities to speak a language, think, reason, perceive and so on. This is a huge and multi-faceted task that draws on many disciplines including linguistics, psychology, neuroscience, philosophy, computer science and others.

A guiding idea behind psychological AI is sometimes called the computational theory of mind. This is both an empirical hypothesis and a recommendation for research. The hypothesis is that humans have the capacities we do because we have various computational systems. The recommendation is that to understand those capacities you need to understand the systems.

Alter the android, National Museum of Emerging Science and Innovation (Miraikan), Tokyo, Japan (Maximalfocus/Unsplash)

From this point of view, to understand how we speak a language, for example, you would need to understand the underlying computational systems – how they work, how they interact with other systems, how they develop in individuals, what their evolutionary history is, and so on.

The computational theory of mind means that AI is possible in principle. Suppose we have the capacity to understand a language because we have certain computational systems. Since those systems could in principle come about in artificial things rather than naturally occurring things, you could have artificial intelligence: human intelligence without the goo.

At the same time, the likelihood of AI in this sense is extremely remote – a ‘fantasy’ according to scientists Gary Marcus and Ernest Davis. You don’t need to be a Cartesian dualist to say this – someone who thinks the mind can’t be explained scientifically at all. You just need to appreciate how intricate the computational systems underlying thought or language must be, and how limited our current understanding of them is.

What pressing moral challenge does AI in this psychological sense present? Basically, none. Since the likelihood of AI in this sense is so low, the question of how to react if it occurred is not an urgent practical matter.

That is not to say that this ‘what if’ is uninteresting. On the contrary, questions like this are well worth investigating since they teach us about the scope of moral and other principles. But they are not questions of immediate moral concern. It is like asking what would have happened if the Neanderthals had not died out in their evolutionary competition with humans, and continued to live amongst us, perhaps (unfortunately) as second-class citizens – an excellent question, but not an urgent one.

Does this mean that there are no pressing ethical questions surrounding AI? No, because the second notion of AI does raise urgent and serious moral questions.

This notion of AI, technological AI, has its home in the attempt to build technologies that duplicate or mimic things that humans do. That is the notion at issue when people express concern about driverless cars, drone warfare, or the huge data sets used in medical diagnosis, advertising, and political campaigns.

Technological AI is quite different from psychological AI. Mimicry is not reality. That a machine mimics something we do within limits does not begin to show it’s doing what we do when we speak or reason. Moreover, as psychologist Chaz Firestone has recently pointed out, contemporary AI machines often make mistakes no human would dream of making – that’s good evidence their underlying computational nature is quite different from our own.

Technology on its own is neither good nor bad, but it can greatly amplify the human capacity for both. It allows a factory to sack its employees but also opens up the possibility of new jobs in other domains. It can give you the power to wipe out an entire city, but also to produce a vaccine for COVID-19.

Technological AI is no different. It places unprecedented levels of information seemingly at our fingertips but presents it in a way that may entrench existing inequalities – a point explored in different ways by many writers, including Ruha Benjamin, Kate Crawford, and Safiya Noble. No wonder so many universities, such as Oxford and ANU, have established centres to research the ethics of AI.

Interesting and important as they are, the underlying form of these challenges is familiar from an historical point of view. The struggle against the de-humanising effects of technologies and the people who control them did not start with Turing’s 1950 article. Charlie Chaplin portrayed it brilliantly in Modern Times well before Turing wrote his famous paper. And don’t forget Blake’s dark satanic mills.

While the real target of Finkel’s Turing certificate is this second notion, his rhetoric often invokes the first. 'We want rules that allow us to trust AI, just as they allow us to trust our fellow humans', he writes. But if ‘AI’ here is the psychological notion, we don’t at present need such rules, and if ‘AI’ is the technological notion, the proper object of trust is not AI systems but the human beings that make and use them.

None of this is to criticise Finkel’s underlying idea; having a certificate of the sort he suggests may be helpful in dealing with technological AI. But, while we should of course try to react to the moral challenges that confront us, a big part of doing so is identifying them correctly.

Charlie Chaplin and the feeding machine in a scene from Modern Times, 1936 (United Artists/YouTube)
