
What is missing in AI? A 2021 perspective

ALÍPIO JORGE & JOÃO GAMA


The world is experiencing a digital revolution. It started some decades ago and is changing our lives at an ever-growing pace.

Digitalisation enables things (artificial systems) to perceive their context and be aware of relevant information. This opens the way to algorithms that react, make recommendations and plans to fulfil objectives, decide, act in real time, and learn from their successes and mistakes.

In fact, we already have access to algorithms like these. Recommender systems are able to figure out the preferences of millions of users on platforms like Amazon or Netflix.
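To make the idea concrete, here is a toy user-based collaborative filter of the kind that underlies such recommenders, sketched in a few lines. The ratings data and user names are invented for illustration; real systems operate at vastly larger scale and with far more sophisticated models.

```python
from math import sqrt

# Toy ratings: user -> {item: rating}. Purely illustrative data.
ratings = {
    "ana":  {"A": 5, "B": 3, "C": 4},
    "ben":  {"A": 4, "B": 2, "C": 5, "D": 4},
    "carl": {"A": 1, "B": 5, "D": 2},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = sqrt(sum(u[i] ** 2 for i in shared))
    nv = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def recommend(user):
    """Rank items the user has not rated, weighted by user similarity."""
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
                weights[item] = weights.get(item, 0.0) + sim
    return sorted(((s / weights[i], i) for i, s in scores.items()),
                  reverse=True)

print(recommend("ana"))  # item "D" ranked for "ana"
```

The core trick is the same at any scale: users who rated items similarly in the past are assumed to agree on items one of them has not seen yet.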

In some specific medical tasks that require attention to detail, algorithms perform better than human experts.

Computers use neural language models to perform automatic translation at scale - with so much success that translators fear for their jobs. Last but not least, computers can learn to produce convincing, readable texts and natural-looking photographs and videos by learning from examples.

The 5G revolution is coming, promising to deliver on the Internet of Things: Industry 4.0, smart cities, smart grids, smart farming, and more. The world as we know it will change dramatically. The impact of Artificial Intelligence (AI) on our society will be more profound than that of any other technological revolution in human history. Which AI do we want? Will AI expand the human experience or replace it? Will AI empower us to make more informed decisions? Or will it reduce human autonomy?

Will AI create new forms of human activities or make existing jobs redundant?

Current Artificial Intelligence already sounds like the future. So much so that societies are genuinely concerned about the impact of such unrestrained power (and rightfully so). Yet AI has not produced a robust, truly autonomous system, nor do AI systems exhibit Artificial General Intelligence (the ability to address any problem a human could). There are significant challenges to face and very complex problems to solve.

However, we are already concerned about these possibilities, because they seem, more than ever, realistic.

In 2014, Stephen Hawking presaged that AI could end humankind.

AI is often listed among the Global Catastrophic Risks, side by side with climate change and alien invasion (imagine, for instance, a robotic surgeon that learns a new technique and kills a patient). Moreover, AI already has the potential to make autonomous decisions that promote unfairness, and to disseminate disinformation campaigns that threaten democracies.

WHAT IS MISSING?

So, what is the future of such a technology? Trying to answer this type of question definitively is a futile exercise; hence, it is important to be as systematic as possible and to minimise speculation. To begin with, we can ask: what is missing in AI? Pedro Domingos, one of the most important AI researchers, posed this very question in 2006. Domingos claimed that AI, as a field, was missing an interface layer separating low-level work (learning, inference) from high-level developments (planning, NLP, robotics). His proposal combined first-order logic with probabilistic graphical models.

Fifteen years have passed, and the field seems to be converging on neural networks (and deep learning) as a possible interface-layer paradigm.

TensorFlow and PyTorch are good candidate instances in that sense - and they have certainly enabled a good deal of high-level applications by non-initiates. Another relevant effort to provide an interface layer is AI4EU, the first European Artificial Intelligence On-Demand Platform and Ecosystem, dedicated to faster innovation. Big tech companies such as Google and Microsoft probably lead the race to provide such an AI interface.

These proposals, however, are very limited in their scope.

They do not provide tools for planning, higher-order explanation, meta-reasoning (reasoning about reasoning), and other important features of intelligence. They focus on machine learning, and they do so in a relatively shallow way. What do we have now? Tools for classification, regression, recommendation and segmentation. Important new additions to the ML pot include sequence-to-sequence inference and powerful representation learning.

Current machine learning (and therefore AI) often depends on large quantities of labelled data and has little autonomy in that respect. When will we have AI systems that learn concepts by themselves, as babies do?

Perhaps that is the missing link of AI.

Currently, researchers are working on this problem by exploring good old transfer learning and reinforcement learning, few-shot, zero-shot and self-supervised learning, often with the aid of Generative Adversarial Networks - an inescapable trend.
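The core idea of self-supervision - manufacturing the training signal from the data itself, with no human annotation - can be illustrated with a deliberately tiny sketch: a "predict the next word" model trained by counting bigrams. The corpus and function names here are invented for illustration; modern self-supervised models apply the same principle with billions of parameters.

```python
from collections import Counter, defaultdict

# Unlabelled text: the supervision signal (the next word) comes
# from the data itself - the essence of self-supervised learning.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count, for each word, which word tends to follow it.
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen during training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice in the corpus)
```

No one labelled anything here: the raw sequence supplies both inputs and targets, which is exactly what makes self-supervision attractive when labelled data is scarce.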

Therefore, we can affirm that Machine Learning is missing cognitive depth - the ‘deep’ in ‘deep learning’ is a different kind of depth - and that AI, as a technology, is missing many things beyond machine learning. Systems require more than deep learning to be autonomous and, more importantly, to cooperate with us, humans. AI is very useful for machine translation, but it is not able to grasp the meaning of words.

In other words, it is not yet able to break free of Searle’s Chinese room.

We can define goals for an AI system (e.g., improve the profit of an e-commerce platform), but we do not have systems capable of defining their own goals. To interact with us safely and productively, AI systems need robust models of humans (of our limitations, abilities and beliefs) and of our world in general.

An autonomous vehicle will do a better job not only if it can avoid the obstacles it can see, but also if it can foresee the consequences of hitting them (there may be a fatality, and a family left mourning).

AI systems may make very accurate predictions, but they must be able to explain, discuss and improve them, according to new criteria that may arise on the fly. That is to say, to reason and to negotiate.

Moreover, this must be done without dangerous experiments and with a concrete awareness of the consequences.

CONSCIOUSNESS AND EFFICIENCY

Will machines ever become conscious? That depends on the sharpness of our criteria. Researchers are working on causal reasoning and artificial moral agents. In the near future, an AI system may write great news articles from data and from observation, but it will never be the best option for choosing an editorial line.

In journalism, in law, in medicine, in war, in politics and in many other areas of society, only humans can decide what is best for humans. However, AI systems can benefit from ethical and moral layers that may minimise the risk of being a threat to our safety and wellbeing.

There is still a lot to do in the intersection of AI with cognitive science and neuroscience in that respect.

Disciplines such as philosophy, psychology and sociology have developed a complex body of knowledge about people and society that has a lot to offer to AI endeavours.

Artificial intelligence is also missing efficiency. As noted above, ML approaches require vast amounts of data and the intensive training of large models.

Computations are highly energy-demanding. Tuning a very large neural language model with hundreds of millions of parameters may emit more CO2 than a car does in its lifetime. Although this cost may be amortised by frequent use, which is far cheaper than training (e.g., in a chatbot), the total energy consumption and CO2 emissions of AI may not be sustainable. We need more efficient hardware. Edge AI promotes energy savings and deserves consideration. Going a few steps further, there is room for new paradigms such as neuromorphic chips.
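A back-of-envelope estimate makes the scale of training costs concrete. All figures below (number of accelerators, power draw, run length, data-centre overhead, grid carbon intensity) are illustrative assumptions, not measurements of any real system.

```python
# Rough CO2 estimate for a hypothetical training run.
# Every number below is an illustrative assumption.
gpus = 512                 # accelerators used
power_kw_per_gpu = 0.3     # average draw per accelerator, in kW
hours = 24 * 14            # two weeks of training
pue = 1.5                  # data-centre overhead (cooling, etc.)
grid_kg_co2_per_kwh = 0.4  # carbon intensity of the local grid

energy_kwh = gpus * power_kw_per_gpu * hours * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"{energy_kwh:.0f} kWh, ~{co2_tonnes:.1f} t CO2")
```

Even with these modest assumptions the run lands in the tens of tonnes of CO2, which is why carbon intensity of the grid and hardware efficiency both matter as much as the model itself.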

From an engineering point of view, and despite current market offers, AI is missing easy-to-assemble, powerful pipeline builders that include more than machine learning, so people can deploy them easily and efficiently in all types of devices - from large systems to sensors.

Such pipelines would combine different intelligence features: linking sensorial information to the vast amount of data on the web, modelling humans' interaction with the AI system, and predicting the consequences of human- and AI-proposed actions.
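The pipeline idea itself can be sketched as plain function composition, where each stage is a swappable component. The stage names and their toy logic below are entirely hypothetical placeholders for the sensing, contextualising and consequence-prediction features described above.

```python
from functools import reduce

def pipeline(*stages):
    """Compose stages left-to-right into a single callable."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

# Hypothetical stages: sense -> contextualise -> predict consequences.
def sense(raw):
    return {"reading": raw}

def contextualise(obs):
    # Placeholder for linking the observation to web-scale knowledge.
    return {**obs, "context": "linked to external knowledge"}

def predict_consequences(state):
    risk = "high" if state["reading"] > 0.8 else "low"
    return {**state, "risk": risk}

agent = pipeline(sense, contextualise, predict_consequences)
print(agent(0.9))  # the dict now carries reading, context and risk
```

Because each stage only agrees on the shape of the data it passes on, any stage can be replaced (a planner, an explainer, a human-interaction model) without touching the rest - which is the engineering property such builders would need.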

A DEEPER AND MORE HUMANE AI

In summary, judging by what the community currently finds missing in AI, near-future developments will most likely bring deeper, more cognitive, more autonomous and more humane AI. AI systems will have to be more energy-efficient (greener) and robust to hostile environments. And we will probably gain access to an interface layer that enables fast development and deployment of more-than-deep-learning AI solutions.

We are taking extraordinary steps in complementary directions: learning, reasoning, planning, computer vision, natural language processing, etc. What do we need to make everything work together seamlessly? Do researchers need to invest a lot more in artificial consciousness?

Current systems are very far from that point - the “sentience or awareness of internal and external existence” - and we know very little about that! ∎
