Ultimately, people are still far more effective learners than machines. We can learn from teachers, books, observation, and experience. We can quickly apply what we’ve learned to new situations, and we learn constantly in daily life. We can also explain our actions, which can be quite helpful during the learning process. In contrast, deep learning systems do all their learning in a training phase, which must be complete before they can reliably recognize things in the world. Trying to learn while doing can cause catastrophic forgetting, because backpropagation makes wholesale changes to the link weights between the nodes of the neural network.

DARPA’s Lifelong Learning Machines program is exploring ways to enable machines to learn while doing without catastrophic forgetting. Such a capability would enable systems to improve on the fly, recover from surprises, and avoid drifting out of sync with the world.

The knowledge of a trained neural network is contained in the thousands of weights on its links. This encoding prevents neural networks from explaining their results in any meaningful way. DARPA is currently running a program called Explainable AI to develop new machine-learning architectures that can produce accurate explanations of their decisions in a form that makes sense to humans. As AI algorithms become more widely used, reasonable self-explanation will help users understand how these systems work, and how much to trust them in various situations.

The real breakthrough for artificial intelligence will come when researchers figure out a way for machines to learn or otherwise acquire common sense. Without common sense, AI systems will be powerful but limited tools that require human inputs to function. With common sense, an AI could become a partner in problem-solving. Current AI systems seem superhuman because they can do complex reasoning quickly in narrow specialties. This creates an illusion that they are smarter and more capable than they really are.
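Catastrophic forgetting is easy to reproduce at toy scale. The sketch below (a hypothetical illustration in Python with NumPy, not any DARPA system) trains a tiny neural network with plain backpropagation on one task and then on a second, unrelated task. Because the gradient updates rewrite the same shared weights wholesale, accuracy on the first task collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, axis):
    """Points in [-1, 1]^2; the label is the sign of one coordinate."""
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

# Tiny MLP: 2 inputs -> 8 tanh units -> 1 sigmoid output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p.ravel()

def train(X, y, epochs=2000, lr=0.5):
    """Full-batch backpropagation on the cross-entropy loss."""
    global W1, b1, W2, b2
    n = len(X)
    for _ in range(epochs):
        h, p = forward(X)
        d_out = (p - y)[:, None] / n        # gradient at the output logit
        gW2 = h.T @ d_out;  gb2 = d_out.sum(0)
        d_h = (d_out @ W2.T) * (1 - h**2)   # backprop through tanh
        gW1 = X.T @ d_h;    gb1 = d_h.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

def accuracy(X, y):
    _, p = forward(X)
    return float(((p > 0.5) == (y > 0.5)).mean())

XA, yA = make_task(500, axis=0)  # task A: label = sign of x-coordinate
XB, yB = make_task(500, axis=1)  # task B: label = sign of y-coordinate

train(XA, yA)
acc_A_before = accuracy(XA, yA)  # high: the network has learned task A

train(XB, yB)                    # sequential training on task B...
acc_A_after = accuracy(XA, yA)   # ...overwrites the weights task A relied on

print(f"task A accuracy before B: {acc_A_before:.2f}, after B: {acc_A_after:.2f}")
```

Running this typically shows task A accuracy near 1.0 after the first phase and near chance after the second, even though nothing about task A was explicitly unlearned: the gradients for task B simply repurpose the weights that task A depended on.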
For example, you can use an internet search engine to find pictures of cats sitting on suitcases. However, no current AI can use the picture to determine if the cat will fit in the suitcase. To the AI, the thing we recognize as a furry animal that purrs, uses a litter box, and ruins the single most expensive piece of furniture in the house with its needle-sharp claws is just a fuzzy two-dimensional texture. The AI has no conception of a three-dimensional cat.

What would happen if somebody put a cat in a suitcase? Suffocation as a possibility leaps to mind, because we learn while growing up to consider the likely consequences of our actions. We also know that a plush toy cat can go in an airtight suitcase with no ill effect, because we categorize things according to their properties, such as being alive, not just by their appearance (in this case, an inanimate toy that looks just like a real cat). No AI system today can do this type of reasoning, which draws on the immense amount of commonsense knowledge we accumulate over our lifetimes.

Commonsense knowledge is so pervasive in our lives that it can be hard to recognize. For example, most people could easily sort through pictures of furniture to find black couches with white legs. As an experiment, try to find such pictures on the internet. In 2018, at least, your search results will contain mostly pictures of black couches and white couches, with pictures of various other couches thrown in just in case. The color of the legs will be whatever is in fashion, because we don’t yet understand how to create AI systems that can figure out the parts of objects.

AI systems with common sense ultimately could become partners in problem-solving, rather than just tools. For example, in emergency situations, people tend to make snap decisions about the cause of the problem and ignore evidence that doesn’t support their point of view. The cause of the Three Mile Island accident was a stuck-open valve that allowed cooling water to escape from the reactor containment vessel. The heat of the reactor caused the remaining water to turn to steam, which increased the vessel pressure. The operators decided that the high pressure meant that there was too much water, and made the situation worse by overriding the automatic emergency cooling system. An AI that could understand control-room conversations and vet them against its own models of reactor operation might be able to suggest alternative possibilities before the human operators commit to a particular course of action. To act as a valued partner in such situations, the AI system will need sufficient common sense to know when to speak and what to say, which will require that it have a good idea of what each person in the control room knows. Interrupting to state the obvious would quickly result in its deactivation, particularly under stressful conditions.

DARPA is mounting a major initiative to create the next generation of AI technologies, building on its five decades of AI-technology creation to define and to shape what comes next. DARPA’s substantial AI R&D investments will increase to fund efforts in the following areas:

New Capabilities. DARPA programs routinely apply AI technologies to diverse problems, including real-time analysis of sophisticated cyber attacks, detection of fraudulent imagery, human language understanding, biomedical advances, and control of prosthetic limbs. DARPA will advance AI technologies to enable automation of complex business processes, such as the lengthy and painstaking accreditation of software systems required for aviation, critical infrastructure, and military systems.
Automating this accreditation process with known AI and other technologies now appears possible, and would enable deployment of safer technologies in less time.

Robust AI. As noted above, the failure modes of AI technologies are poorly understood. The data used to train such systems can be corrupted. The software itself is vulnerable to cyber attacks. DARPA is working to address these shortfalls by developing new theoretical frameworks, backed by extensive experimental evidence, to fortify the AI tools we do develop.

High-Performance AI. In combination with large data sets and software libraries, improvements in computer performance over the last decade have enabled the success of machine learning. More performance at lower electrical power is essential to extend the use of AI to data-center applications and tactical deployments. DARPA has demonstrated analog processing of AI algorithms that operates a thousand times faster, using a thousand times less power, than state-of-the-art digital processors. New research will investigate AI-specific hardware designs and address the inefficiency of machine learning by drastically reducing requirements for labeled training data.

Next-Generation AI. DARPA has taken the lead in pioneering research to develop the next generation of AI algorithms, which will transform computers from tools into problem-solving partners. New research will enable AI systems to acquire and reason with commonsense knowledge.

DARPA R&D produced the first AI successes, such as expert systems and search utilities, and more recently has advanced machine-learning tools and hardware. DARPA is now creating the next wave of AI technologies that will enable the United States to maintain its technological edge in this critical area.