
Most, but not all, pictures of refugees contain crowds of people, but so do pictures of sporting events, urban areas, and nightclubs. And yet we humans have no problem distinguishing refugees from football fans. A lot of our knowledge is hard to express in words.

Computers became powerful enough to run neural networks in the 1980s, but the networks couldn’t be very large, and training was almost as much work as writing rules, since humans had to label each element of the training set.

In 1985, DARPA funded two teams under the Autonomous Land Vehicle program to develop self-driving cars. Both teams used neural networks to enable their vehicles to recognize the edges of the road. However, the hardware available at the time was not powerful enough, and the systems were easily confused by leaves or muddy tire tracks on the road. Nonetheless, the program established the scientific and engineering foundations of autonomous vehicles, and some of the researchers went on to NASA to develop the Mars rovers Sojourner, Spirit, and Opportunity. All of these autonomous vehicles operated far longer than specified in their mission plans.

In 2004, DARPA issued a Grand Challenge, with a $1 million prize awarded to the first autonomous vehicle to cross the finish line of a 142-mile off-road course in the California desert near Barstow. The most capable vehicle traveled less than 8 miles before getting stuck. In 2005, DARPA repeated the challenge on a slightly shorter but more difficult course, and this time five vehicles crossed the finish line. The teams that developed those vehicles used neural networks to enable better detection of the track and to distinguish obstacles such as boulders from shadows. Many of those researchers went on to develop self-driving car technologies for Google, Uber, and other car manufacturers. Now AI seems to be everywhere.
As AI applications become more common, the current limitations of the technology become more apparent. In particular, machine-learning systems cannot explain their outputs. To address this issue, DARPA is running a program called Explainable AI to develop systems that can produce accurate explanations at the right level of detail for a given user. Systems that can explain themselves will enable more effective human/machine partnerships.

Over the past few years, AI has constantly been in the news, due to rapid advances in face recognition, speech understanding, and self-driving cars. Oddly enough, this wave of rapid progress came about largely because teenagers were eager to play highly realistic video games. Video consists of a series of still pictures flashed on a screen, one after the other, to create the illusion of motion. Realistic video requires the creation and display of lots of high-definition pictures, because they must be displayed at a ferocious rate of at least 60 pictures per second. Video screens consist of a dense rectangular array of tiny dots called pixels. Each pixel can light up in any one of more than 16 million colors. The processors underlying fast-action video games need to create a constant stream of pictures based on the actions of the player and shunt them onto the screen in quick succession. Enter the graphics processing unit, or GPU, a computer chip designed specifically for this task. GPUs rapidly process the large arrays of numbers that represent the colors of the pixels comprising each picture.

In 2009, NVIDIA released powerful new GPUs, and researchers soon discovered that these chips are ideal for training neural networks. This enabled the training of deep neural networks that consist of dozens of layers of neurons. Researchers applied algorithms invented in the 1980s and 1990s to create powerful pattern recognizers. They discovered that the initial layers of a deep network could recognize small features, such as edges, enabling subsequent layers to recognize larger features such as eyes, noses, hubcaps, or fenders. Providing more training data makes all neural networks better, up to a point. Deep neural networks can make use of more data to improve their recognition accuracy well past the point at which other approaches cease improving. This superior performance has made deep networks the mainstay of the current wave of AI applications.

Along with GPUs and clever algorithms, the internet has enabled the collection and labeling of the vast amounts of data required to train deep neural networks. Before automated face recognition was possible, Facebook provided tools for users to label their pictures. Crowdsourcing websites recruit inexpensive labor that AI companies can tap to label pictures. The resulting abundance of training data makes it seem that AI systems with superhuman abilities are about to take over the world.
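The arithmetic behind that "ferocious rate" is easy to sketch. The screen resolution below is an illustrative assumption, not a figure from the text; the color depth is the standard 24-bit encoding that yields the roughly 16 million colors mentioned above:

```python
# Rough pixel throughput for real-time video rendering.
# Assumes a 1920x1080 screen, 24-bit color (3 bytes per pixel),
# and a rate of 60 pictures per second.
width, height = 1920, 1080
bytes_per_pixel = 3                 # 8 bits each for red, green, blue
frames_per_second = 60

colors = 2 ** (8 * bytes_per_pixel)            # distinct colors per pixel
pixels_per_frame = width * height
bytes_per_second = pixels_per_frame * bytes_per_pixel * frames_per_second

print(f"{colors:,} colors per pixel")          # 16,777,216
print(f"{pixels_per_frame:,} pixels per frame")
print(f"{bytes_per_second / 1e6:.0f} MB of pixel data per second")
```

At these assumed settings the GPU must move roughly 373 MB of pixel data to the screen every second, which is why a chip specialized for bulk array arithmetic is needed.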
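The same "large arrays of numbers" framing explains why GPUs suit neural networks: a forward pass through a deep network is mostly matrix arithmetic. Below is a minimal sketch with arbitrary layer sizes and random weights; real deep networks use learned weights and, for images, convolutional layers rather than the fully connected ones shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(x, 0)

# A tiny "deep" network: each layer is a matrix multiply plus a
# nonlinearity. Training adjusts the weight matrices; GPUs accelerate
# exactly these dense array operations.
layer_sizes = [784, 128, 64, 10]   # e.g. a 28x28 image down to 10 classes
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)            # earlier layers: small, local features
    return x @ weights[-1]         # final layer: one score per class

image = rng.standard_normal(784)   # stand-in for a flattened picture
scores = forward(image)
print(scores.shape)                # (10,)
```

The layered structure mirrors the discovery described above: early layers respond to small features such as edges, and later layers combine them into larger ones.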


Profile for Faircount Media Group

DARPA: Defense Advanced Research Projects Agency 1958-2018  
