
AI and Robotics in the Twenty-First Century: A Tsunami Without a Safety Net

(Martha’s Vineyard Men’s Group, August 29, 2018)

In an 1883 essay entitled The Right to Be Lazy, Paul Lafargue, a French Marxist who was married to Karl Marx’s daughter Laura, wrote clairvoyantly but somewhat prematurely and naively:


Our machines, with breath of fire, with limbs of unwearying steel, with fruitfulness, wonderful inexhaustible, accomplish . . . with docility their sacred labor. And nevertheless the genius of the great philosophers of capitalism remains dominated by the prejudice of the wage system, worst of slaveries. They do not yet understand that the machine is the saviour of humanity, the god who shall redeem man from the sordidae artes [dirty work] and from working for hire, the god who shall give him leisure and liberty.

It seems clear that those nineteenth- and twentieth-century machines that Lafargue extolled never succeeded in meeting his hopes and expectations of emancipating man from the drudgery of toil. Apparently, the owners of those machines had other outcomes in mind. But now we appear to be entering a new period of advanced mechanization and automation, propelled by artificial intelligence and robotics, which offers some renewed hope of fulfilling Lafargue’s dream. In this essay I propose briefly to examine these developments in an effort to explore and understand these new possibilities, and what man might make of them, or they might make of him.

AI

Artificial intelligence (AI) has a history that may be familiar to some of you. It was the subject of two movies, namely The Imitation Game (2014) and Breaking the Code (1996), both of which deal with the life of an Englishman, Alan Turing, and his achievement of breaking the Nazi “Enigma” code during World War II. He did this by developing a machine called the Bombe at Bletchley Park, a secret British cryptography intelligence installation near London. Turing’s machine proved itself capable of generating alternative settings that decoded changing German secret messages and codes. His work is credited with shortening World War II by two to four years. Turing completed his PhD thesis in 1938 at Princeton, entitled “Systems of Logic Based on Ordinals.” Later, in 1950, he published a paper entitled “Computing Machinery and Intelligence,” which addressed the question “Can machines think?” Today his “Turing test” is used to determine whether humans and AI are, or are not, distinguishable. At a recent AI conference, the Turing test seems to have been passed when an AI-driven Google robotic assistant named Duplex booked restaurant and hair salon reservations from unsuspecting human receptionists who took its calls. The robot threw in a few random “ums,” “ahhs,” and other verbal fillers to lend itself credibility. So, soon there may be fewer shouts of “Agent!” by callers like me, when we are speaking to robots, like our friend Julie on Amtrak.

Tragically, as both Turing films relate, Turing was charged criminally in England in 1952 with “gross indecency” for being gay. He avoided prison by agreeing to inhumane hormone treatments. He died at age forty-one in 1954 of cyanide poisoning, which was ruled a suicide. Turing is generally considered to be the father of AI. He has been honored posthumously by the Crown.

A more recent demonstration of AI occurred in the 1990s, when an IBM supercomputer known as Deep Blue played a pair of chess matches with world chess champion Garry Kasparov. The first match was won by Kasparov in 1996 and the second by Deep Blue in 1997. A documentary entitled The Man vs. The Machine (2014, available online) describes the 1997 match. Deep Blue’s victory was seen as a demonstration that AI was surpassing human intelligence, in that it could defeat a human world chess champion. Kasparov is today a strong proponent of AI, and in 2017 he wrote a book entitled Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. He recently remarked that “today, you can buy a chess engine for your laptop that will beat Deep Blue quite easily.” Kasparov also commented that the reason machines can outplay humans in chess is that humans are, simply put, human: prone to mistakes, fatigue, and the strains of pressure and emotion, while machines are being produced that are ever closer to “perfection.” And machines can absorb, manipulate, and store millions of times more information than humans can. Humans die, taking much of what they knew with them, while machines live on, virtually forever.

Think about comparing human drivers of cars and trucks with self-driving vehicles. Of course, mistakes by the latter (especially those causing death, as occurred in Tempe, Arizona, in March of this year) at first blush will not be tolerated, while human driving errors are taken for granted and, while we try to limit them, are acceptable.

Consider that in 2017, there were 40,100 traffic fatalities in the U.S., when our population was 325 million—or more than 100 deaths each day. The highest number ever of such fatalities was 54,589 in 1972, when our population was 209 million—or 150 each day. And it is estimated that nine out of ten of such fatalities were due to human rather than mechanical error. So there has been substantial improvement, demonstrated by a decline of deaths over the years. But we don’t yet know what comparable figures might be if and when self-driving vehicles are universal, with human error eliminated. Time will tell, or maybe AI will. Meanwhile, companies such as Lyft, Uber, and Waymo, as well as automakers such as GM and Toyota, are spending billions on developing self-driving vehicles.

On this very point, a recent RAND Corporation study entitled “The Enemy of Good: Estimating the Cost of Waiting for Nearly Perfect Automated Vehicles” is highly instructive. RAND asked the question, “How safe should highly automated vehicles (HAVs) be before they are allowed on the road for consumer use?” RAND compared scenarios in which such vehicles were fully deployed when their performance was 10 percent better than the average human driver against a model in which deployment waited until performance was 75–90 percent better. It concluded that in both the short and long runs, tens and hundreds of thousands more lives would be saved, and injuries avoided, by deployment in the former case—i.e., at 10 percent better, rather than waiting until 75–90 percent better has been achieved. RAND, nevertheless, concluded as follows:

Deploying HAVs when their safety performance is just better than that of the average human driver may be too permissive given social expectations about the safety of robots, machines, and other automated systems, but waiting for improvements many times over or waiting for perfection may be too costly. Instead, a middle ground of HAV performance requirements may prove to save the most lives overall.

This is the type of question that it will be difficult for policy makers to resolve in the future, based in part on public opinion and public tolerance.
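The logic behind RAND’s conclusion can be sketched in a few lines of Python. What follows is a minimal toy model, not RAND’s actual model: the baseline fatality count is roughly the 2017 U.S. figure, while the deployment years, learning rate, and safety cap are illustrative assumptions of my own. The key premise, which RAND shares, is that on-road deployment itself accelerates safety improvement.

```python
# A toy model of the RAND question: deploy HAVs early at "a little
# better than humans," or wait until they are far better? All
# figures below are illustrative assumptions, not RAND's inputs.

BASELINE_DEATHS = 40_000  # approximate annual U.S. road deaths (2017)

def total_deaths(deploy_year, initial_improvement,
                 horizon=30, learning_per_year=0.06, cap=0.90):
    """Total road deaths over `horizon` years.

    Humans drive until `deploy_year`; after that, HAVs drive with a
    safety advantage that starts at `initial_improvement` (0.10 means
    10% safer than the average human driver) and grows with each year
    of on-road experience, up to `cap`.
    """
    deaths, improvement = 0.0, initial_improvement
    for year in range(horizon):
        if year < deploy_year:
            deaths += BASELINE_DEATHS  # humans still driving
        else:
            deaths += BASELINE_DEATHS * (1 - improvement)
            improvement = min(cap, improvement + learning_per_year)
    return deaths

early = total_deaths(deploy_year=1, initial_improvement=0.10)
late = total_deaths(deploy_year=15, initial_improvement=0.80)
print(f"deploy at 10% better: {early:,.0f} deaths over 30 years")
print(f"wait for 80% better:  {late:,.0f} deaths over 30 years")
print(f"lives saved by early deployment: {late - early:,.0f}")
```

Under these made-up numbers, early deployment comes out hundreds of thousands of deaths ahead over thirty years—the shape, though not the substance, of RAND’s finding.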

As to another AI system, in 2011 IBM’s question-answering computer, Watson, competed on the TV quiz show Jeopardy! against two legendary Jeopardy! champions. Watson won the first-place prize of $1 million, which was donated to charity. IBM announced in 2013 that Watson’s software system would be used for management decisions at Memorial Sloan Kettering Cancer Center in New York City in connection with lung cancer treatment. And many other innovative uses of Watson already are being employed in other areas worldwide. Today, companies such as Macy’s, Staples, GEICO, and H&R Block are using Watson technology for various purposes.

In the case of H&R Block, Watson is now assisting the company’s 70,000 tax professionals at its 10,000 branches in the United States, where 11 million people file their tax returns. Block had to input vast amounts of text and data, including 74,000 pages of Internal Revenue Code provisions and regulations, as well as vast amounts of Block data accumulated over six decades. Some 60 percent of the 140 million Americans who file federal income tax returns presently seek help in doing so. Tax preparation is a $200 billion annual industry. H&R Block has a 75 percent retention rate of clients, which, with Watson’s help, it hopes to increase. Some of you may recall, however, that Donald Trump boasted during his presidential campaign that his tax reforms would put H&R Block out of business. In fact, however, from what I’ve read about the new tax law, if you can get your return onto the promised “postcard,” there will be plenty of worksheets required to fill it out. H&R Block appears unconcerned.

I already have described the success of computers in defeating a great chess champion. More recently, success was achieved by a computer program named AlphaGo, which consistently defeated champions in the complex Asian game of Go. But chess and Go are games that provide perfect information to the computer so that it is able to know and anticipate all of the millions of possible moves of its opponents.

But in 2017, a more far-reaching AI achievement was attained when an AI application, or algorithm, known as DeepStack defeated eleven professional poker players in Texas Hold’em poker. The reason that this accomplishment is so significant is that poker is the quintessential game of imperfect information, because the computer does not know which hole cards its opponents are hiding. And it is a game in which there are more unique situations than there are atoms in the universe. DeepStack relies upon intuition, which needs to be trained and retrained during each phase of a game. So despite the bluffing and small tactics of deception in Texas Hold’em, the computer came out the ultimate winner over more than forty-four thousand poker hands.

In fact, human decision makers similarly must make decisions without perfect information all the time, and they must rely upon hunches and other “unscientific” methods to make up for such lack of information. Thus Deep Stack’s poker achievement augurs well in assisting humans to make better decisions of all kinds in the future, even without complete information. This is the difference between inductive and deductive reasoning.

Robots

Next, let’s look at robots for a moment. When I was in junior high school back in Brooklyn in 1948, one of my precious, and prescient, social studies teachers had us read a play by the great Czech writer Karel Čapek, entitled R.U.R.: Rossum’s Universal Robots. It was written ninety-eight years ago, in 1920. Čapek was an important artistic and literary figure in Czechoslovakia between the wars as well as an anti-communist and anti-fascist liberal political leader who was close to the great Czech leader Tomáš Garrigue Masaryk. I reread the play for the first time in seventy years for the purposes of this talk.

Čapek’s play is remarkable in many ways, one of which is that it is responsible for the origin of the word robot. Čapek attributed his use of the word to his brother Josef, an artist and writer, who suggested it to him. The term derives from the Czech word robota, meaning “forced labor” or “drudgery” or “servitude,” which is related to the work that had been performed by serfs for their lords during feudalism.

The idea of an artificial human being has ancient roots. Even in Greek mythology, as well as in ancient Egypt, China, and Renaissance Italy, the idea of a so-called automaton was known. Da Vinci sketched a plan for a humanoid mechanical knight in 1495. And Mary Shelley’s and Mel Brooks’s Frankenstein monsters, as well as S. Ansky’s The Dybbuk, are even more familiar.

When in 1940, at age five, I attended the New York World’s Fair in Flushing Meadow Park, I saw the Westinghouse robot, Elektro, which was seven feet tall, weighed 265 pounds, and could walk by word command, speak seven hundred words, smoke cigarettes, blow up balloons, and move its head and arms. It was sort of a huge Wizard of Oz Tin Woodman, played by Jack Haley. (Remember Harold Arlen’s “If I Only Had a Heart” from the movie that opened in 1939.) Indeed, the Čapek play ends with a male and a female robot falling in love.

The play R.U.R. involves a country where lifelike robots have been invented and are being mass-produced to replace human workers. The “rub” in the play comes when there is a revolt by the soulless robots, who plan to wipe out humanity and take over the world. Here’s a quote from one of the robot promoters and producers in R.U.R. about robotization:

Within the next ten years Rossum’s Universal Robots will produce so much wheat, so much cloth, so much everything that things will no longer have any value. Everyone will be able to take as much as he needs. There’ll be no more poverty. Yes, people will be out of work, but by then there’ll be no work left to be done. Everything will be done by living machines. People will do only what they enjoy. They will live only to perfect themselves . . . [T]hen the subjugation of man by man and the enslavement of man by matter will cease. Never again will anyone pay for his bread with hatred and his life. There’ll be no more laborers, no more secretaries. No one will have to mine coal or slave over someone else’s machines. No longer will man need to destroy his soul doing work that he hates.

Karl Marx and Paul Lafargue, take note.82

A more recent projection of a robotized future is provided by Paul Dumouchel and Luisa Damiano, as declared in their recent book that describes the advantages of robots over human workers:83

Unlike human workers, robots do not become tired (although they do sometimes break down); they do not complain; they are never distracted in the course of performing their duties; they do not go on strike; they never have a hangover on Monday morning. . . . [They] cost less. They are often more efficient and more precise than human workers. They have no need for retirement plans, health insurance, or legal rights. We want robots to have all the useful qualities that masters insist upon in their slaves, bosses in their employees, commanders in their soldiers; and we want them to have none of the weaknesses, none of the failings, and, above all, nothing of that irrepressible tendency to willful insubordination and independence of mind that is found in almost all human workers.

82. This year celebrates Marx’s two-hundredth birthday.

83. See their Living with Robots (Cambridge, MA: Harvard University Press, 2017).

A not unexpected additional development in robotics today is sex robots, both male and female. Think of one of those attractive department store mannequins coming “alive,” with moving parts, as a companion and playmate. Your fertile, and well-developed, male imaginations, aided by a visit to the “sex robot department” at Google, will provide you with all the details, including pricing. Think of Ovid’s Pygmalion, who carved a statue of a beautiful woman who thereafter came to life.84

I have added as an appendix to this essay a listing of YouTube entries where you can watch robots at work and play, doing such things as surgery, producing cars, laying brick, milking cows, moving packages in warehouses, running on two and four legs, playing soccer, etc.

The Past

Despite their novelty, both today’s AI and robotics can be seen as continuing developments in ideas and practices that have their antecedents in industrial life going back to the late nineteenth century. It was then that Frederick Winslow Taylor began to apply theories and practices of industrial management that emphasized efficiency and increasing productivity from workers. Scientific management is often still referred to as “Taylorism.” In 1913, Vladimir Ilyich Lenin described Taylorism as a “‘scientific’ system of sweating” more work from laborers. Of course, thereafter, the Soviet Union, under both Lenin and, later, Stalin, embraced Taylorism warmly, as well as even much harsher methods of industrial compulsion. In a speech given in 1919, after seizing power in Russia, Lenin declared: “The possibility of socialism will be determined by our success in combining Soviet rule and the Soviet organization of management with the latest progressive measures of capitalism . . . [including] the study and teaching of the new Taylor System and its systematic trial and adoption.”85

84. On the other hand, see Sherry Turkle, “There Will Never Be an Age of Artificial Intimacy,” The New York Times, August 11, 2018.

Of Taylor, sociologist Daniel Bell, in his classic 1956 essay, Work and Its Discontents: The Cult of Efficiency in America, said: “He couldn’t stand to see an idle lathe or an idle man. He never loafed and he’d be damned if anyone else would.”86

Actually, the term scientific management was popularized in 1910 by Louis Brandeis, then still a crusading Boston attorney (he went on to the Supreme Court in 1916), who argued before the Interstate Commerce Commission that the railroads should not be permitted to raise rates in the face of their operational inefficiencies and that they could instead increase profits through adopting scientific management. As for his own employer clients, Brandeis urged them to adopt scientific management efficiencies while also sharing the resulting increased profits with their employees. In addition, he sought to achieve year-round employment for workers instead of their being subjected to seasonal layoffs, and he was a strong advocate of worker and union rights.

According to Taylor, workers who performed repetitive tasks tended to work at the slowest rate that would go unpunished. This he called “soldiering,” which is still used today to describe “goldbricking,” malingering, goofing off, shirking, or slacking. Taylor described “soldiering” as “the greatest evil with which the working people . . . are now afflicted.” I am reminded of the song “Oh, How I Hate to Get Up in the Morning” in Irving Berlin’s musical This Is the Army, which goes “and then I’ll get that other pup, the guy who wakes the bugler up, and spend the rest of my life in bed.”

In addition to Taylor, there was the work of his rival, Frank B. Gilbreth, along with his wife, Lillian Moller Gilbreth, who specialized in time-motion studies of workers’ movements, designed to introduce plant and worksite efficiencies. (You may remember the Gilbreths as the parents of a large and very undisciplined New Jersey family in the book, and later film, Cheaper by the Dozen.)

85. Richard G. Olson, Scientism and Technocracy in the Twentieth Century: The Legacy of Scientific Management (Lanham, MD: Lexington Books, 2016), 62.

86. Bell, p. 6.

One of the methods of addressing the perceived slow and inefficient speed of work was the assembly line and the “speedup.” Think of Charlie Chaplin engaged in repetitive work on the production line in his movie Modern Times. And today the assembly line in chicken-processing plants in this country, which employ a quarter million workers in 174 plants, is far worse than that faced by Chaplin. Each year, about 100 workers die, and there are 300,000 work injuries in the domestic poultry industry. As put by Debbie Berkowitz, who was with President Obama’s Labor Department:

In my work at the Occupational Safety and Health Administration, I witnessed the dangers: poultry workers stand shoulder to shoulder on both sides of long conveyor belts, most using scissors or knives, in cold, damp, loud conditions, making the same forceful movements thousands upon thousands of times a day, as they skin, pull, cut, debone, and pack the chickens. The typical plant processes 180,000 birds a day. A typical worker handles 40 birds a minute.

In September 2017, the poultry industry’s National Chicken Council petitioned the U.S. Department of Agriculture to eliminate the maximum chicken processing speed of 140 birds a minute on the ground that foreign competitors process chickens at more than 200 a minute. In January of this year, the NCC petition was, somewhat surprisingly, denied even by the Trump Agriculture Department. At present there is a similar proposal pending to speed up hog processing in plants.

But, as usual, I digress.

Predictions

I have suggested earlier that one aspect of AI is dramatically improved predictability of outcomes.87 Regrettably, however, AI is not particularly useful at predicting its own impact upon the world, which remains highly speculative and unpredictable. Nevertheless, a number of mavens have made predictions about the future impact of AI. Let me share a few with you:

• Alan Turing predicted that machines will “outstrip our feeble powers” and “take control.”

• Sundar Pichai, the CEO of Google, has predicted that AI will have a “more profound impact on society than fire or electricity.”

• Vladimir Putin recently told Russian schoolchildren that “the future belongs to artificial intelligence” and that “whoever becomes the leader in this sphere will become the ruler of the world.”

• And former Treasury Secretary Lawrence Summers predicted that by 2050, “we may have a third of men [in the U.S.] between the ages of 25 and 54 out of work.”

The McKinsey Global Institute predicted that by 2030, up to 375 million people, which is 14 percent of the global workforce, may have their jobs automated out of existence by AI and robotics. This may not include automating a part of a job, as in the relationship of ATMs to the job of a bank teller. Indeed, the two may presently be doing much of the same job, but there is no question that ATMs have reduced the need for tellers. You might be interested to know that there are about half a million bank tellers in the United States today whose mean hourly wage is $13.89 and whose mean annual wage is $28,880.

87. See Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Boston: Harvard Business Review Press, 2018).

Indeed, it’s hard to realize that ATMs turned fifty last year. In fact, however, this year banks are rolling out a new generation of ATMs with larger, digitally enabled screens akin to tablets, which will provide almost all the services human tellers provide, as well as other new technology, such as connecting ATMs to iPhones.

JPMorgan Chase has more than 16,300 ATMs and 5,300 branches in the U.S., with 45 million digital users, many of whom use live tellers as well. Bank of America has 15,900 ATMs and 4,600 branches. Its new ATM will be called XATM, or “Extreme ATM,” at least for a while. We shall see how many tellers survive the XATMs.

Please note, too, that McKinsey was not predicting how many jobs would be created by AI and robotics. But many jobs are being created, rapidly, in programming, data science, equipment production, installation, and maintenance. Indeed, at the moment, there are more such openings than there are people with the skills to fill them.

McKinsey also predicted that about 32 percent of today’s U.S. workforce of 166 million will have to leave their present occupational categories over the next twelve years. And they will be occupying many jobs that did not exist before. Indeed, since 1980, at the dawn of the personal computer and before the advent of the internet, PCs have created close to 20 million jobs.

Despite its achievements, critics of AI abound. Some say that AI has yet to master the understanding and the determining of cause and effect. Where all the facts and possibilities are present, as in chess, there appear to be no mistakes, but where they are not, mistakes are bound to occur, say these critics.88 And others, including Henry Kissinger,89 argue that while AI may have an advantage over man in absorbing vast amounts of data and reaching conclusions therefrom, it lacks the human ability to learn and apply history, social science, philosophy, ethics, and human ingenuity, experience, and judgment to resolving problems.

88. See Judea Pearl and Dana Mackenzie, “AI Can’t Reason Why,” Wall Street Journal, May 18, 2018, and Gary Marcus and Ernest Davis, “A.I. Is Harder Than You Think,” The New York Times, May 18, 2018.

Still others argue that AI discriminates against minorities, women, and the poor because the data that it analyzes is skewed to begin with. To put it in an old formulation, “garbage in, garbage out.”90

There is hardly an industry or country today in which AI and robotics are not being employed and are having an impact. Education, government, warfare, agriculture, science, law enforcement, manufacturing, hospitality, transportation, communications, and health care are several broad categories in which AI and robotics are having a greater and greater influence. It would be impossible in the time permitted to examine them all, but I’ll just look briefly at health care as an example.

For the 7.6 billion people living on earth today (up from 3 billion in 1960 and 1.6 billion in 1900), in addition to clean air, water, sufficient nutritious food, education, clothing, and shelter, adequate available health care is an essential ingredient of a humane human existence. And, organized as we are politically into large and small nation states and smaller political units, and economically into vast global corporations, smaller commercial entities, and NGOs, the task of providing health care to the world’s people is falling more and more to a partnership between government and business, as well as to the actual human providers and deliverers of health services, such as doctors, nurses, EMT personnel, aides, technicians, and administrators, who are becoming ever larger and more essential cogs in a huge worldwide health care wheel. Indeed, employment in health care in the United States is about 19 percent, exceeding both retail (10 percent) and manufacturing (8 percent) employment.

89. See Henry A. Kissinger, “How the Enlightenment Ends,” The Atlantic, June 2018.

90. See Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018) and Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018).

The World Health Organization tells us that in its seventy-year existence, human longevity has increased by twenty-five years, but also that more than half the world’s population presently lacks adequate health care. In this context, the introduction of AI and robotics to the multi-trillion-dollar global health care industry is presently in progress at an increasing rate.

As in many of the areas that we have discussed this summer, such as international trade and health care manpower, China, with its 1.4-billion population, is engaged in a huge effort to build its national health care system with the help of AI and robotics. Indeed, with its undemocratic political structure and economic controls, the Chinese government appears better able to move vast national resources in a direction that seeks to meet its people’s actual health care needs without having to accommodate itself to corporate and other political pressures, as appears to be the case in our country. Only last month in China, a nationally televised competition between highly trained doctors and a robotic doctor named BioMind was conducted to promote AI in Chinese medicine. The test related to the detection of brain tumors.

China’s medical system is deeply stressed and overloaded in the big cities and is weak, failing, corrupt, or nonexistent in rural areas. AI and robotics are seen by the Chinese government as a way to jump-start its highly inefficient health care system. So the government is throwing billions into this long-term effort.

BioMind was created as a joint venture by a prestigious Chinese hospital and a Singaporean tech company. The project began about eight months ago, when tens of thousands of medical images were fed into the robot. This was followed by a period of what is called “deep learning.” Once BioMind was prepared, it took on twenty-five top specialists regarding their respective abilities to detect brain tumors. The competition was presented on national TV like a glitzy game show. BioMind bested the doctors handily in both of the competitive rounds. In round one, the robot answered 87 percent of the questions correctly, with the doctors scoring 66 percent. In round two, the score was 83 percent to 63 percent in favor of the robot.

I might add that it has been reported that a programmed robot in China achieved a score of 96 percent in a medical licensing examination. Do you think AI might cut down on those four years of medical school?

AI and robotization are spreading into many areas of medicine in China, such as diagnosis, surgery, and health monitoring through wearable devices. It should be mentioned that China’s worldwide leadership in medical technology is a matter that has deeply concerned the American government. Indeed, Trade Representative Robert Lighthizer so testified before Congress this year.

Another recent illustration of AI’s skill at diagnosis occurred in London, where researchers from Google’s DeepMind subsidiary, University College London Hospital (UCH), and a large British eye hospital developed software able to diagnose more than fifty eye ailments; its recommendations matched those of eight expert ophthalmologists 94 percent of the time.

A major issue being confronted in such diagnostic areas is whether AI will be permitted to make medical decisions independently or whether doctor involvement will still be required. On this point, in April the FDA approved the first AI-powered program that makes clinical decisions without doctor intervention. The software is designed to detect an eye disorder known as diabetic retinopathy, a complication that threatens the more than 30 million Americans living with diabetes. The software delivers a negative or positive result, without the necessity of a physician’s review.

This spring, the National Institutes of Health rolled out the All of Us Research Program, seeking to enlist 1 million volunteers from all walks of American life and racial and ethnic groups who are willing to provide their medical records, have their genomes sequenced, and provide regular blood samples and vital signs. In addition, the volunteers’ physical activity and eating habits may be recorded and studied. This will be the largest health study ever conducted, creating a “biobank” of data designed to provide new information and insights about how to treat chronic illnesses and learn more about disease prevention. Many other governments and health care providers worldwide are engaged in similar studies, although of a much lesser magnitude.

Health Care and Employment

While technology promises to make providing health care more effective and efficient, the fact that there is a huge unmet need for care has caused experts to conclude that employment in what is being described as the “care sector” will continue to grow rather than decline in the years ahead. Further, most jobs in the care sector require that those providing care and those being cared for be in the same place at the same time—that is, doctors’ offices, hospitals, nursing homes, etc. These are jobs that cannot be outsourced to foreign countries, as call centers or manufacturing plants can be. To be sure, some administrative work, as in insurance processing, the reading of X-rays, or the manufacture of drugs and health equipment, may presently be outsourced, but for the most part, care employment is located at the point of providing the care. Further, while robots are being introduced in hospitals and elsewhere to do routine deliveries and other tasks, much of care work is unpredictable and is not amenable to robotization, although some efforts in this direction are underway as well. As a matter of fact, of the twenty jobs that the United Nations has declared to be least likely to be replaced by automation, fourteen are in the health care field.

And the workers needed to fill these jobs are in high demand worldwide. Mostly this is so at the low-wage end of such employment, especially insofar as care for the growing number of advanced-aged individuals throughout the world is concerned.

In this country, at the end of June of this year, there were 6.7 million job openings, with close to 2 million in health-care-related positions. And there were 5.5 million unemployed persons looking for work. This discrepancy has many explanations, but a major one is that those looking for work do not have the skills to fill the positions involved, which are rapidly changing in job content and requirements, often because of changing technology. Thus improved training programs are essential if our workforce is to be positioned to perform the newly created jobs that advanced technology is producing.91

I mentioned earlier that human error accounted for most of the more than 40,000 annual deaths from auto accidents. But many of you might be surprised to learn, as I was, that a recent Johns Hopkins study, led by Dr. Martin Makary of the Hopkins Medical School, concluded that more than a quarter of a million people in the United States die of medical mistakes annually and that such mistakes are the third-largest cause of death, after heart disease and cancer. Other studies have estimated the figure to be as high as 440,000. Of course, physicians, coroners, and medical examiners seldom list medical error as the cause of death. Dr. Makary defines a death due to medical error as one that is caused by inadequately trained staff, error in judgment of care, a system defect, or a preventable adverse effect. This includes computer breakdowns, mix-ups in medications administered, and undiagnosed surgical complications. While there are many recommendations for remedying this situation, the advent of better medical record keeping, patient involvement, and the application of artificial intelligence are seen as important additions to coping with this serious problem.

91. For more detailed examinations of work in the technological future, see Confronting Dystopia: The New Technological Revolution and the Future of Work, edited by Eva Paus (Ithaca, NY: ILR Press, 2018) and Darrell M. West, The Future of Work: Robots, AI, and Automation (Washington, DC: Brookings, 2018).

From what I have just described, it seems clear that AI and robotics, despite their incredible benefits, have the potential for causing substantial disruption to the American economy—and steady and stable employment, in particular. We know how devastating and disruptive the movement of our economy from farming to industrialization was between 1880 and 1940. And Joe Bowers described for us a few weeks ago the severe impact of the economic meltdown of 2008/2010. So the final question I propose to look at is how prepared we are to cope with the potential adverse impact of the new technological revolution that AI and robotics portend.

And for this I go back to my talk here a few years ago upon the publication in 2014 of Thomas Piketty’s Capital in the Twenty-First Century, in which he described the incredible process of the smallest one-tenth of 1 percent of the population’s increase in both wealth and power and the corresponding impoverishment of the lower half. Over the last several years, the inequality has only become worse. Indeed, the bottom line is that the ability of a society to withstand broad economic disruption depends on the economic resources available to the many to cushion the blows, including its safety nets, as well as government leaders prepared to deal with the new realities. A couple of us may remember the breadlines of the Depression and maybe even FDR’s second inaugural address in 1937, in which he observed “one-third of a nation ill-housed, ill-clad, ill-nourished.” It was in that speech in which he said that we needed “to find through government the instrument of our united purpose to solve for the individual the ever-rising problems of a complex civilization. . . . For, without that aid,” we were, he said, “unable to create those moral controls over the services of science which are necessary to make science a useful servant instead of a ruthless master of mankind. To do this,” he continued, we know “that we must find practical controls over blind economic forces and blindly selfish men.”

Sounds like an appropriate prescription for our present day as well.

As for the continuing growth in inequality, and the increased impoverishment of most of the people in this country, let me relate the findings of Matthew Stewart, in an article in the June issue of the Atlantic, entitled “The 9.9 Percent Is the New American Aristocracy.” Stewart concludes that people with a net worth of roughly $1.2 million or more are in the top 9.9 percent economically, and that about $10 million in assets puts one in the top 1 percent. Today, the top one-tenth of 1 percent, consisting of 160,000 households, owns 22 percent of the nation’s wealth, and the next 9.9 percent owns 55 percent. Thus, together the top 10 percent owns 77 percent, or over three-fourths of the nation’s wealth. And what about the other 90 percent? Today they own the remaining 23 percent of the wealth, but that’s down from 35 percent in the mid-1980s. And remarkably, all of their lost 12 percent went to the top one-tenth of 1 percent, with the next 9.9 percent just holding its own. Further, there is no sign that this upward movement of national assets will abate. Indeed, in the last ten seconds, Jeff Bezos, the founder and owner of Amazon, made more money than the median employee of Amazon makes in an entire year. According to Time magazine, in the first four months of this year, Bezos’s wealth increased by $275 million each day, for a total increase in wealth of $33 billion in those four months.
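For those who like to check the arithmetic, Stewart’s shares and Time’s Bezos figures work out in a few lines of Python; the inputs are simply the numbers quoted above, nothing new is assumed:

```python
# Checking the wealth-share arithmetic quoted from Stewart's article.
top_tenth_of_one_percent = 22   # share held by the top 0.1% (160,000 households)
next_9_9_percent = 55           # share held by the "9.9 percent"

top_10_percent = top_tenth_of_one_percent + next_9_9_percent
bottom_90_percent = 100 - top_10_percent

print(top_10_percent)           # 77 -- over three-fourths of the wealth
print(bottom_90_percent)        # 23 -- the bottom 90%'s share today
print(35 - bottom_90_percent)   # 12 -- points lost since the mid-1980s

# Time's Bezos figures: a $33 billion gain over the first four months
# of 2018 is roughly $275 million per day.
print(33e9 / (4 * 30))          # 275000000.0
```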

Also, the compensation of the CEOs of top U.S. firms has skyrocketed in recent years so that in 2017, they averaged $18.9 million, compared with $62,000 for the typical worker in the same industry. This is a ratio of 312 to 1. Indeed, since 1978, CEO compensation has increased 1,070 percent, compared with just 11 percent for working people.

And when you consider the impact of the 2017 Republican tax law favoring corporations and the rich, the disparity between the rich and everyone else will be even more staggering in the years to come. The old neoliberal maxim about a rising tide lifting all boats does not appear to be holding water.

But you may be saying to yourselves at this point, “Unemployment is presently under 4 percent, so why should we assume that it will rise again to the 10 percent it hit in October 2009 or worse?” In fact, however, our problem, today and in the future, is a crisis in the creation of good jobs with a reasonable amount of job security and health and retirement benefits. Indeed, today America’s typical worker earns around $44,500 a year, which is not much more than was earned in 1979, adjusted for inflation. Almost 80 percent of Americans say they live from paycheck to paycheck, and 40 percent say they could not raise $500 in cash for an emergency. To me, this is not the way most of the citizens of the richest country on earth ought to be living.

In response to all of the foregoing, many thinkers from the political Right, Center, and Left have proposed the idea of universal basic income (UBI) to ensure minimum income security for all. The proponents have included Milton Friedman, Richard Nixon, Dr. Martin Luther King Jr., Barack Obama in his Nelson Mandela Lecture in Johannesburg in July, and many others. Needless to say, a discussion of UBI would require a separate session here, but let me simply refer you to a recent New York Times book review by former Secretary of Labor Robert Reich, entitled “What If the Government Gave Everyone a Paycheck?” in which Reich discusses two of several recent books on this subject.92

In my view, one reason such a progressive idea will not come to pass is that the current generation of FDR’s “blindly selfish men” will not permit it. Indeed, one of them presently is the occupant of the White House. On why not, see Joseph Stiglitz’s review on page 1 of the August 26, 2018, New York Times Book Review of Anand Giridharadas’s new book, Winners Take All: The Elite Charade of Changing the World. I would be remiss if I didn’t plug the new book of my Pulitzer Prize–winning friend Steve Pearlstein, due out September 25, entitled Can American Capitalism Survive? Why Greed Is Not Good, Opportunity Is Not Equal, and Fairness Won’t Make Us Poor.

92. See The New York Times Book Review, July 9, 2018, p. 1.

Finally, if I needed a closing line, it would be “Be prepared. Tsunamis sink almost all the boats.”

Appendix

The following videos, except for number 12, are all available on YouTube.

1. Meet SAM, the Bricklaying Robot (National Science Foundation)
2. SpotMini Autonomous Navigation (Boston Dynamics)
3. The Robots That Milk Cows (The Wall Street Journal)
4. BMW Car Factory: How It’s Made (GommeBlog.it)
5. Robotic Surgery Demonstration Using Da Vinci Surgical System
6. Robots: Top 10 Most Amazing Advanced Robots That Will Change Your World
7. The Most Advanced Robots (UniversTechnology)
8. 10 Amazing Robots That Really Exist (Mad Lab)
9. 8 Advanced Robot Animals You Need to See (TechZone)
10. Five Robots That Are Changing Everything (BBC News)
11. Massive Robot Dance (Guinness World Records)
12. Do You Trust This Computer? (2018 documentary)
