Milken Institute Review
First Quarter 2016
A Journal of Economic Policy

The Battle for Cleveland (page 36)

The Milken Institute Review • First Quarter 2016, Volume 18, Number 1

The Milken Institute: Michael Milken, Chairman; Michael L. Klowden, President and CEO

publisher: Conrad Kiechel
editor in chief: Peter Passell
art director: Joannah Ralston, Insight Design (www.insightdesignvt.com)
managing editor: Larry Yu

advisory board: Robert J. Barro, Jagdish Bhagwati, George J. Borjas, Georges de Menil, Daniel J. Dudek, Claudia D. Goldin, Robert Hahn, Robert E. Litan, Burton G. Malkiel, Van Doorn Ooms, Paul R. Portney, Stephen Ross, Richard Sandor, Isabel Sawhill, Morton O. Schapiro, John B. Shoven, Robert Solow

ISSN 1523-4282. Copyright 2016, The Milken Institute, Santa Monica, California.

The Milken Institute Review is published quarterly by the Milken Institute to encourage discussion of current issues of public policy relating to economic growth, job creation and capital formation. Topics and authors are selected to represent a diversity of views. The opinions expressed are solely those of the authors and do not necessarily represent the views of the Institute.

The Milken Institute’s mission is to improve lives around the world by advancing innovative economic and policy solutions that create jobs, widen access to capital and enhance health. Requests for additional copies should be sent directly to: The Milken Institute, The Milken Institute Review, 1250 Fourth Street, Second Floor, Santa Monica, CA 90401-1353. Telephone: 310-570-4600; fax: 310-570-4627; info@milkeninstitute.org; www.milkeninstitute.org

WE’LL SEE YOU IN 2016: Los Angeles, May 1-4, 2016
milkeninstitute.org | @milkeninstitute | #MIGlobal

Cover: Hugh Syme


contents

2 from the ceo
3 editor’s note
5 charticle: Asian surprise. by William H. Frey
8 trends: Water, water everywhere … by Lawrence Fisher
15 ethiopia’s long, hard road: Growth is getting tougher. by Robert Looney
26 restoring fiscal democracy: Dead policymakers rule. by Eugene Steuerle
36 saving airline deregulation: Ryanair to the rescue? by Kenneth Button
48 billionaires and growth: Depends how you make it. by Sutirtha Bagchi and Jan Svejnar
56 financing high-risk medical research: Financial engineering can save lives. by Melissa Stevens
66 who pays for free parking? Auto tyranny? by Eren Inci
75 book excerpt: The Rise and Fall of American Growth. Robert Gordon explains why Silicon Valley won’t bail us out.
95 institute news
96 lists


from the ceo

Self-driving cars … virtual reality headsets … interactive gaming. So many cutting-edge technologies are linked to the cascade of innovation that powers the California economy – and benefits the world. Although the Milken Institute has offices in Washington, Singapore and now London, we are proudly rooted in California. It’s not just geographic happenstance (or the great weather). We like to think that the Golden State’s openness to ideas and change is part of the DNA we bring to the issues we work on. For instance, an Institute meeting years ago to brainstorm new approaches to spurring investment in the developing world included experts from the seemingly unlikely arena of film finance. Their fresh perspective pushed the group in novel – and productive – directions that would have otherwise eluded it. It’s just that kind of mash-up in tune with California thinking that unlocks creativity across all our endeavors.

Our work on California issues takes place through our California Center, which both nurtures research and convenes groups to share ideas. Our analysts have detailed how changes in state government policy could further spur the R&D activity that fuels the California economy, while a separate study probed reforms needed to make the state’s own procurement of technology more efficient. Our work on the issue of open data directly affected state legislation in 2015. And our research on the design and magnitude of financial incentives for the film industry helped frame the issues and convince state policymakers of the value of retaining California’s leadership in this industry.

We know our research is amplified when we share it with those who can transform ideas into action. To this end, our California-focused meetings run the gamut from informal get-togethers with policymakers and other stakeholders to our annual California Summit, where several hundred decision makers from government, business, academia and the media help shape the California Center’s agenda for the coming year. Meanwhile, in our partnership with public radio powerhouse KPCC, the Center has created programming delving into the crucial question of whether the ballooning costs of education, housing and skilled labor are stunting the continuing Californian dream.

The tide of innovation coming from the Golden State is not slowing – in fact, it appears to be accelerating. Moreover, the state seems to be expanding its leadership role in taking on a host of policy challenges facing the world. We look forward to being a part of this grand adventure.

Michael Klowden, CEO and President


editor’s note

JG, our ever-curious correspondent from Passadumkeag, Maine, wonders why California grows more rice than Louisiana if the Golden State is really so short on water. Good question (as usual), JG. Some curmudgeons claim it’s because farmers pay pennies on the dollar for the water since they called dibs on the stuff – and because they’ve never met a legislature that couldn’t be persuaded to honor the water grab. I prefer to think of it as a case of the good ol’ fashioned American can-do spirit in action. It isn’t easy, after all, to grow a tropical monsoon crop in the desert. And besides, if they didn’t grow rice in California, we’d have to import it from India, Thailand, Vietnam, Pakistan, Brazil, Cambodia, Uruguay or Guyana. And then where would we be?

But don’t bend your brain over this one, JG. Check out our truly terrific lineup in this quarter’s issue.

Speaking of water, Larry Fisher, a former New York Times business reporter, examines California’s flirtation with desalination, a technology long dismissed as somewhere between “iffy and Hail Mary,” he writes. “There’s no question that desalination plants are costly and have some impact on the environment, but that has been true of every water supply project since the Roman aqueducts.” The overriding objective, he suggests, should be a redesign of California’s water supply to manage large and growing risks of drought: “While it’s tempting to look to a tech fix like reverse-osmosis desalination as the answer, the real solution is a broad portfolio that includes conservation, reuse and smarter use.”

Melissa Stevens, executive director of the Institute’s Center for Strategic Philanthropy, outlines a plan for leveraging philanthropic dollars to manage risk in the development of life-saving drugs. “The FasterCures Ventures model brings together investor classes with different interests and aligns disparate risk-reward ratios for each to achieve both competitive private returns and high social returns,” she explains. “The idea is to mix and match three types of investors – market investors, pharmaceutical companies and venture philanthropists – all of which are stakeholders in developing new treatments.”

Ken Button, an economist at George Mason University, explains that the task of airline deregulation is not yet finished. “On one hand,” he notes, “the giant legacy carriers that stumbled through the first decades of deregulation are finally covering their operating costs and apparently are better positioned to confront future economic storms. On the other, the consolidation that has made this possible is opening the door to anticompetitive behavior that could undermine consumers’ gains.

“That’s where the removal of the last glaring vestige of economic regulation of airlines – the denial of ‘cabotage’ rights to foreign carriers that want to compete directly in the U.S. domestic airline market – fits in.”

Sutirtha Bagchi (Villanova) and Jan Svejnar (Columbia) ask whether the extraordinary rise in extreme wealth in recent decades stimulates or undermines economic growth. “Billionaires who make their money through hard work, innovation (or, for that matter, luck) don’t have much impact on average productivity,” they conclude. “But those who make their money through political connections tend to reduce productivity because they typically prosper by virtue of monopoly power that distorts resource allocation,” and perhaps more important, “[because] they have a strong interest in using their influence to retard innovation by potential competitors.”

Bob Looney, an economist at the Naval Postgraduate School in California, takes a hard look at Ethiopia’s ballyhooed economic miracle. “Ethiopia’s government credits its impressive growth to the adoption of the ‘developmental state’ model of the East Asian tigers,” he writes. “This, on its face, is suspect, since development economists have long been skeptical that an East Asian strategy could be sustained in the teeth of ethnic heterogeneity, widespread corruption and rent-seeking, and woefully deficient governance.”

Gene Steuerle, an economist at the Urban Institute in Washington, explains how long-dead policymakers haunt the federal budget. “Ever less is invested in our future, even as programs designed decades ago balloon in size and lose focus on the problems they were intended to solve,” he writes. “It has significantly weakened recent presidents and Congresses, and it will completely hobble future ones unless we confront the fiscal legacy that locks us into decline.”

Eren Inci, an economist at Sabanci University in Istanbul, examines the consequences of free street parking on the quality of urban life. Free is not free, he points out: “One way or another, someone always pays, often indirectly, in the form of higher prices for something else. Moreover, the enormous amounts of land or structures needed for parking almost guarantee that mispricing parking spaces will have substantial consequences on economic efficiency and societal welfare.”

Wait! Don’t touch that dial. This issue also includes an excerpt from Robert Gordon’s new book, The Rise and Fall of American Growth, as well as demographer Bill Frey’s analysis of Asian migration to the United States and a surprising glimpse at global savings patterns by yours truly.

—Peter Passell


charticle

by william h. frey

For many Americans the term “immigration” in recent decades has been synonymous with hordes pouring across the Mexican border. But immigration in general – and especially from Mexico – came to a screeching halt when the job market collapsed in the Great Recession: the foreign-born population of the United States actually fell from 2007 to 2008.

New Census numbers show immigration to be picking up again, but this time, Hispanics are no longer in the driver’s seat. Between 2013 and 2014, the foreign-born population increased by over one million for the first time since 2006, with Asian migrants accounting for over half that gain. In fact, Asian gains exceeded Latino gains in each of the years between 2010 and 2014. Not coincidentally, this period also showed an uptick in the proportion of immigrants who graduated college, with arrivals from India, China and the Philippines leading the pack.

Asian-born Americans are heavily concentrated in California and New York, but Texas has shown the second-greatest rise since 2010. Other tech-savvy states, including Virginia and Massachusetts, are also moving up the list.

Mexico remains the origin of the largest number of U.S. foreign-born, outnumbering those from China or India five-fold. But in the post-recession surge, Mexico ranked just 62nd among all gaining countries, in part a consequence of slower upticks in jobs previously taken by Mexicans and of improved prospects at home. When, one must wonder, will the yahooo rhetoric on immigration adjust to this new reality?

BILL FREY is a senior fellow at both the Milken Institute and the Brookings Institution, and author of Diversity Explosion: How New Racial Demographics Are Remaking America.



[Chart: Net gains in foreign-born population, 2000-2014, in thousands. source: U.S. Census Bureau American Community Survey data 2000-2009 and 2010-2014, with the 2009-2010 value drawn from estimates by the Pew Research Center]

[Map: Asian foreign-born population by state, 2014, showing minority percentage share of state population and actual population]

largest national foreign-born populations, 2014

1. Mexico: 11.7 million
2. China: 2.5 million
3. India: 2.2 million
4. Philippines: 1.9 million
5. El Salvador: 1.3 million

[Chart: Countries with greatest foreign-born gains, 2010-2014 (number of people): El Salvador, Dominican Republic, Philippines, China and India]


trends

by lawrence fisher

There is a genre of survival story in which desperate shipwrecked sailors are reduced to drinking seawater. In the fifth year of an historic drought, the driest and hottest period ever measured in the state, California certainly qualifies as desperate. So perhaps it’s not surprising that San Diego County is about to bring online the largest seawater desalination plant in the country and that a slew of other desal projects in California are in the pipeline.

The conventional wisdom about desalination has run from iffy to Hail Mary. Building a desal plant is a big, multiyear undertaking subject to all the political hurdles, legal challenges and cost overruns common to major infrastructure projects. Removing the salt from the water uses a lot of energy, which will certainly cost a lot, and, depending on the energy source, may contribute to the global climate change that, ironically, has exacerbated California’s water predicament. Moreover, sucking in seawater and disposing of the concentrated brine left from producing potable water may threaten marine life. Indeed, many environmentalists loathe desalination with a passion previously reserved for dams, strip mines and nuclear power.

In its simplest form, desalination has been around since ancient times, when sailors would boil seawater and collect the water vapor as it condensed. In fact, such thermal desalination is still viable where there is a virtually free source of waste heat, like the boilers of power plants and aircraft carriers. But most big desal projects today use a process called “reverse osmosis,” which involves pumping seawater at great pressure through a slightly porous membrane; it’s the same technology used in making de-alcoholized wine. And, as with many technologies, reverse osmosis’ efficiency has grown as engineers have improved the process. Energy consumption – and therefore cost – has declined considerably in recent years, making it a more viable alternative in coastal areas where the fresh water supply is not reliable.

Desalination is a major source of drinking water in the Middle East (at least in the driest, richest parts); Israel gets 55 percent of its urban water from desal, and about one-quarter of its water supply nationwide. Drought-prone Australia has also been a major adopter of desal – though five of its six plants are currently mothballed (perhaps prematurely) because the rains returned after an awful drought.

The United States has lagged in adopting desal, in no small part because much of the country is blessed with rivers and lakes, gigantic aquifers and ample rainfall. Desal also has a troubled history here, most notably with the plant in Tampa Bay, Fla. (the nation’s largest), which opened five years late, cost $40 million more than expected and was unable to supply the full 25 million gallons a day originally promised. A smaller plant in Santa Barbara, Calif., was mothballed in 1992, just months after it went live, when a previous drought ended. It is now being restarted.

LAWRENCE FISHER writes about business for The New York Times and other publications.

a last resort?

Jason Burnett, the mayor of Carmel-by-the-Sea, is about the last person you would expect to find stumping for desalination. A son of two marine biologists, he is a scion of the Packard family (as in Hewlett-Packard), whose fortune built the Monterey Bay Aquarium and finances countless other ocean health initiatives. He considers himself a lifelong environmentalist, pledged to reducing global warming. But he’s like those guys in the lifeboat. “We’re in a pretty desperate situation,” Burnett said. “We’ve reviewed a huge number of options and concluded desalination has to be part of the mix, in spite of the economic and environmental downside.”

The Monterey Peninsula, which includes Monterey, Carmel and Pacific Grove, has long sourced most of its water from the Carmel River, but discovered a few years ago that it had no permit to do so. The state, moreover, wouldn’t grandfather continuing use because the river is a spawning ground for endangered steelhead trout. The governments of these deep-pocketed communities explored every conceivable alternative, including harnessing icebergs and towing them down the coast, and filling huge balloons with water up north and floating them south. Desalination was the only viable one.

“We already conserve as well as or better than anyone in the state,” Burnett said, noting that the peninsula’s 100,000 residents use an average of just 55 gallons per day per household. A 2011 study estimated that the average California household used more than 360 gallons (though this figure has fallen during the drought).

Burnett ticked off the drawbacks. “You have to figure out how you’re getting the source water; you have to figure out what to do with the brine. It uses a lot of energy. Even on an operating basis, if someone gave you a large desalination plant for free, it would not be economical to run” under ordinary conditions. Including plant amortization, Monterey’s water will cost about $2,000 an acre-foot (about 326,000 gallons), ten times the national average.

Burnett said Monterey was going to great lengths to mitigate the effects on marine life. Instead of gulping seawater directly, which tends to kill fish, eggs and larvae, the plant will take in 2,000 gallons a minute through pipes buried 200 feet under the ocean floor; the sand above will serve as a filter and diffuser, thereby only minimally disrupting marine reproduction. The doubly salty brine left over from desalination will be diluted by mixing it with a nearby storm sewer outfall and using pressurized diffusers to spread the saline discharge, which would otherwise tend to linger on the sea floor to the detriment of the local ecology. This feature alone will add $10 million to the project’s cost, Burnett said.

The Monterey desal plant is expected to take about two years to build, with at least another year after that to complete the permitting process. So it may not begin delivering water to customers for several years.
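To put Burnett’s figure in per-gallon terms (my arithmetic, derived from the numbers quoted above, not figures from the article):

\[
\frac{\$2{,}000 \text{ per acre-foot}}{326{,}000 \text{ gallons per acre-foot}} \approx \$0.006 \text{ per gallon,}
\]

which, given the ten-to-one ratio Burnett cites, also implies a national average of roughly \$200 per acre-foot.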

planning for the next drought

Four hundred miles to the south in Carlsbad, Calif., a plant yielding more desalinated water than all other U.S. desal facilities combined will come online next year. It has been 12 years in the making, born in reaction not to the current drought, but to the previous one – or the one before that.

“Our project in Carlsbad sort of grew out of the concerns of the 1990s, but it’s really about more than a given drought event,” explained Carlos Riva, chief executive of Poseidon Water, a water infrastructure specialist based in Boston that is building the plant. “San Diego County came to a decision to diversify its water supply; the county was 90 percent dependent on imported water. They wanted to gain more control over their water supply by sourcing it locally.”

Carlsbad will be the largest desalination plant in the Western Hemisphere, providing 10 million gallons of desalinated seawater per day when it first comes online and ultimately up to 50 million per day. (The largest in the world is, no surprise, in Saudi Arabia; that plant has the capacity to desalinate some 260 million gallons a day.) The $1 billion Carlsbad project will produce enough drinking water to serve 300,000 people, providing 7 percent of San Diego County’s total supply by 2020.

Though Poseidon Water is financing construction, water users will, of course, pay for the plant through increases in their water bills, expected to run to $5-7 a month. But that difference may narrow because the price of water currently imported from overtaxed rivers far to the north and east has been rising every year.

[Photo: Reverse osmosis filters in the Carlsbad plant]

“Just to permit something like this and get it to the point where you can raise the financing was quite a complicated and difficult exercise,” Riva said. “Because this was the first large-scale desalination plant coming down the pike, the regulators didn’t have any experience to draw on. Plus, we had a lot of resistance from the environmental movement; there were 12 legal challenges.”

All of those challenges were beaten back. Poseidon is now in the late stages of developing a similar-sized plant 60 miles up the coast in Huntington Beach, which is scheduled to be operational by 2018. Riva said that he believed there could be several more like it along the California coast and that the start-to-finish time could drop to as little as three years.

“The depiction of what desalination does to the environment has been painted as very severe,” Riva noted. “But when people see how low impact and low visibility the Carlsbad facility is, opinion will change.”

counting the costs

There’s really no question that desalination plants are costly and have some adverse impact on the environment. But that has been true of every water supply project since the Roman aqueducts. The tortuous history of southern California development is inextricably tied to water imported from elsewhere, as brilliantly depicted in Roman Polanski’s 1974 neo-noir film, Chinatown. Bringing water south from the Sacramento, San Joaquin and Colorado rivers consumes a great deal of energy, loses a great deal of water to evaporation and has turned major waterways into concrete-lined canals. But, then, pumping water from aquifers and damming rivers to create reservoirs generate their own unpleasant externalities.

It takes 3,460 kilowatt-hours of energy per acre-foot to push water from northern California to San Diego County. The Carlsbad project will use about 30 percent more energy to desalinate ocean water and deliver it to households, according to Poseidon’s report to the Department of Water Resources. That’s an acceptable difference, especially in light of the diminishing availability of water as northern California suffers from its own drought and demand for Colorado River water outstrips supply.

[Photo: Carlsbad desalination plant under construction, September 2015]

Perhaps a better benchmark is Israel, which is not awash in cheap energy and has a level of environmental awareness similar to that of the United States. “We sell desalinated seawater to the government in Israel for around $900 per acre-foot,” said Udi Tirosh, business development manager for IDE Technologies, the Israeli company that is supplying the reverse osmosis system for Carlsbad. “That’s less than you pay in California for just transferring the water from north to south. When desal is there, you don’t have to wait for rain or snowpack on the mountain. During the drought 10 years ago, … San Diego just did not get water.”

Tirosh said the environmental impact of desalination had been modest. “We are measuring the brine disposal effect on sea fauna and flora. … We see that there is an influence, but it is confined to 300 to 400 feet around the head. And even within the region, it’s not a death zone. It’s more an area where species that can survive the higher concentration are flourishing and other species avoid it.”

The environmental groups that fought the Carlsbad plant did not offer a lot of specifics to flesh out their objections. Sara Aminzadeh, the executive director of the California Coastkeeper Alliance, told The New Yorker: “It’s just not a good option from a cost and energy standpoint. Desalination may seem like a panacea, but it’s the worst deal out there.”

To be fair, there aren’t a lot of specifics to be had. Other than the studies that the desal companies have done themselves, there is not much data on the environmental effects of the process. “While we have a number of desalination plants around the world,” explained Heather Cooley, co-director of the Pacific Institute, a research organization focusing on sustainability, “this is not an issue that’s been well monitored. Some are old; some are in regions that don’t have the environmental sensitivities we have in California. Also, the impacts are site-specific.”

The source of the energy used to desalinate seawater is another concern. Burning climate-warming fossil fuels to generate freshwater amounts to robbing Peter to pay Paul. In Australia, the climate change dilemma was eased by purchasing renewable energy offsets equivalent to the power consumed by the desalination plants. “So far, none of the plants in California have proposed to do that,” Cooley said.
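In rough numbers, the energy comparison earlier in this section works out as follows (my arithmetic, based on the figures quoted from Poseidon’s report, not a number stated in the article):

\[
E_{\text{Carlsbad}} \approx 1.3 \times 3{,}460 \text{ kWh per acre-foot} \approx 4{,}500 \text{ kWh per acre-foot,}
\]

that is, desalinating and delivering an acre-foot at Carlsbad would take roughly 1,000 kilowatt-hours more than importing one from northern California.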

consider the alternative

In the digital era, humanity has grown accustomed to exponential gains in productivity even as costs fall. But, unfortunately, water does not obey the magical laws of silicon chips. The power consumption of desalination has indeed come down; reverse osmosis now uses about one-fourth the energy it required in the 1970s. But no one sees equivalent gains on the horizon. There’s no getting around the reality that pumping seawater through a membrane fine enough to separate the salt requires a lot of energy.

“The misleading thing about desalination is people see it as a silver bullet,” pointed out David Henderson, managing director of XPV Capital, a Toronto-based private equity firm specializing in water technology. “Water is heavy, and this is a downside to desal that people don’t think about. If you’re in Los Angeles, right on the ocean, it makes some kind of sense. But most of your water isn’t consumed there; it’s inland, in farm regions. It’s eight pounds to the gallon; whenever you move something very heavy, it takes a lot of energy, and it becomes very expensive.”

Henderson said reuse makes more sense in many locations. “If you can recycle that same drop of water on the same location, over and over again, that is your big winner,” he said. “People talk [disparagingly] about toilet-to-tap, but the way we’ve urbanized the world, everything is toilet-to-tap. Everybody is downstream.”

a sensible mix

One of the more irritating habits of good financial advisors is that they almost always recommend a diversified portfolio of low-cost mutual funds, when what you really want is to buy some hot tech stock that will make you financially independent. Investing in our water future is a bit like that. While it’s tempting to look to a tech fix like reverse-osmosis desalination as the answer, the real solution is a broad portfolio that includes conservation, reuse and smarter use as well as desal.

In the 20th century, we didn’t have to think about water, explained Charles Fishman in his book The Big Thirst. But no more. Even the famously moist Pacific Northwest had to ration water last summer because an uncommonly warm winter decimated the mountain snowpack. And though early rains this fall have caused flooding in southern California, no one is predicting the return of easy water.

“Smart water managers now think in portfolio terms,” Fishman said. “The old fat way was, ‘I’ve got reservoirs, I’m fine,’ or ‘I’ve got a river, that’s my source of water, I’m fine.’ There’s no water manager thinking that way now.”

Desalination has earned a place in a diversified water supply portfolio. In the annals of seawater desalination, the plant in Perth, Australia, is often held up as the right way, in contrast to the wrong way at the troubled Tampa Bay plant. The Perth facility was finished in less than two years, and its power consumption was offset by the utility’s investment in renewable energy. It supplies about 40 million gallons a day, 17 percent of the water used in a territory bigger than Western Europe. But the most interesting aspect of Perth’s success is that a second installation of comparable size was never constructed. “In the course of building that plant in two years, they actually taught their customers to save 40 million gallons a day,” Fishman said. “They eliminated the need for Phase 2 through conservation.”

Which makes for a sturdy maxim: the best desalination plant is the one you never need to build. California has already done much to reduce water use through a mixture of voluntary efforts, warnings, fines, and what The New York Times called “a culture of nagging.” But it should be kept in mind that the cheapest gallon of water is almost always on the demand side of the equation, where water-sparing technology, cost-based market pricing and more of that California-style nagging could make a huge difference.


Ethiopia’s Long, Hard Road

by robert looney


For decades, the economies of sub-Saharan Africa were, to put it euphemistically, laggards. With rampant corruption, ethnic strife and miserable governance the rule rather than the exception, Africa’s prospects were routinely written off as hopeless well into the 1990s.

But the continent has since surprised just about everybody. Six of the world’s 10 most rapidly growing national economies over the past decade have been in Africa. Blanket optimism has replaced blanket pessimism. The Economist, which as recently as 2000 labeled Africa “hopeless,” made a 180-degree turn in 2013, rebranding the continent as “hopeful Africa” and predicting that “the next 10 years will be even better.” The magazine signed on to the now common narrative that broad-based growth would follow from a virtuous circle of accelerating investment, stepped-up regional trade and the rise of a middle class with more disposable income and – probably more important – a major stake in efficient government and political stability.

But putting a magnifying glass to the big picture reveals some less-than-rosy details in some of the most celebrated success stories. Africa’s collective growth has not resulted from countries across the continent adopting a development model based on the bold economic liberalization and improved governance advocated by the West. Rather, authoritarian governments have assumed the growth leadership role – notably in Ethiopia, which ranks a dismal 145th, just behind Venezuela and ahead of the Central African Republic, according to the latest Cato/Fraser/Naumann Human Freedom Index.

Ethiopia’s government credits its impressive growth to the adoption of the developmental-state model of the East Asian tigers. This, on its face, is suspect, since development economists have long been skeptical that an East Asian strategy could be sustained in the teeth of ethnic heterogeneity, widespread corruption and rent-seeking, and woefully deficient governance. But Ethiopia is claiming the role of the not-so-little engine that could, an example for the rest of Africa to follow. And at least by one key metric – growth in per capita income – it has a case. The big question is whether a country that is still one of the poorest on the planet, and that is facing growing resistance to its antidemocratic practices, can build on the strategy’s initial successes and meet its goal of becoming a middle-income country by the end of another decade.

ROBERT LOONEY teaches economics at the Naval Postgraduate School in Monterey, California.

the developmental state

For the past quarter-century, the World Bank and the IMF – and most Western-influenced development specialists – have united behind the so-called Washington Consensus prescription for development. It’s a now familiar mix of macro-stabilization, free markets and openness to trade and investment, leavened by a moderate role for government as provider of public goods and enforcer of the rule of law. And, indeed, a more or less consistent application of the formula has been credited with awakening many of the economies of sub-Saharan Africa.

But Ethiopia has been marching to the beat of a different drummer. The idea that an African economy could prosper using a variation on the model that brought much of East Asia out of poverty in the 1970s and 1980s began as the master’s thesis of Meles Zenawi, later to be Ethiopia’s prime minister, when he was at Erasmus University in Rotterdam.

At the core of the Asian model is a highly professional group of technocrats who administer it. While officially attached to a ministry, the technocrats maintain a high degree of autonomy. They cultivate close links with (but incur no obligations to) the business elites. The Asian developmental state model allows the technocrats to implement strategies that leverage private initiative without wholly unleashing it. Meanwhile, at least in its early years, the state’s political legitimacy lies in its success in delivering the goods rather than in faithfully responding to the voices of the people. Indeed, it took decades for South Korea and Taiwan to make the transition to democracy.

Meles argued that conditions in Africa were not conducive to government that channeled a rational growth policy through private enterprise. Without direct government intervention, corruption would remain pervasive. Rent-seeking – using market power or political pull to cream off unearned profits – would dominate, diverting investment into wasteful projects and cash into the bank accounts of the elite. Thus, rather than let Adam Smith’s invisible hand work its magic, Meles believed that the state itself must capture the fruits of growth and plow them back into productivity-enhancing capital.

Following this philosophy, his variant of the developmental state can be characterized as authoritarian developmentalism – that is, prioritizing state-directed growth and investment over private initiative. Think of it as top-down rather than bottom-up development. In contrast to the outward-oriented East Asian tigers, Meles looked inward, initially giving agriculture priority as the bedrock for growth. To this end, Ethiopia subsidized farming and protected it by clinging to an overvalued currency at the expense of export competitiveness. Incidentally, both these deviations from the Asian brand harkened back to the early 1990s roots of Meles’s party as a rural-based, peasant-backed liberation movement.


Following Meles’s death in 2012 at the age of 57, his dominant, long-ruling Ethiopian People’s Revolutionary Democratic Front emphasized there would be no change in economic policy. And, indeed, under Meles’s successor, Hailemariam Desalegn, the party has remained committed to the top-down model. The government has stuck by a five-year plan adopted in 2010 with a goal of a phenomenal 11 percent GDP growth annually, which has almost been met.



A cursory comparison of the Ethiopian economy’s overall progress in the 10 years after the establishment of its developmental state (2005-15) and the 10 years before (1994-2004) suggests some fairly impressive results. In the earlier period, Ethiopia’s economy grew at an average rate of 5.5 percent – substantial by African standards and often attributed to Meles’s initial success in achieving political stability. But growth vaulted to an average rate of 10.4 percent for the decade following the adoption of Meles’s development strategy. Much of this acceleration can be attributed to stepped-up investment as a share of GDP, which increased from 19 percent to 28 percent. All of the net increase was financed internally, as the gross national savings rate rose from 15 percent to 24 percent.

There are glimmers here, too, of a payoff from the state’s choice to push growth through government intervention rather than through market deregulation. The portion of the government budget directed at helping the poor doubled. And the one-two punch of rapid economic growth and a tilt toward the poor made a very real difference in the living standard of low-income Ethiopians. According to the United Nations, the portion of the population classified as poor fell by one-third between 2004-5 and 2012-13. Arguably most striking, the government found the will and a way to avert widespread famine during the 2011 drought, breaking an age-old pattern of mass suffering and death whenever the rains failed. Meanwhile, Ethiopia’s scores on the UN’s Human Development Index, which reflects life expectancy, per capita income and schooling, improved on average by 3.5 percent annually between 2005 and 2013.

underperformance, ethnic tensions and corruption

That, however, is only half – the brighter half – of the story. While these broad indicators suggest a successful introduction of the developmental state in Ethiopia, closer analysis reveals serious underlying weaknesses, beginning with the flagship agricultural sector.

The foundation of the ruling party’s development strategy was accelerating growth through agricultural-development-led industrialization. The policy rationale offered by the ruling party was that government investment would increase agricultural productivity (and thus enhance food security), while stimulating the development of new industries, such as food processing. That, in turn, would create internal demand for increasingly sophisticated, locally produced agroindustrial products. From a comparative-advantage perspective, it made intuitive sense for a labor-rich, capital-poor country like Ethiopia to initially focus on labor-intensive agriculture, making use of fertilizer, improved seeds and irrigation rather than mechanization to increase yields. The plan was also consistent with the ruling party’s goal of broad-based rural smallholder prosperity as the anchor of political stability.

All land had been nationalized during the catastrophic upheavals following the death of Emperor Haile Selassie, wiping out the landholding elite and redistributing user rights (but not ownership) to smallholders (i.e., holders of small, cultivatable parcels). Until recently, the ruling party largely maintained this arrangement, arguing that land privatization and consolidation of ownership would add to food insecurity and displace the peasantry. Thus, agricultural-development-led industrialization advanced the multiple goals of food self-sufficiency, equitable growth and smallholder security.

But agriculture has not performed as anticipated. In particular, there have been only limited improvements in productivity, and the marketable surplus remains small because of a shortage of land, failure to increase the supply of improved inputs and volatile crop markets that make investment risky for smallholders. One consequence is high grain prices, which erode living standards. Another is the anemic production of crops such as edible oils that serve as industrial inputs, which implies the failure of smallholder farming to contribute to industrialization through forward linkages. Ironically, foreign food aid has allowed the government to maintain agricultural-development-led industrialization along with ideological resistance to privatization and farm consolidation. The latter would both raise productivity and facilitate growth of crops with export potential and the desired forward linkages to industrialization.

In Ethiopia, with its vulnerability to severe drought, food security remains a prime concern. And while the government did manage to prevent disaster in 2011, the UN’s Food and Agriculture Organization still classifies over seven million Ethiopians (from a total population of just under 100 million) as “chronically food insecure.” The Economist Intelligence Unit’s 2015 Global Food Security Index ranks Ethiopia 86th of 109 countries – up from 100th in 2012, but hardly much to brag about.

In an acknowledgment of these cracks in the model’s facade, the government has modified its agricultural policy in recent years to attract new investment without truly abandoning agricultural-development-led industrialization. Large foreign agribusinesses are now permitted to lease big acreage, with the government actively encouraging export-oriented investments. The idea is to generate more foreign exchange, which, in turn, can help the country achieve greater food security and industrialization by another route. This shift to an export-oriented agricultural strategy has resulted in reduced support for locally generated investment and smallholder-based food production as the anchors of the economy. While pragmatic, it is not without significant risks. Volatile international food prices and foreign-exchange earnings increase the risk of food shortages beyond the government’s control. Furthermore, the shift away from smallholdings raises the specter of renewed political instability.

Soon after coming to power in 1991, the ruling party adopted a constitution that structured the country as an ethnic federation. The goal of this unusual political structure is to maintain Ethiopian unity by protecting the equality of the country’s multiple ethnic groups. In principle, the constitution guarantees each major ethnic group the right to self-determination, and even secession from the federation if so desired. Until 2009, each of Ethiopia’s nine ethnically delineated regions independently allotted land according to its own criteria. That’s in keeping with regions’ constitutional rights. However, Ethiopian policymakers became concerned over the size and terms of many land transfers in peripheral regions. They claimed that less than one-fifth of the 8,000 foreign and domestic applicants who were approved by regional governments between 1996 and 2008 had begun project implementation, and many parcels were being used for unapproved purposes.

In 2009, the central government cracked down, establishing the Agricultural Investment Support Directorate, which has co-opted responsibility for land leases to foreigners as well as leases of parcels over 5,000 hectares (a bit more than 12,000 acres) to domestic investors. And not surprisingly, the regions are unhappy – an ominous reality in a country in which regional governments represent individual ethnic groups. To justify the encroachment on the use rights of the locals, the government is arguing that the large parcels of land thus far leased in outlying ethnic regions were previously empty. But while population density in these regions is comparatively low (80 people per square mile, about the density of West Virginia), it is inaccurate to describe the uncultivated areas as empty. Much of the land in question constitutes traditional village or pastoral commons, and the government’s leases and follow-on agribusinesses have disrupted rural life. In one documented case, some 34,000 members of the Suri tribe lost their grazing lands to a Malaysian group setting up a palm oil plantation. The result was to impoverish the group and to ignite violence between them and other local tribes. If such conflicts spread to other regions, they could further undermine not only the government’s efforts to implement its agriculture-led industrialization strategy, but also the stability of the country’s delicately balanced political system.

Other developments are similarly discouraging. Corruption, which Meles used as a rationale for strict government oversight of the economy, remains rampant. According to the World Bank’s Worldwide Governance Indicators, Ethiopia scored in the 38th percentile for control of corruption in 2004, a dramatic improvement over 1996, when Ethiopia made it no further than the eighth percentile. By 2009, however, the country had regressed to the 26th percentile and did not return to its 2004 high until 2013. Patronage and rent-seeking, the main forms of corruption in Ethiopia, are particularly disruptive, because they undermine the credibility of government technocrats who have been given considerable discretion in planning and administering the developmental-state strategy.

Corruption is especially problematic in manufacturing, where members and supporters of the ruling-party-dominated government have been granted protection from domestic and foreign competition. A key goal of the five-year Growth and Transformation Plan adopted in 2010 was to expand the share of manufacturing in GDP from 13 percent to 19 percent. Instead, the sector’s share actually contracted in the decade following the adoption of the developmental-state model. In comparison, after 10 years of high growth and reforms under a developmental-state strategy, the share of manufacturing was 33 percent in Taiwan (1977), 24 percent in South Korea (1979), 21 percent in Thailand (1979) and 18 percent in Vietnam (2009).

Rent-seeking in Ethiopia’s manufacturing sector has become so pervasive and its effects so corrosive that the government recently set up special industrial zones in areas well removed from the capital in an effort to breathe a little competition into the sector. Thanks to the underperformance of manufacturing, the government has been unable to leverage the sector’s role in the economy. Consequently, little progress has been made in efforts to diversify exports, with the share of manufacturing exports virtually unchanged in the 10 years the developmental state has been in place.



Meanwhile, the share of exports in GDP has been halved during the developmental-state period, to just 7 percent. At the same time, imports have increased to 15 percent from 9 percent. Accordingly, the current-account deficit rose sharply during the decade of the developmental state. And that deficit must be financed with a combination of foreign capital and remittances from Ethiopian expatriates. Equally troubling from the perspective of Ethiopia’s state-driven development model, government revenues as a percentage of GDP fell by almost a point across the decade, limiting the government’s ability to finance projects directly.

The growth of manufacturing and the planned shift toward export-oriented activities are also impeded by the low quality of Ethiopia’s infrastructure. According to World Bank metrics, it is improving, but slowly: the Bank’s Logistics Performance Index score rose to 2.17 from 1.88 (5 is the highest) between 2007 and 2014, implying that Ethiopia’s infrastructure is still inferior to that of other countries in the region and in Ethiopia’s income group. One reason is that much of Ethiopia’s economy, including telecommunications, banking and insurance, power, transport and most tourism, remains off-limits to foreign investors. The government has resisted advice to liberalize these sectors. And it will most likely continue to do so as long as members of the ruling party’s old guard hold important positions in economic management. With infrastructure creation dependent on funding from modest (and stagnant) government revenues, it is unlikely that Ethiopia’s transport, power and communications will improve sufficiently to make the country’s exports competitive anytime soon.

On top of corruption and deficiencies in infrastructure, Ethiopia’s manufacturing sector suffers from a critical lack of entrepreneurial activity. While the East Asian tigers were initially handicapped by similar deficiencies, the problem was addressed through programs that both encouraged entrepreneurs and provided financial support to allow them to take on more risk. To be sure, East Asia could look to traditions of entrepreneurship, while Ethiopia can’t. But the government apparently isn’t helping. Ethiopia ranks 134th among 142 countries (1 being best) on the Opportunity and Entrepreneurship measure of the 2014 Legatum Prosperity Index. South Africa and Botswana, African states that have employed more democratic variants of the developmental-state strategy, rank 44th and 72nd, respectively.



what next?

On the one hand, Ethiopia’s growth strategy has produced a decade-long surge that would have been highly unlikely had development been left to free-market forces alone. On the other, problems now looming in agriculture will likely plague the government and economy for years to come. As the controversy surrounding agricultural-development-led industrialization illustrates, there is a basic incompatibility between centralized control of the economy and Ethiopia’s federal system of government.

Another key issue involves the source of Ethiopia’s recent decade of ebullient growth. In much of Africa and Latin America, growth during this period was largely driven by the boom in commodity prices. Since this was not the case in Ethiopia, defenders of the country’s economic policies argue that the credit should go to the developmental state model. They ignore, however, that there is a rich history of rapid early-stage autarkic growth financed and commanded by authoritarian governments. Think of the Soviet Union and China, which seemed to be the economic wonders of the world for the first few decades after World War II. In both of these cases, it turned out that the delay in switching to market-driven systems slowed the maturation of the economy.

The more immediate problem is Ethiopia’s dependence on government investment, which is now the third highest in the world (measured as a portion of GDP). Government revenues, the primary source of funds for investment, are lagging. Clearly, Addis Ababa cannot keep up its pace of expenditures without either borrowing from the domestic banking system (at the risk of inflation) or incurring a large external debt (at the risk of higher interest costs and damage to the country’s credit rating). And without rapidly growing investment, the phenomenal growth of the past 10 years is likely unsustainable.

A little perspective is needed here. The Ethiopian economy was a wreck when Meles embarked on his most-excellent adventure in authoritarian developmentalism. And, in spite of a decade of rapid growth, it’s still suffering, even by the modest standards of East Africa. Measured in terms of purchasing power, per capita income is just $1,500. Moreover, while Ethiopia has plainly made real progress in the quality of life as measured by the UN’s Human Development Index, it still ranks a miserable 173rd in the world on the index – a reflection of just how deep its socioeconomic problems run. For example, one baby in 20 still dies in infancy, and less than half the population can read.

Will Ethiopia’s government, founded on developmental capitalism and now committed to it by reason of both ideology and the interests of its ruling class, be able to change paths before it is overwhelmed by ethnic division? It seems improbable. But, then, in 2005, nobody expected Ethiopia to become the economic darling of the New Africa.



restoring fiscal democracy

by eugene steuerle

Like just about everyone else, I find it hard to avoid the conclusion that the American body politic is suffering acute malaise. Disengagement from public affairs has morphed into pervasive pessimism that the country can rise to today’s great challenges ranging from climate change to wage stagnation. It’s easy to lay this pessimism at the door of a feckless political class that marches to the tunes of powerful interest groups or uncompromising ideological fervor. Yet a more fundamental disease underlies much of what ails Washington today: how tightly yesterday’s policymakers have bound the hands of today’s policymakers, preventing us from determining our own destiny.

This reality has done great damage, deterring the larger-scale reforms needed simply to adapt to modern circumstances. At a time of extraordinary need (and opportunity), it has turned the federal budget into that of a declining nation. Ever less is invested in our future, even as programs designed decades ago balloon in size and lose focus on the problems they were intended to solve. It has significantly weakened recent presidents and Congresses, and it will completely hobble future ones unless we confront the fiscal legacy that locks us into decline.


how did we get here?

For several decades both political parties have resolved the conflict over competing budget goals by kicking the can down the road, allowing rapid growth in popular programs without paying for them. Unusually high structural deficits are but one symptom. More ominously, Washington has locked the government into promises that, unless broken, would absorb all fiscal resources that almost any conceivable rate of economic growth could generate.

The aging of the population explains only a small part of these phenomena. Less well understood, entitlement programs are becoming more expensive per beneficiary. Social Security and Medicare benefits are scheduled to grow from about $1 million for an average couple turning 65 today to about $2 million (adjusted for inflation) for today’s 35-year-old couples when they retire. And frustratingly, a big piece of the difference will end up covering higher compensation for health-care providers and drug companies, rather than enhanced services. Meanwhile, tax subsidies metastasize. Case in point: tax benefits for homeowners grow without review as every generation builds larger and better-equipped houses.

EUGENE STEUERLE, a former deputy assistant secretary of the Treasury who is now a fellow at the Urban Institute, is the author of Dead Men Ruling: How to Restore Fiscal Freedom and Rescue Our Future.

As promises of eternal program growth became enshrined in law and occupied an ever-larger share of budget turf, permanent tax cuts were enacted, adding further to projected future debt and compounding interest costs. This latter effort was championed by Jude Wanniski, a Wall Street Journal editorial writer who argued in the mid-1970s that the Republican Party needed to become the Santa Claus of tax reduction to compete with the Democrats' Santa Claus of spending. Wanniski finessed the issue of the resulting deficits by declaring there wouldn't be any – that tax cuts would pay for themselves. That never worked out. But Santa Claus was just too attractive a character for political donors to forsake in favor of the Grinch. Indeed, the Republican Party built its popular support around the idea that, no matter what, taxes couldn't be raised.

One way to gauge how much the fiscal world has changed is through what I call the "fiscal democracy index." This measures the share of federal revenues left after subtracting spending committed by permanent programs that don't require ongoing Congressional approval. Note that the index is politically neutral in the sense that it records the impact of both automatic spending growth and tax cuts.

For the first time in U.S. history, the fiscal democracy index turned negative in 2009. That is, every dollar of revenue had been committed before the new Congress first walked through the Capitol doors. The index turned positive again, thanks to higher revenues and declining safety-net payments as employment recovered after the end of the Great Recession. But the automatic growth of entitlement programs over the next decade, along with rising interest costs, will push it back toward zero.
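To make the mechanics concrete, here is a minimal sketch of the index calculation in code. The budget figures are hypothetical, chosen only to illustrate the arithmetic, not drawn from the actual Steuerle-Roper series:

```python
# Fiscal democracy index: the share of federal revenues left over after
# subtracting spending precommitted by permanent programs (mandatory
# outlays plus net interest). Figures are hypothetical, in billions.
def fiscal_democracy_index(revenues, mandatory_outlays, net_interest):
    remaining = revenues - mandatory_outlays - net_interest
    return 100 * remaining / revenues  # percent of revenues still uncommitted

print(round(fiscal_democracy_index(3_000, 2_400, 250), 1))  # 11.7: some room left
print(round(fiscal_democracy_index(2_100, 2_000, 200), 1))  # -4.8: every revenue dollar (and more) already committed
```

A negative reading, as first recorded in 2009, means precommitted spending exceeds total revenues before Congress casts a single vote.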

the cure

In theory, it would be simple to prevent yesterday's policymakers from dictating tomorrow's policies. All that would be needed is a rule limiting the share of future government spending (including the rising interest costs generated by deficit-enhancing tax cuts) that could be preordained by Congress. Current and future legislators could still enact new spending and tax cuts – and, in a growing economy, there would be more to give away in most periods. But with less legislated from the past, Congress would be forced to vote anew on the extent to which rising Social Security benefits should be allowed to crowd out funds for, say, education and medical research, or on whether tax expenditures on million-dollar condos should take priority over a credit for first-time homebuyers.

[Figure: Steuerle-Roper Index of Fiscal Democracy – percentage of federal receipts remaining after mandatory and interest spending, 1962-2022. Source: C. Eugene Steuerle and Caleb Quakenbush, 2015, Washington, DC: Urban Institute]

Not to mince words: restoring a flexible budget process – one focused on what we can do with new revenues rather than on what we can't do when large future deficits are already legally preordained – would do far more than "solve" the debt problem. It would fundamentally change budget priorities. How? Freed from commitments made decades ago, today's policymakers could, and I believe would, shift much more focus toward 21st-century priorities: increasing individual opportunity, promoting childhood development and socioeconomic mobility, supporting workers, repairing public infrastructure and spurring innovation through subsidized R&D.

OBAMA ADMINISTRATION PROPOSED BUDGETS
DIFFERENCE BETWEEN 2015 AND 2025 (BILLIONS OF 2015 $)

                                                                  CHANGE
Total Outlays ...................................................... $1,333
  Discretionary ..................................................... (102)
    Defense .......................................................... (55)
    Nondefense ....................................................... (46)
  Mandatory ......................................................... 1,015
    Social Security .................................................. 403
    Medicare ......................................................... 258
    Other Health ..................................................... 202
    Other (including allowances and offsetting receipts) ............ 152
  Net Interest ....................................................... 419
Revenue ............................................................ 1,348
Deficit .............................................................. (15)

Of course, wishing won't make it happen. To get from here to there, we must reframe the debate surrounding how we spend and tax. I'm not suggesting zero-based budgeting, in which every expenditure would be up for grabs on a regular basis. That might be nice, but it is plainly a bridge too far. Instead, we should seek the type of balance we had throughout most of our political history, when many programs routinely came to an end and continuing programs, since they did not grow automatically, had to compete more evenly with new ones for a share of the rising revenues that accompanied economic growth. Our current practice – measuring program cuts or tax increases only against an unsustainable baseline of promises, then attacking anyone who reneges on those promises – explains much about today's political gridlock. Politicians who get to give away money cooperate a lot better than those who must take it back.

opportunity or austerity?

Contemporary doomsayers have their fair share of calamities in the making to choose from.

Growth rates are lower, while indebtedness accumulated through peacetime budget deficits has never been higher. The last recession was more severe than any in recent memory, while asset markets seem as prone as ever to bubble-like instability. Retirement trust funds – state and local funds as well as Social Security – owe more than they are slated to collect. Spending on health care is nearly double that of other advanced economies, yet outcomes are no better.

But a little perspective is in order. We're collectively rich, and considerably richer than most other advanced industrialized democracies: in terms of purchasing power, per capita income is roughly one-third higher in the United States than in the European Union. We enjoy longer life expectancies, better health care, dirt-cheap telecommunications and more travel than in the past. (Yes, airplane seats are cramped, but don't pretend you're longing to pay twice as much to fly at half the speed in roomy propeller planes.) Economic growth has slowed considerably since the post-World War II boom decades. But even if the rate stuck at 2 percent indefinitely, the economy would still double in about 35 years. Of course, whether the growth rate ends up lower or higher than 2 percent depends in part on how we set our spending and tax priorities.
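A quick check on that doubling claim – the familiar rule of 70 (70/2 ≈ 35 years) is the same compound-growth calculation, rounded off:

```python
import math

# Years for real GDP to double at constant growth rate g: solve (1+g)**t = 2.
def doubling_time(g):
    return math.log(2) / math.log(1 + g)

print(round(doubling_time(0.02), 1))  # 35.0 years at 2 percent growth
```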


Now consider how federal spending would change, assuming modest economic growth and no alteration in priorities. The projections in the table above reflect what President Obama proposed for 2025 relative to 2015. Spending would rise by over $1.3 trillion in real terms, or more than $7,000 annually per household. Yet virtually all the extra money would go to sustain existing policies, including rising interest on the debt. Moreover, that $1.3 trillion doesn't count several hundred billion more in tax expenditures under current law.

What is obvious from this budget is that the increased spending goes almost entirely for retirement, health care and interest on the debt. More spending on children, infrastructure, wage subsidies for the poor – almost anything we might think of as an investment in productivity and mobility – is reduced in relative importance. And keep in mind that the Republican budgets that have passed Congress look remarkably similar in terms of spending priorities, even though overall spending would fall with their steeper cuts in domestic discretionary spending and repeal of the Affordable Care Act. This, in a phrase, is budgeting for a declining society.

the disease vs. the deficit

Confusion about what's wrong tends to increase when people focus on the types of near-term deficit-cutting agreements we've had in recent decades. Yes, it is only common sense that debt can't rise forever relative to GDP. But simply capping the debt-to-income ratio by limiting deficit spending to no more than the rate of growth of GDP would be much too rigid to serve the economy's long-term interests – or, for that matter, to stand up under the pressure of events. That's because structural deficits arising from past long-term commitments are a very different animal from short-term deficits that arise from financing, say, a military intervention or an increase in recession-driven spending.

For most of our history, long-term deficit projections simply didn't exist. Budget offices didn't create 10- and 75-year budgets based on current-law commitments, in part because few commitments were made far into the future and budget priorities were expected to be very different a decade down the road. Equally important, tax revenues were expected to grow far faster than the long-term spending commitments that did exist. Hence, the future promised only huge surpluses under current law, even if there was a deficit in the current year. This is illustrated in the top panel of the figure below. Therefore, in earlier days, the principal cure for "deficits" was simply to moderate the pace of new giveaways, whether they took the form of spending increases or tax cuts.

[Figure, top panel: the traditional budget. Revenues increase with economic growth, while spending increases only with new legislation – so short-term deficits give way to long-run surpluses. (Real dollars over 20 future years; author's illustration.)]

[Figure, bottom panel: today's budget. Spending is scheduled to grow automatically faster than revenues, producing widening long-run deficits. (Real dollars over 20 future years; author's illustration.)]


The bottom panel of the figure illustrates today's budget struggles. Here, again, we start with a current deficit. But because automatic spending growth now exceeds automatic revenue growth, no short-term belt-tightening can make much difference. We simply fall back into our old problems.

The only time in U.S. history in which debt was higher relative to GDP than it is today was at the end of World War II. But the debt-to-GDP ratio subsequently underwent a long (and welcome) period of decline, reaching a nadir in 1974. This period provides a stark contrast to the current one. Then, spending was largely discretionary, and future revenues, partly because of wartime taxes that were never completely rescinded, always exceeded growth in precommitted spending. Economics textbooks of that era emphasized the risk of fiscal drag, in which budget surpluses would hold economic growth below capacity – not the risk of long-term spending commitments outrunning revenues. Hence, today's political conflict over how and when to contain chronic deficits has no parallel in the earlier period – though the way the economy outgrew the postwar debt-to-GDP ratio is often cited as reason to be sanguine. The big difference, of course, is that debt then declined relative to GDP without elected officials having to renege on long-term commitments to constituents. Even Lyndon Johnson's surtax to pay for the Vietnam War, one of the few counterexamples, lasted only two years – and the same enactment contained permanent tax cuts that eventually more than offset the temporary tax increases.

Today, liberals fear that the projected growth of entitlements, particularly retirement and health benefits, will be slowed by some compromise. Conservatives, for their part, worry that tax rates will be raised. And just about every politician lacking a very safe seat fears the voters' wrath should beloved tax subsidies – notably the increasingly costly home mortgage interest deduction and tax breaks for employer-paid health insurance – be challenged.

The problem of having to take back promised benefits was not a major issue before the 1980s, and even in the 1990s and early 2000s the budget did not face the additional burden of the baby boomers' retirement. Throughout most of our peacetime history, the main issue has been how to divvy up new domestic giveaways financed by revenue growth – enhanced, in post-World War II periods, by declining defense spending relative to GDP.

Today, Washington seems trapped in the proverbial headlights, unable to slow the ongoing squeeze on available revenues by automatic spending increases. Meanwhile, politicians are lulled by happy talk about unleashing economic growth and growing our way out of the debt burden the way the country did in the 1950s and 1960s.

reaping what we sowed

The explosion in automatic spending is creating multiple interconnected problems that are often treated as if they were separate. We've already alluded to the potential instability linked to the highest debt burden ever accrued in peacetime. We don't need to agree on the tipping point at which markets would react to recognize the risk.

But even if the budget could be made sustainable by a perpetual series of reductions in discretionary spending that kept debt in check, we would be charting a path of broad economic decline as the federal government withdrew from the tasks of expanding innovation and infrastructure investment, even as it turned a blind eye to issues of mobility and opportunity. Look at what's already happening to federal spending on education, at how our infrastructure no longer stands up to international comparisons, or at how the portion of GDP devoted to children's programs is scheduled to shrink year after year.


Another consequence of the squeeze on discretionary spending is the limited room it leaves to respond to the next recession with fiscal stimulus. Witness how the shadow of the debts accumulated during the Great Recession has left Europe unwilling to risk stimulus to treat ongoing tepid growth, or to provide financial incentives for productivity-enhancing reforms that would leave the continent less vulnerable.

A last, less-noted consequence: government agencies tend to become demoralized, even moribund, when budgets are no longer built on contemporary assessments of society's needs. Moreover, Congress has added to the malaise by simultaneously reducing staff and increasing their workloads.

The remedy all comes back to restoring to economic policymaking what I've been calling fiscal democracy. The issue is less whether Congress should end specific programs – that's hard to do, even when programs are obsolete or simply merit a lower priority. But forcing existing programs to compete with new ones, while letting new economic and political priorities determine where growth should occur, would effectively level the playing field.

toward a less perfect union

It's tough to get anything done in the fiercely partisan atmosphere of today's Washington. And as if that weren't bad enough, any new initiative (including tax cuts) faces long odds in competing with automatically growing spending programs created decades ago. As a result, younger Americans have effectively been disenfranchised by long-gone policymakers. It should not be surprising, then, when voters who are treated like adolescents act like adolescents – supporting fringe candidates who reflect their general frustrations with Washington, or ignoring politics altogether.

The alienating consequences of the fiscal squeeze are further compounded by the pressure on elected officials to renege on past promises simply to get the budget on some sort of sustainable path, much less to do anything new. The public responds to optimism – think of Ronald Reagan's "It's morning in America" – and to debate over positive new directions and ideas, and politicians naturally want to play to that tendency by operating on the giveaway side of the balance sheet. But policymakers' primary job today (and tomorrow) is to plot ways to renege on old commitments – something not appreciated at the ballot box by those whose promises are pared. The watchword of the day: if you lead, you lose.

Remember how George H.W. Bush lost the White House after what, by historical standards, was a relatively modest budget compromise? How President Obama dares to ask for tax increases only from the very rich? How presidential candidates in all recent campaigns bend over backward to explain that they would require nothing more of the middle class, defined as the 98 percent of us who are neither hedge fund magnates nor welfare supplicants?

It can't be emphasized enough that the focus of both fiscal conservatives and fiscal liberals often amounts to tunnel vision. The former typically aim to manage the budget in ways that merely prevent the debt burden from reaching unsustainable levels, while the latter want to stimulate the economy whenever growth appears too low. Even with success on either front, however, we'd still be left with a budget for a declining economy – one that neglects the needs of the young, leaves inadequate resources to respond to the next emergency or new economic opportunity and denies the nation an open debate about fiscal priorities. Only restoring fiscal freedom makes attainable both of their goals and a future-oriented budget.

budgetary jujitsu

In itself, alas, diagnosing the illness doesn't bring us much closer to a cure. As noted above, the issue of automatic spending and its consequences needs a radical reframing. This reframing – really a restoration of the old frame – involves a budget process for spending (including spending hidden in tax programs) that emphasizes changes from current levels rather than changes from current law. If current law has health spending growing at 4 percent annually and education spending declining at 2 percent, the fight shouldn't be over whether health spending should be "cut" to, say, 3 percent growth or education spending "increased" to minus 1 percent. Rather, we should measure what is really being cut or increased from current levels, counting both what is newly enacted and what is retained from past automatic spending. This would create a more level playing field for competing priorities, give elected officials room for political compromise and hold them responsible for changes they passively accept as well as for new enactments.

There is no fix for the "problem" that government budgets can never satisfy everyone. But the current dominant view – that we are too poor to please anyone, or that austerity must dominate budgetmaking in the 21st century, or that we must meet "obligations" that dead politicians managed to secure for every interest group that could afford a K Street lobbyist – is nonsense. Fiscal democracy – democracy, period – demands a fresh start.


Saving Airline Deregulation

by Kenneth Button

Sea changes in public policy often originate in surprising places. In 1975, Sen. Ted Kennedy, proud heir to New Deal traditions of big government, nonetheless called for sweeping economic deregulation of America’s tightly regulated airline industry. “Regulators all too often encourage or approve unreasonably high prices, inadequate service and anticompetitive behavior,” he explained. “The cost of this regulation is always passed on to the consumer. And that cost is astronomical.”

The seminal victories – changes to air-cargo regulations in 1977 and the Airline Deregulation Act of 1978 – exposed both domestic cargo and passenger service to the bracing winds of competition. Thereafter, presidential administrations from both political parties pursued the Open Skies initiative, which over the next decade or so also largely deregulated routes and fares for international air service to the United States.

Airlines responded with a host of innovations. Among the most important: hub-and-spoke route systems that vastly increased the options for flying to and from midsized cities, and yield-management pricing that filled seats that would otherwise have flown empty while keeping fares low for travelers who could book in advance.

But lurking beneath the surface of this story of deregulation – one that has dramatically increased productivity and left consumers far better off – is the question of whether the success is sustainable. On one hand, the giant legacy carriers that stumbled through the first decades of deregulation earning little or no return on their vast investment are finally covering at least their operating costs and are apparently better positioned to confront future economic storms. On the other, the consolidation that has made this possible has made it all the harder for new entrants, opening the door to anticompetitive behavior on the part of the incumbents that could undermine consumers' gains.

Indeed, it's possible to glimpse a future in which airlines grow bolder in flexing their muscles (political as well as economic) to inhibit competition. The blocking of upstart discounter Norwegian Air International's application to serve transatlantic routes using Irish registration, and the campaign to limit flights to the United States by the hard-charging Persian Gulf airlines (Etihad, Emirates and Qatar), are early warnings that U.S. carriers are increasingly inclined to battle for anticompetitive privilege. That's where the removal of the last glaring vestige of economic regulation of airlines in America – the denial of so-called cabotage rights to foreign carriers that want to compete directly in the domestic airline market – fits in. But that puts the cart before the horse. First, some background on how we got from there to here.

KENNETH BUTTON is a professor in the School of Policy, Government and International Affairs at George Mason University, where he teaches economics.

the best and worst of times

The U.S. domestic airline market is the largest in the world: in 2014, passengers flew almost 600 billion miles – a fivefold increase since 1975. But while the market is still growing, the pace of growth has slowed. Most travelers are largely served by hub-and-spoke networks, with flights feeding from regional airports into hubs, at which point most passengers transfer to other planes to continue on to their final destinations. The payoff: a route like Portland, Ore., to Savannah, Ga., may serve only a handful of travelers a week, but those travelers can depart from Portland on any of a dozen-plus daily flights leaving between 5:30 a.m. and 5 p.m., linking through to their destination via Atlanta, Newark, Washington, Chicago or Dallas.

The current stability of the airline market is new. U.S. carriers were conspicuously unsuccessful in recovering their full costs in the three decades following deregulation. Indeed, since deregulation, the operating margins of U.S. airlines have averaged around zero (yes, zero) percent. The consequences, of course, have been predictable, with many carriers disappearing through bankruptcy or merger.

Some of the tribulations of the industry were clearly self-inflicted, with carriers often focusing on size rather than profitability – a common problem across many transportation sectors, where management often seems focused on growth and technological improvements to the neglect of the bottom line. Government has been part of the problem, too. Washington has yet to replace the decades-outmoded air-navigation system, which often upsets the delicate dance of hub-and-spoke connections. And it has added to travelers' (and carriers') miseries with its flat-footed approaches to everything from terrorism prevention to tarmac delays.

But this picture – at least the parts within the airlines' control – has been changing rapidly in recent years, with U.S. carriers consolidating to reap greater economies of scale without butting heads with rivals. They have also moved to unbundle their services, pricing baggage carriage, food and drink, and early boarding separately, and adding semi-premium seating for those willing to pay more than least-common-denominator coach fares but less than stratospheric business-class fares. Travelers may grumble about this unbundling, waxing nostalgic for the days when two checked bags and a chicken dinner came with every ticket. It does, however, suggest greater appreciation on the part of the carriers of how markets work.

By the same token, the carriers' success in filling flights by means of sophisticated yield-management techniques is lamented by passengers – especially those stuck in middle seats. But apart from the fact that it is simply good business practice not to waste capacity, there is the oft-neglected social gain of less pollution and less crowding of inadequate airport and air-traffic infrastructure as the average load factor rose from 51 percent in 1971 to 85 percent in the first half of 2015.

Not surprisingly, the more businesslike behavior of the U.S. carriers has increased profits and, to a degree, stabilized them. It's hard to quibble with the notion that in the long run, investors must make a competitive market return on capital to keep the industry healthy; somebody, after all, has to pay to replace aging aircraft. And by one indirect measure of profit adequacy – cash flow over and above current operating expenses – there isn't much of a case to be made that the industry is coining money. Operating margins in 2014 averaged 8.6 percent, less than the 10 to 12 percent that industry analysts argue is adequate to attract capital.


But reasonably stable profits have come with increasing industry concentration, which raises some red flags. And the case for concern is worth a close look.

The remaining legacy carriers – American, United and Delta – along with Southwest accounted for 70 percent of the domestic passenger-miles flown in 2014. But on many routes there is clearly a high degree of competition – especially when travelers have the option of moving through a hub in addition to direct service, as in our Portland-to-Savannah example. In this sense, it was probably misguided for the Justice Department to have initially blocked the merger of US Airways and American Airlines on the grounds that it would leave only three network carriers in the market, asserting that "competition from Southwest, JetBlue or other airlines would not be sufficient to prevent the anticompetitive consequences of the merger." The department also contended that "Southwest, the only major non-network airline, and the other smaller carriers have … business models that differ significantly from the legacy airlines." While it is true that Southwest's modus operandi initially differed from those of American, United and Delta, it was certainly competitive with them – the term "the Southwest effect" is not without content. For that matter, Southwest's business model has gradually converged with those of the other big carriers. It now includes a frequent-flier program that makes regular patrons think twice before flying another carrier, offers de facto different classes of service and routes some 40 percent of its passengers through hubs.

As is so often the case, however, the devil is in the details. One could argue that overall industry concentration is largely beside the point, because competition on individual origin-destination city pairs determines the degree of market power confronting individual fliers. In some contexts, the long-term prospects for the level of competition that best serves consumers in the domestic airline market are problematic.

One important outcome of the recent wave of mergers and the general tightening of supply has been a decline in the number of flights offered. The deregulation of the 1970s led to an explosion in service, powered both by entry from new carriers and by the network magic offered by hub-and-spoke systems. But the forces driving growth in service tailed off and went into reverse in 2007. The largest 29 airports in the United States lost nearly 9 percent of their scheduled flights between 2007 and 2012. More ominously, small and medium-sized airports lost 21 percent and 26 percent, respectively.

This is not to say that most American travelers are now inadequately served. As a cursory glance at one of the online travel search engines makes clear, the hub-and-spoke systems of the carriers still ensure a high level of flight frequency and convenience for large and medium-sized markets. But the decline in flights is associated with a reduction in the number of carriers flying the typical spoke, raising the prospect that individual airlines will be able to exercise market power on more routes. And while the primary risk for most travelers from the decline in flight frequency is higher fares, passengers to and from smaller markets also risk major inconvenience.

That brings us back to the real subject of this article: cabotage.

what's in a word?

Cabotage rights for foreign airlines – allowing foreigners to provide purely domestic service – are hardly ever granted. Indeed, the only big exception is the European Union, which has created an open market of well over 400 million people. Airlines from member countries can establish themselves anywhere in the EU, provide service between any airports they wish and set fares at their own discretion. Thus, Air France can provide service between Munich and Berlin, while Lufthansa can fly from Marseille to Paris. More to the point, cabotage has opened the EU to innovation that has dramatically extended service to smaller markets, even as it lowered fares. For example, Ryanair (now the world's largest carrier of international passengers) flies between Trieste and Trapani, while easyJet (a British-based carrier) flies from Paris to Toulouse.

But in the United States, the prohibition on foreigners providing domestic service is almost total. Air France is not just barred from operating a flight between Los Angeles and Boston; it cannot sell a ticket between those cities even if the flight continues on to Paris. Nor, for that matter, could Air France buy a U.S. carrier that flies from Los Angeles to Boston. The law does allow for exceptions to those prohibitions, but don't hold your breath: the Department of Transportation can grant a foreign airline an exemption for up to 30 days, if and only if there is an emergency that can't be managed by domestic carriers. Sky-high fares and airline strikes, by the way, don't count as emergencies.

Airline cabotage prohibitions are sometimes defended on the same grounds as maritime cabotage prohibitions: national security. The reality is more prosaic. Domestic airlines and their unions don't want to compete with foreigners today any more than General Motors and the United Auto Workers did in the 1980s.

cabotage to the rescue?

Few economists – or for that matter, anyone else outside the airline industry – think cabotage restrictions promote the interests of consumers. Indeed, the remarkable success of the end of restrictions within Europe in terms of fares and flights – especially in smaller markets – is testament to the cost consumers in the United States bear for the lack of it.

One, of course, might ask why foreign carriers would be willing to fly to places their U.S. counterparts eschew. While it is quite possible that service over shorter, thinner routes could not be justified on economic grounds, there is no solid way of confirming that theory in a market in which foreign carriers are denied access. Indeed, the success of Ryanair, easyJet and a number of other startups in serving European cities avoided by the legacy carriers suggests that the burden of proof ought to be on the opponents of liberalization.

There are a number of ways in which cabotage restrictions could be relaxed, perhaps paving the way for more radical reform. One politically attractive option would be to allow foreign carriers to bid for publicly subsidized services that come under the Essential Air Service and Small Community Air Service Development Grant programs, on the premise that they are providing "emergency" services or fostering local economic growth.

The Essential Air Service program was created in 1978 as a sop to communities that (sometimes correctly) feared airlines would desert them once regulators could not require carriers to link them to the national transportation network. Under the program, the Department of Transportation offers subsidies in return for providing service to orphaned markets. Critics have long argued, however, that the need is not great – some subsidized airports are less than an hour's drive from unsubsidized ones – or that the subsidy is a giant waste of taxpayers' money because so few people make use of the flights.

The small-communities program – which can also involve financial assistance for marketing, additional personnel, studies and aircraft acquisitions, as well as flight subsidies – has been criticized for failing to achieve much. A study at MIT, for example, found that of the 115 such grants awarded between 2006 and 2011, fewer than 40 percent met their primary objectives.

Cabotage to the rescue? The catch here is that opening only the routes covered by the two subsidy programs wouldn't have much impact on the market. The air carriers that provide similar services in very small markets in Europe generally have high costs and receive concomitantly high subsidies. Widerøe's Flyveselskap, the main supplier of Norway's "social-service-obligation" flights, has costs per passenger-mile more than 13 times the average of Ryanair's and seven times Southwest's. Ryanair, a possible low-cost entrant, currently flies only Boeing 737-800 aircraft with 189 seats, while easyJet uses Airbus A319s and A320s with 156 and 180 seats, respectively – much larger than the 19-seat maximum requirement of the Essential Air Service program. These fast-on-their-wings carriers might find ways to adapt to small-scale service at reasonable cost, but it would be a stretch, and they would need motives to try.

Note, too, that the program's routes don't form networks on their own, but serve as independent links that fit none of the linear, hub-and-spoke or radial route models that create the prospect of commercial success for short-haul operations. Currently, the program's domestic carriers often treat such routes as appendages to other (potentially profitable) parts of their networks – a circumstance that would not apply to foreign airlines if they were not permitted to set up their own hubs.

Small-communities-program financing has, in some cases, been bolstered by the private sector through revenue guarantees, guaranteed minimum ticket purchases and other kinds of support. This may be seen as risk-sharing among the community, the federal government and the airline. But again, it is difficult to see how opening such routes to non-U.S. carriers, without added accommodations, would catalyze any significant change in the national market for air services.

Very small markets are not the only ones, though, that have been hurt by declining service during market consolidation. One major concern is cities that lost as much as two-thirds of their flights – notably Cleveland, Memphis, Pittsburgh, St. Louis and Milwaukee – when they were demoted from hubs to airports that must rely solely on the traffic they originate. Clifford Winston of the Brookings Institution suggests that these former hubs be opened to foreign carriers, eliminating cabotage restrictions between the hubs and any other city that has lost flights in the domestic consolidation. This would not only serve to succor these locations, but could also be seen as an experiment in finding out whether foreign carriers would enter the market, and with what consequences.

The difficulty here is that foreign airlines, even if they are more efficient than their U.S. counterparts, would face serious risks in restoring traffic to hubs abandoned by domestic carriers. Foreign carriers could make some use of these hubs as a source of international traffic. But most major foreign carriers already belong to global strategic alliances (Star Alliance, SkyTeam, Oneworld) that funnel traffic from U.S. network carriers.

There is also a more fundamental issue here. A selective end to cabotage restrictions would again give regulators authority to decide who flies where and how often. Besides potentially being the thin end of the wedge toward more extensive re-regulation, managing a partially regulated route structure efficiently would require considerable knowledge of the network economics of the industry. When an airline dehubs an airport, it seldom withdraws all service, and other domestic carriers often take up some of the vacated slots. Thus the regulator responsible for drawing borders between the domains of domestic airlines and potential foreign entrants would have to make judgments about which airports fall into each category. The airline industry is nothing if not dynamic and, judging by pre-1978 experience, regulatory agencies are not fleet enough to create a reasonable facsimile of an efficient outcome. That's assuming they would even be motivated to try; political interference would seem inevitable.

A legally separate, but operationally entwined, issue is the matter of foreign investment in U.S. airlines. The general rule since 1938 has been that foreigners may own up to 49.9 percent of the stock in an airline, but their voting shares must be capped at 25 percent. Moreover, the CEO and at least two-thirds of the directors must be U.S. citizens. The rationale (as with cabotage restrictions) is nominally military: domestic airlines must commit to providing airlift capacity on a few days' notice. But the need to maintain a formal Civil Reserve Air Fleet does not hold up well under scrutiny. The Government Accountability Office points out that the fleet has seldom been activated – the first time was in 1990, as part of Desert Storm – and when it was, only a very small part of the available supplementary fleet was drafted. It's uncertain, moreover, whether allowing greater foreign investment in the U.S. commercial fleet would improve or weaken reserve capacity. It would probably enhance the commercial viability of the U.S. passenger airline industry by opening more sources of finance and allowing for more integrated services with foreign alliance partners.

shock therapy?

The various paths of gradualism outlined above would be difficult to manage and would not contribute much to sustaining competition in the partially protected domestic market for air travel. There are also lessons to be learned from elsewhere about the potential pitfalls of gradualism in liberalizing the market. In particular, one can compare the admittedly destabilizing impact of the 1978 Airline Deregulation Act, which brought benefits for U.S. travelers almost immediately, with the tortuous road to competition (and consumer satisfaction) in Europe, which was fought every step of the way by the incumbent flag carriers.

A complete removal of cabotage restrictions (retaining, of course, safety and security regulations) would avoid all the game playing that would inevitably complicate gradualism. The exact effects are impossible to predict. If they weren't, we would not need free markets to achieve efficient resource allocation. My educated guess: most travelers would benefit as more choice was introduced, though the incremental impact would not be as great as that of the 1978 reforms.

The simplest change – allowing foreign long-haul carriers to feed their international routes with hub-and-spoke systems that could haul domestic traffic as well – would make a difference, but probably not all that much difference. For one thing, much of the potential gain is already being exploited by domestic carriers that feed traffic to their foreign-alliance partners. One would expect some use of "consecutive cabotage" – as in carrying passengers from, say, Los Angeles to Philadelphia on flights terminating in London – especially by foreign long-haul carriers not allied with U.S. legacy airlines. But even here, long-haul routes require types of aircraft not conducive to shorter-haul operations, and scheduling is challenging, given time-zone differences. Changing the gauge of aircraft between long-haul and domestic services would require significant investment, with the foreign carrier either setting up a domestic network or acquiring one from an existing operator. In the longer term, however, consecutive cabotage might act as some check on the market power of incumbent U.S. airlines.

The benefits from an end to cabotage restrictions would most likely be greatest in smaller markets from which U.S. carriers have largely withdrawn. This would take time, since new entrants would need to define their strategies. But there is little question that opening the door to well-funded carriers with decades of experience in diverse markets would eventually pay off for consumers.

the not-quite-impossible dream

Can one imagine circumstances in which Congress would reform cabotage rules and make the changes sufficiently broad to ensure significant market entry? Maybe.

For one thing, the decline in service and rise in fares since consolidation – notably in small markets that can ill afford the loss – have generated anger among the public and unease among policy nerds about the long-term viability of airline competition. That discontent has yet to be mobilized around a specific proposal for change, but cabotage reform could be the catalyst. For another, increasing airline competition is one of the few issues on which the political left and right can agree, especially since it would cost the federal government nothing. Indeed, it could be linked to an end to existing subsidies for small markets.

Domestic airline interests would hardly give in easily. But one could imagine changes that would soften the blow – for example, linking an end to restrictions for EU carriers in the United States to an opening of the internal European market to U.S. carriers. The same could also be said about reciprocal agreements to allow cabotage in both the Canadian and U.S. markets. Moreover, it’s worth noting that U.S. carriers, which have worked long and hard to defend consolidation as critical to the health of airline competition, have lost the political and moral high ground in opposing yet more competition. Cabotage reform still seems like a long shot. But then, deregulation seemed a long shot when, nearly four decades ago, Ted Kennedy reached across the aisle (and into the regulatory agencies) to build a coalition strong enough to break the grip of airline regulation.

Billionaires and Growth

by Sutirtha Bagchi and Jan Svejnar

Rising inequality in income and wealth is certainly the preoccupation du jour in much of the world these days, as politicians and policymakers struggle to manage public discontent with widening gaps between rich and poor. In fact, while we know that inequality is soaring and we have pretty good ideas why, we know relatively little about its impact – in particular, the impact of inequality in wealth.

Our limited understanding of wealth inequality stems in part from the fact that it's much easier to define personal wealth (household assets minus debt) than to measure it. Few countries have solid statistics on household wealth that allow comparisons over time. Moreover, economic theory offers widely diverging conclusions on the links (if any) between inequality and economic performance.

On one hand, growing inequality has been hailed as a necessity if the engine of global economic convergence is to be sustained. On the other, it is seen as an acid eating away at the fabric of social cohesion – and, increasingly, as a drag on development.


We think it can be either – much depends on the dynamic driving the widening gap – and we have statistical evidence to back this up. But we get ahead of ourselves. First, an update on some basics.

the numbers

While some countries have good sources of data on household wealth (such as the Survey of Consumer Finances in the United States), few outside the industrialized countries of the Organisation for Economic Co-operation and Development have produced the rich microdata needed to understand the nature and consequences of wealth distribution. Even highly developed countries find it a vexing challenge to construct the historical measures of wealth needed to link changing wealth distribution to broader economic outcomes. That explains why some of the best economists out there, including Emmanuel Saez of the University of California (Berkeley) and Thomas Piketty of the School for Advanced Studies in the Social Sciences, France's leading research institution in the social sciences, are devoting considerable energy to studying historical wealth distributions in a variety of countries, along with their societal implications.

Moreover, recent initiatives aim to improve the availability and cross-boundary comparability of household-wealth data. These include the Luxembourg Wealth Study, the Eurosystem Household Finance and Consumption Survey and the Global Wealth Reports and Databooks.

SUTIRTHA BAGCHI and JAN SVEJNAR teach economics at Villanova University and Columbia University, respectively. A comprehensive, technical version of their analysis was published in the August 2015 issue of the Journal of Comparative Economics.

While these efforts to create data on wealth distribution are relatively recent, social scientists from a variety of disciplines have been mulling over the import of changing wealth inequality for quite a long time. Indeed, over the past several decades, a voluminous theoretical and empirical literature has emerged examining the relationship between inequality and economic growth.

On the theoretical front, some academics have proposed that inequality can propel growth because the rich are able to save the most and thus can afford to finance productivity-enhancing investment – or, coming at it from the other side, that the taxes and redistributive programs typically used to reduce inequality undermine incentives to work, save and innovate. By contrast, some theorists have argued that inequality is an inherent drag on growth because it prevents the poor from obtaining the education and training needed to spur productivity, or because very high levels of inequality lead to political instability and a lack of respect for private-property rights and the rule of law.

On the empirical front, researchers have examined the effects of inequality on growth using a variety of approaches, over various time periods and for different sets of countries. However, given the paucity of data on wealth inequality, the empirical literature has tended to use data on income inequality as a proxy, because the numbers are more readily available.

Before discussing our contributions in the area of wealth inequality and economic growth, it is worth pointing out two fundamental issues that, while obvious on reflection, are often neglected in technical analyses and popular discourse. The first is that typical living standards are more a product of per capita income than of the degree of inequality. Indeed, while Pakistan can lay claim to being one of the most egalitarian economies in the world outside Europe – only Kazakhstan has a lower Gini coefficient for family income – it's a rare person who would argue that living standards are higher there than in, say, Chile, one of the least egalitarian nations outside Africa.

People also fail to adequately take account of differences in the sources of inequality. Consider Indonesia and Britain. Although they appear similar on measures of income inequality (their Gini coefficients are 33 and 34, respectively), they differ on such dimensions as the role that competition – as opposed to back-scratching within the elite – plays in achieving economic success and determining the distribution of income and wealth.

the view from the top of the pyramid

In our own research, the first task was to create a measure of wealth inequality that could be applied to a large number of countries over time. To this end, we made use of Forbes magazine's annual list of billionaires. In 1982, Forbes began curating a list of the 400 richest Americans, called the Forbes 400; in 1987, it expanded that effort by compiling a list of billionaires from around the world. Although other news sources have started publishing the names and estimated wealth of billionaires in more recent years, the Forbes list offers the advantage of consistency over both time and a relatively large number of countries.

Using the Forbes data, we generated three measures of wealth inequality for each country for four years spaced four to six years apart – 1987, 1992, 1996 and 2002. We added up the wealth of all the billionaires in a given country in a given year and divided it by either the country's GDP, its physical capital stock or its population, thereby adjusting billionaire wealth for country size.
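The construction is simple enough to spell out in code. A minimal sketch follows; the country name and figures are made up for illustration, not taken from the Forbes data:

```python
# Hypothetical inputs, for illustration only: billionaire wealth and GDP
# in billions of dollars, capital stock in billions, population in millions.
billionaire_wealth = {"Freedonia": [12.0, 4.5, 2.1]}  # one entry per billionaire
gdp = {"Freedonia": 900.0}
capital_stock = {"Freedonia": 2_500.0}
population = {"Freedonia": 45.0}

def wealth_inequality_measures(country):
    """Total billionaire wealth, normalized three ways as described in the text."""
    total = sum(billionaire_wealth[country])
    return {
        "wealth_to_gdp": total / gdp[country],
        "wealth_to_capital_stock": total / capital_stock[country],
        "wealth_per_capita": total / population[country],
    }

print(wealth_inequality_measures("Freedonia"))
```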


The use of billionaire wealth alone as a proxy for wealth inequality may seem a bit arbitrary. And, of course, it is: we used it only because direct measures of wealth inequality for many countries over a period of 15 years simply aren't available. But happily, these measures do correlate well with the conventional (but far more limited) data on wealth inequality that exist for about 25 countries – in particular, the share of wealth held by the top decile of the population and the Gini coefficient of wealth.

Note, moreover, that focusing on the effects of concentration of wealth at the very top of the distribution pyramid is timely, because it is widely believed that this is where most of the change in distribution has taken place in the past few decades. Simply eyeballing the billionaire data confirms this assumption, though with less than perfect uniformity. Of the 23 countries that had billionaires on the Forbes lists in both 1987 and 2002, our measures of wealth inequality increased in 17 countries and fell in six over this period.

When we introduced these measures into a statistical model to estimate the impact of wealth inequality on economic growth, we found that, controlling for other influences, wealth inequality is negatively related to economic growth. In other words, the higher the proportion of billionaire wealth in a country, the slower that country's growth. (By the way, all three measures of billionaire wealth generate the same conclusion.) The negative impact is both statistically significant – that is, very unlikely to be caused by chance – and quite large. An increase of one standard deviation in the level of wealth inequality (a 3.7 percent increase) would cost a country about half a percentage point in the growth of real GDP per capita. That is a big impact, given that average real GDP per capita growth in the sample countries was in the neighborhood of 2 percent per year.

In contrast to the effect of wealth inequality, neither income inequality (as measured by the Gini coefficient) nor the rate of poverty (here, the portion of the population living on less than $2 a day) has much independent effect on the growth rate. On reflection, this finding fits current views of how development takes place. In countries with the most extraordinary rates of economic growth – for example, China – income inequality also grows rapidly, as the opportunities associated with specialized skills and entrepreneurial acumen expand at breakneck speed.

a tale of two billionaires

Things get really interesting when we differentiate the sources of billionaire wealth in our wealth-growth statistical model. We went through the lists of Forbes billionaires for the four years of our sample and placed each billionaire in one of two categories: those who acquired their wealth thanks to political connections and those who did not. This is harder to do than it might seem, because it is difficult to find billionaires who are not courted by politicians and who lack ties to the political elite. The question we asked, however, was whether the political connections existed before the individual became a billionaire, not whether the connections followed once the person had attained that status. And even where prior political connections existed, we asked whether there was any reason to believe they materially contributed to the individual becoming a billionaire.

In the end, we used a very conservative standard, classifying people as politically connected only when it was clear that their wealth was a direct product of political connections. We uncovered some recurring examples – for instance, we classified a large proportion of the billionaires who emerged in Indonesia during the autocratic regime of General Suharto as politically connected. During that exceptionally corrupt period, ties to the regime were indispensable for securing business licenses. And many of the billionaires on the Forbes list benefited from government aid, along with policies that put potential competitors at a strong disadvantage.

Russia's oligarchs provide another case in point. Russia did not have any billionaires on the Forbes list until the mid-1990s, when the country began to privatize giant state-owned enterprises, mostly in fuels, fuel distribution and hard-rock mining. A few employees and managers of these natural-resource enterprises who were in the right place at the right time (and knew the right people) were able to acquire control of these companies at bargain-basement prices. Many were individuals being rewarded for supporting Boris Yeltsin's re-election campaign in 1996; they became billionaires overnight by dint of their political connections. In the following decades, the crony capitalism promoted by Yeltsin produced spectacular wealth inequality and made the difficult task of creating a competitive market economy capable of balanced growth virtually impossible.

Our classification of billionaires into politically connected and unconnected is obviously somewhat subjective. But so are most measures of corruption and other qualitative phenomena that affect the pace of economic growth. As a check, we examined whether our measure of politically connected wealth inequality – defined as the sum of the wealth of all politically connected billionaires, normalized by GDP – correlated with well-known measures of corruption. We found that countries that are more corrupt according to the International Country Risk Guide of the University of Maryland are also countries in which a higher fraction of wealth is controlled by politically connected billionaires. Similarly, for the five countries with the highest level of politically connected wealth inequality in our sample (Malaysia, Colombia, Indonesia, Thailand and Mexico), the median ranking on Transparency International's Corruption Perceptions Index was 94th out of 174 countries. (The higher the number, the lower the ranking.) In contrast, for the six countries that had billionaires in every year of our sample yet had no politically connected billionaires in any year (Britain, Hong Kong, the Netherlands, Singapore, Sweden and Switzerland), the median ranking was a squeaky-clean eight.

the price of privilege
Now for the punch line. Remember that billionaire wealth taken in the aggregate has a negative impact on growth in our statistical-regression estimates. But when we split billionaire wealth into the two types, we find that it is the politically connected wealth that drives the earlier result – that is, politically connected wealth has a negative effect on growth, while politically unconnected wealth has no statistically significant impact on growth. In sum, wealth inequality that comes from political connections is responsible for nearly all the negative effect on economic growth that we had observed from wealth inequality overall.

This makes a lot of sense. Billionaires who make their money through hard work, innovation (or, for that matter, luck) don't have much impact on average productivity. But those who make their money through political connections tend to reduce productivity, both because they typically prosper by virtue of monopoly power that distorts resource allocation and because they have a strong interest in using their influence to retard innovation by potential competitors.

The Birla family of India provides a good example. A 1987 Forbes profile of the family observed:

The nationalists who later became free India's power elite rewarded the Birla family with lucrative contracts. After independence, the Birlas continued their lavish contributions to the ruling Congress Party. So accomplished are they in manipulating the bureaucracy, and so vast their network of intelligence, that they frequently obtain pre-emptive licenses, enabling them to lock up exclusive rights for businesses as yet unborn.

Our research confirms a core tenet of modern development theory: the will and power to limit the room for politically generated wealth – more generally, what economists call "rent-seeking," the use of market power or political influence to collect unearned profits – is critical to economic development. That finding, of course, raises a key question: why do some countries manage to exert the aforementioned will and power, while others do not? The answer is at the nub of modern development economics, which emphasizes the role of institutions. Possibilities range from differences in education policy to the social status of women to openness to trade and urbanization. What our research strongly suggests, though, is that as long as wealth accumulation is not brought about through political connections and dominated by rent-seeking, rising inequality is not, in itself, a barrier to economic growth.





Financing High-Risk Medical Research
A Proposal from FasterCures


By Melissa Stevens



In the past few years, the media has showered us with headlines about record-setting biotech financing – outsized venture-capital rounds, unprecedented public market appetite for IPOs, and robust sector returns. But a closer look suggests there is more froth than substance at this frontier of medicine and science.

Even with substantial investment inflows, capital constraints continue to hamstring our ability to advance all of the truly novel and potentially life-changing treatments that technology is making possible. There are 10,000 known diseases, yet there are viable treatments and cures for only about 500 of them. Certainly money alone won't bridge the innovation chasm. But more capital, and more of it directed to early-stage research, would do much to increase medical R&D productivity.

those devilish details
Life-science venture funds are clearly raising money – and lots of it. In 2015, the mainstays of the sector, among them Flagship Ventures, Atlas Ventures and MPM Capital, took in more than a quarter-billion dollars each. But an analysis of venture data by the journal Health Affairs shows that the sector is becoming more conservative, moving away from funding companies with technologies in earlier stages of development toward those with technologies in later stages. In 2009, the early/late distribution was 62 percent and 38 percent, respectively. Five years later, only 45 percent of funding was being allocated to early-stage companies.

Why the shift? It all comes down to risk – scientific risk, regulatory risk, reimbursement risk. Only 1 in every 10,000 discoveries made at an academic research bench ever ends up in the hands of patients.

MELISSA STEVENS is the executive director of the Milken Institute's Center for Strategic Philanthropy.


Preclinical research is a critical phase in the early-stage R&D process. It is where general scientific knowledge starts to be applied to drug development in preparation for testing in humans. However, only 5 out of 250 compounds will make it through the preclinical stage to clinical trials. And the growing realization of how long the early-stage odds really are has led investors to opt for ventures with assets that are already progressing into later clinical development.

An analysis of the trend by Bruce Booth of Atlas Ventures drives home the point. Despite the growth of venture investment, for years the funds have collectively financed only 100 to 150 companies (and only 20 to 30 true startups) each quarter. What's more, almost half of the financing in the second quarter of 2015 was allocated to the top 10 deals. So the lion's share of that increased cash is flowing to a chosen few already approaching the finish line. Life-science venture capital is thus failing us, in the sense that the sector is not diversifying into the wild-card ideas from which true breakthroughs are likely to emerge.

crisis breeds creativity
That said, there's a glimmer at the end of the tunnel. New players and practices are emerging to manage biotech risk more efficiently, making earlier investment more palatable. First, pharmaceutical companies, which have traditionally focused on clinical trials rather than on preclinical research, are starting to partner earlier in the R&D process to find assets to fill their clinical pipelines. Also, accelerator organizations like BioMotiv, a key component of the public-private Harrington Drug Discovery Project, are building specialized expertise to help move compounds through early-stage research in a capital-efficient way.



[Figure: The tortuous path from discovery to approval – roughly 10,000 compounds enter drug discovery (6.0 years); about 250 survive to preclinical testing; 5 reach clinical trials (6.5 years); and, after FDA review (1.5 years), 1 drug is approved.]


Their work prepares technological innovations for clinical testing with the ultimate goal of passing the baton to large pharmas for later-stage development.

But perhaps the most profound change under way is the entrance of venture philanthropy into life-science finance. Philanthropy accounts for only about 3 percent of total health R&D investment in the United States, but can have an outsized impact because it can be nimbly deployed to offset risk in early-stage research.

Mainstream disease research foundations are flexing their financing muscle. In fact, many organizations in FasterCures's TRAIN (The Research Acceleration and Innovation Network) group of mission-driven foundations are utilizing their capital to move promising research across key funding gaps. For example, the Juvenile Diabetes Research Foundation committed $5 million to T1D Innovations, a for-profit venture-creation entity focused on developing Type 1 diabetes therapies. The Leukemia & Lymphoma Society has funneled millions of dollars directly into companies through its Therapy Acceleration Program, and the National MS Society has done the same through its Fast Forward venture-like vehicle. And let us not forget the landmark work of the Cystic Fibrosis Foundation, which strategically deployed $150 million in Vertex Pharmaceuticals over several years, an investment that culminated in FDA approval of Kalydeco, the first cystic fibrosis medication that treats the disease itself rather than the symptoms. This foundation funding is creating business incentives to overcome private risk aversion and smooth the journey from R&D to commercialization.

revamped venture for a faster cure
The Milken Institute and its FasterCures center, which is focused on accelerating medical solutions, set out to design a new venture vehicle that would harness these trends – appetite for new assets, capital-efficient preclinical development and venture philanthropy used to limit private risk – in order to focus investment on early-stage companies, where it is needed most.

The FasterCures Ventures (FCV) model is a blueprint for a financial instrument that brings together investor classes with different interests and aligns disparate risk-reward ratios for each to achieve both competitive private returns and high social returns. The idea is to mix and match three types of investors – market investors, pharmaceutical companies and venture philanthropy – all of which are stakeholders in developing new treatments.

Market Investors (Class A). These investors – traditional venture limited partners such as institutional investors, endowments and large family offices – would put up half the funds. Limited partners are looking exclusively for financial returns; they want the greatest return on their capital for the lowest risk.

Pharmaceutical Companies (Class B). These investors would provide 20 percent of the total funding. They seek new compounds that are ready for human trials – the link in the R&D value chain where pharmas have the greatest expertise. Such compounds are most valuable to them because, if successfully brought to market, the resulting drugs would bring billions of dollars to their top lines.

Philanthropic Investors (Class C). These investors – disease-specific public charities, private foundations and individual philanthropists or families – would contribute 30 percent of the total fund. They are primarily interested in the social returns on their investments: moving potentially life-saving drugs beyond the preclinical stage. Their focus is on preventing experimental therapies from being shelved for lack of investors with an appetite for risk. They also want their money to go further and thus are interested in mechanisms that afford greater recyclability than grants alone.

windfalls and waterfalls
The power of FCV is in the alignment of the interests of the three investor classes. To this end, the "waterfall" of claims against revenues is structured using preferential payouts and capped returns. First, the market investors and pharmaceutical companies would recoup their investments and earn up to a 3 percent internal rate of return (IRR) before the philanthropists got back any of their capital. But the pharmaceutical companies would forfeit any additional returns in exchange for having a first right of negotiation for assets financed by FCV.

Next in line on the revenue waterfall, philanthropic investors would get their capital back. But they would accept a below-market return thereafter – perhaps 1 percent – so that market investors could look forward to more upside. Finally, the residual returns would flow to market investors. One asterisk here: it would make sense to give the philanthropic investors a share of any exceptionally high payout – a "home-run" clause, if you will. The FCV model suggests that, once market investors achieved a 20 percent internal rate of return, additional revenues would be shared by the philanthropists and market investors in proportion to their initial investments.

Consider the way the FCV aligns incentives across these three investor classes. Because of the downside protection offered by the venture philanthropists, market investors would be partially insulated from loss. All of their capital would be returned even if the fund lost 30 percent of its capital and never earned a penny. Moreover, they would have an enhanced upside, since the capped returns for the other two investor classes would shift almost all of the additional earnings to them. Meanwhile, the FCV offers pharmaceutical companies a good chance of getting all their capital back plus a modest return; more important, it gives them an inside track on compounds that have made it past preclinical hurdles. Philanthropic investors, for their part, would have a chance to make a nominal financial return. But just as important, they would leverage their philanthropic investment by contributing only 30 percent of the fund's total capital.
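To make the mechanics concrete, here is a minimal sketch of how such a waterfall might be computed. It compresses two decades of cash flows into a single distribution and uses simple returns in place of IRRs; the 50/20/30 capital split, the 3 percent and roughly 1 percent preferred returns and the 20 percent home-run threshold come from the description above, while the payment ordering within the first tier and all names are my own assumptions.

    # Minimal sketch of the FCV waterfall: single-period, simple returns
    # standing in for IRRs. Paying Class A before Class B within the first
    # tier is an assumption; the article does not specify the ordering.
    def fcv_waterfall(revenue, fund=100.0):
        a0, b0, c0 = 0.50 * fund, 0.20 * fund, 0.30 * fund  # A/B/C capital
        payout = {"A": 0.0, "B": 0.0, "C": 0.0}

        def pay(cls, amount):
            nonlocal revenue
            paid = min(max(amount, 0.0), revenue)
            payout[cls] += paid
            revenue -= paid

        pay("A", a0 * 1.03)   # Class A: capital back plus up to 3 percent
        pay("B", b0 * 1.03)   # Class B: likewise, and B's return is capped here
        pay("C", c0 * 1.01)   # Class C: capital back plus roughly 1 percent
        pay("A", a0 * 1.20 - payout["A"])   # residual to A, up to a 20% return
        if revenue > 0:       # home-run clause: A and C split the remainder
            payout["A"] += revenue * a0 / (a0 + c0)
            payout["C"] += revenue * c0 / (a0 + c0)
        return payout

    # A strong outcome: $180 million of payments back to a $100 million fund.
    print(fcv_waterfall(180.0))   # A ≈ 103.2, B = 20.6, C ≈ 56.2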

testing the fcv
We modeled the capital structure in order to understand the economics and thus the feasibility of using FCV to finance preclinical drug development.



Multiple steps must be completed before a compound can move into clinical testing. For instance, one must identify which compounds hit their biological target, then optimize their chemical structures to enhance their specificity and minimize toxicity, and finally test these compounds in animals to get some sense of how they might perform against human disease. Past experience suggests that up to 40 percent of compounds are likely to fail to make the transition from one preclinical step to the next. Thus, molecules that complete all of these preclinical steps have been stripped of much of the risk of failure and are, of course, more attractive to commercial partners down the line.

INVESTMENT FUND SCENARIOS

                              BEST CASE    AVERAGE CASE    WORST CASE
Assets Developed                     35              21            14
Total IRR                        30.87%          13.14%         1.05%
Investor Class A IRR             20.48%          11.36%         4.01%
Investor Class B IRR              3.00%           3.00%         3.00%
Investor Class C IRR             13.66%           1.00%      -100.00%
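The attrition arithmetic is worth pausing over. Here is a minimal sketch, assuming – as a simplification the article does not spell out – that the up-to-40-percent failure rate applies uniformly at each of the three steps just described:

    # Expected preclinical attrition under a uniform 40 percent per-step
    # failure rate (an illustrative assumption). Step names follow the
    # examples in the text.
    STEPS = ["target confirmation", "lead optimization", "animal testing"]

    def expected_survivors(n_assets, p_fail=0.40):
        surviving = float(n_assets)
        for step in STEPS:
            surviving *= 1.0 - p_fail
            print(f"after {step}: {surviving:.1f} expected survivors")
        return surviving

    # The 21 average-case assets from the table above:
    expected_survivors(21)   # 12.6, then 7.6, then about 4.5 reach the clinic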

We modeled the impact of a $100 million fund that would deploy capital over five years. Again, we assume an asset would be licensed in at the time that lead candidates are identified and would be licensed out to a pharmaceutical company once the compound was selected for clinical trials or was approved by the FDA for testing in humans. FCV would finance the relevant drug-development work between the entry and exit points. Once a molecule is licensed to a pharmaceutical partner, the company would bear the costs of development through the three phases of clinical trials. Payments back to FCV would be triggered upon completion of


major development and commercialization milestones, including the completion of Phase I, II and III trials, securing an NDA (that is, FDA approval of a new drug application) and the completion of the first commercial sale. Additionally, a small royalty based on sales of the drug would be paid back to FCV.

The financial model was created using industry averages for development costs at each stage, historical success rates to estimate the transition probabilities for moving between steps along the preclinical drug-development chain, and industry averages for meeting milestones in the clinical testing phase. We developed three scenarios (best, average and worst case) to understand the robustness of the model. Under the best-case scenario, we assumed the shortest development times, highest transition probabilities, lowest development costs and highest milestone payments. Under the worst-case scenario, we assumed the polar opposite. The average-case scenario uses the midpoints for each of these variables.

As noted above, in our average-case scenario, the $100 million FCV could finance the preclinical development of 21 assets and generate market-investor returns of 11.36 percent. Life-science venture capital IRRs have averaged 15 percent over the past decade, so market investors would sacrifice some upside in return for the downside protection provided by the investments of the pharmaceutical and the philanthropic investor classes.

The worst-case model highlights the differentiating element of the FCV. Because philanthropic investors would absorb the most risk and take a loss of up to 100 percent of invested capital, market investors' worst-case yield is still an estimated 4.01 percent IRR. Under best-case assumptions, market investors would garner just over a 20 percent IRR. Yet, because of the home-run clause, philanthropic investors would be able to partake in some of the upside of these extraordinary returns. Thus, in this scenario, philanthropic investors could expect to earn an IRR of 13.66 percent.


Note that in all three scenarios, the pharmaceutical companies would earn the capped 3 percent IRR along with what they really need: preferential access to compounds ready for clinical trials.

risks & considerations
There's no free lunch here, of course. First, the fund tie-up – the period in which there would be no capital recovery and no income – would necessarily be considerably longer than what biotech investors have come to expect. This is because drug development and testing is a long and arduous process. Currently it takes 10 to 15 years for an academic discovery to make the journey to the prescription pad. Thus, the time to exit is long, as is the time to recoup revenues through the subsequent milestone payments and royalties from commercial sale. We see the FCV as a 20-year fund, compared with 7 to 10 years for the typical life-science venture capital fund today. To compensate for the total duration, we assume that returns would be passed through to investors as soon as they are available. So in the average case, market investors would receive returns in years 7 to 20, and pharmas would receive returns in years 7 to 11, while the philanthropists would receive returns in years 12 and 13. Returns would start a year earlier in the best-case scenario and a year later assuming worst-case conditions.

Aligning the interests of the three classes of investors would not always be possible. There is an inherent tension between the interests of market investors, which would want to manage risk as much as possible by financing a diverse portfolio of compounds, and the laser-like attention of the philanthropists most likely to participate. A fund focused on a specific disease, like lung cancer or Alzheimer's, would appeal to single-purpose foundations or wealthy families touched by a specific medical condition. But the success or failure of assets within individual disease classes is more correlated, which raises the risk of the portfolio as a whole. So, additional consideration would need to be given to diversification strategies that minimize the tension. One approach would be to launch multiple FCV structures across diseases and formulate the capital structure such that philanthropic investors take the first-loss tranche with respect to a specific disease, while market investors participate in the upside across funds in all diseases.

the way forward
This type of stacked capital financing has been shown to be effective in attracting new capital to close key funding gaps in other countries. For example, the Israeli Life Sciences Fund (ILSF), whose financial architecture was designed with the help of a Milken Institute Financial Innovations Lab, uses a similar model of preferential returns and first-loss positioning. The ILSF was a response to the flight of life-science intellectual property to other countries for development. It was established in 2012 to finance the development of both drugs and medical devices within Israel's borders. The government provided some $50 million as a limited partner. Its funding serves as the first-loss capital through a preferred return scale, which allows for positive returns to other limited partners even if the fund suffers as much as a 10 percent loss. This structure has proved an attractive proposition for the fund managers (OrbiMed Israel Partners LP), who were able to raise $172 million on top of the government's $50 million.
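The arithmetic of that cushion is easy to check. In the sketch below, the $50 million government stake, the $172 million from other limited partners and the 10 percent loss threshold come from the text; the assumption that losses are absorbed dollar-for-dollar by the government tranche before touching anyone else is a simplification of terms the article does not spell out.

    # Rough check of the ILSF first-loss cushion (simplified illustration).
    gov_first_loss = 50.0    # government LP capital ($ millions)
    other_lps = 172.0        # capital raised from other limited partners
    fund = gov_first_loss + other_lps

    loss = 0.10 * fund       # a 10 percent fund-wide loss = $22.2 million
    print(f"loss ${loss:.1f}M vs. first-loss cushion ${gov_first_loss:.1f}M")
    print("other LPs kept whole:", loss <= gov_first_loss)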

We think market conditions are right to pilot the FCV model in the United States. For one thing, there is a deep appetite for alternative assets, and market investors want exposure – albeit predictable exposure – to the life sciences. For another, venture philanthropists, buoyed by some well-publicized successes, seem eager to get into the driver's seat on drug development. We have an opportunity – really an obligation – to challenge the status quo of the medical research system, including its traditional financing. Through FCV, we can overcome investor silos, align interests and incentives, and direct capital to early-stage research, where the potential gains to society are greatest.



Who Pays for Free Parking?

by eren inci


I'm guessing you don't think much about parking spaces except when you are searching for one. But indulge me by thinking about them a bit now.

EREN INCI is an associate professor of economics at Sabanci University in Istanbul.

For starters, most economic transactions you make depend in part on the ability of someone (you, a long-haul trucker, the UPS delivery person, an ambulance driver, and so on) to park. And stationary vehicles occupy vast amounts of land everywhere in the world. Indeed, a simple back-of-the-envelope calculation suggests that, in the United States, parking takes up more space than the whole state of Massachusetts. In Europe, where cars are smaller and fewer in number, parking still takes up an area half the size of Belgium.

Much of the recent interest among city planners and long-suffering urban drivers has focused on the potential for digital technology, market-driven pricing and wireless communications to reduce search times for parking. In San Francisco, a substantial portion of downtown parking meters respond directly to the availability of spaces by changing meter rates. The goal is to ensure there is always some parking available for drivers willing to pay enough and, in the process, to reduce congestion by sharply cutting the number of drivers on the street who are searching for spots at any one time.

But San Francisco's parking experiment is facing serious opposition, especially in residential neighborhoods yet to have meters. Many people, after all, resent the very notion of paying for parking. One way or another, though, someone always pays, often indirectly, in the form of higher prices for something else. Moreover, the enormous amounts of land and structures needed for parking almost guarantee that mispricing parking spaces will have substantial consequences for economic efficiency and societal welfare.

Here, I examine the implications of two kinds of parking in which costs are routinely paid indirectly. Start with shopping-mall parking. Shoppers may think they don't pay for mall parking since, with relatively rare exceptions, nobody charges them directly. In fact, the cost of parking is reflected in store rents, and higher rents are reflected in the prices of the goods and services the stores sell. By the same token, city dwellers may think that curbside parking is free in front of their houses. But the value of the parking is capitalized in housing prices.
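Before turning to those two cases, the curious may want to see how the back-of-the-envelope land calculation above might run. The article reports only the conclusion; every input below – the vehicle count, the spaces per vehicle and the square footage per space – is my own rough assumption.

    # One plausible version of the back-of-the-envelope land calculation.
    vehicles = 250e6          # U.S. registered vehicles (rough)
    spaces_per_vehicle = 3    # home, work, shopping, curb... (assumed)
    sqft_per_space = 300      # stall plus a share of aisles (assumed)

    sq_miles = vehicles * spaces_per_vehicle * sqft_per_space / 27_878_400
    print(f"~{sq_miles:,.0f} square miles of parking")  # roughly 8,000
    # Massachusetts covers roughly 7,800 square miles of land.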

mall parking
According to a survey by the International Council of Shopping Centers and the Urban Land Institute, a typical mall in the United States creates four to six parking spaces per 1,000 square feet of gross leasable area. And since those spaces use up more than 1,000 square feet, a typical mall allocates more footage for parking than for actual shopping. Yet the same survey found that parking is free at 94 percent of shopping malls. This latter number may not surprise you, because free parking has almost come to be seen as a right. But it was not so obvious to Kevin Hasker of Bilkent University in Turkey, who has been exploring the economics of parking with me: at first blush, it would seem to make sense to charge parkers for the cost of the service rather than to find another pocket.





One can always come up with behavioral explanations for free mall parking, based on the assumption that shoppers find direct parking fees unacceptable the way people balk at paying for information from the Internet. Alternatively, we might hypothesize that, as long as one mall seeks to lure shoppers with free parking, competing malls must match them in order to attract patrons.

Both explanations have some bite to them, but both are also problematic. The first seems tied to the tautology that people don't like it because they don't like it. In any event, behavioral theories raise more questions than they answer. The analogue of free parking in the mall business would be restaurants with no separate table fees – yet some restaurants do charge table fees. A cover charge at a bar is nothing but a fee for



occupying space in the bar. And many pasticcerias in Italy charge higher prices to patrons who occupy tables than to those who choose to gorge on zeppole at the counter.

The second explanation is no more satisfying. It leaves open the question of why inter-mall competition in this "two-sided market" – the mall is selling services to both shoppers and shops, which are interdependent – is reflected in store rents and prices of goods and services, rather than in parking fees.

Our own theory approaches the question very differently. One can think of shopping as participation in a lottery of sorts in which shoppers either win (find what they're looking for) or lose (don't find it). Shoppers who make the purchases leave the mall satisfied. But, of course, not all mall trips have happy endings: sometimes shoppers leave empty-handed. Since they "pay" for parking only by making purchases at prices that include an implicit charge for parking, those who buy nothing don't pay for parking. Thus, in a very real sense, bundling the cost of parking with the price of goods is a form of insurance for shoppers: if they don't buy, they pay nothing; if they do buy, they effectively pay for the parking of unsuccessful shoppers as well as their own.

Our insurance-based theory is surprisingly robust. First, experimental work in economics shows that almost all people are risk-averse even for lotteries over small items, implying they'd rather participate in the parking lottery – paying a lot for parking when a trip succeeds and nothing when it doesn't – than pay a smaller certain fee upfront. Second, if a monopolist mall decides to charge nothing, the addition of a competing mall would hardly increase the mall's incentive to raise the fee above zero.

Consider, too, that free parking remains the rule in spite of the option of "validated" parking, in which merchants pick up the parking costs only of those who do buy something.


The aforementioned survey shows that 86 percent of the malls in the United States do not use validated parking, suggesting that mall managers and merchants mostly buy into the insurance approach by charging nothing to shoppers who leave empty-handed.

What parking fee do we want malls to charge?

The insurance approach explains why malls want to provide free parking. But what’s good for the decision maker (the mall management) isn’t necessarily good for society as a whole. When the price of one service (parking) is embedded in the price of another good (stuff sold at the mall), economists worry that inefficiencies will arise – that is, the marginal cost of providing the parking won’t equal the marginal benefit to society as a whole. In this case, though, embedding parking costs seems to be good for society, too. The reason is simple. Consider a thought experiment in which malls decide to generate more income to cover the costs of parking. Adding a direct parking fee distorts more than raising rents (and thus mall prices) because it would drive away both kinds of shoppers – those who will end up buying goods (the winners of the lottery) and those who will not (the losers of the lottery). By contrast, raising mall rents (and thus mall shop prices) tends to drive away only the former group.
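A toy numerical example helps make that logic concrete. The structure – risk-averse shoppers facing win-or-lose shopping trips, with parking either charged directly or bundled into prices – follows the argument above; the log utility function and every dollar figure are invented for illustration only.

    # Toy expected-utility comparison: direct parking fee vs. bundled parking.
    import math

    p = 0.5          # probability a mall trip ends in a purchase
    surplus = 20.0   # consumer surplus from a successful purchase
    fee = 4.0        # per-visit cost of providing a parking space
    wealth = 100.0

    def u(w):
        return math.log(w)   # concave utility: the shopper is risk-averse

    # Direct fee: every visitor pays, winners and losers alike.
    direct = p * u(wealth + surplus - fee) + (1 - p) * u(wealth - fee)

    # Bundled "free" parking: buyers pay fee/p through higher prices,
    # covering the losers' parking; empty-handed shoppers pay nothing.
    bundled = p * u(wealth + surplus - fee / p) + (1 - p) * u(wealth)

    print(f"direct fee: {direct:.5f}, bundled: {bundled:.5f}")

With these made-up numbers, the bundled arrangement yields the higher expected utility, because the larger parking payment falls due only in the state in which the shopper has found what she wanted.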

curbside parking
The second example of free parking that isn't truly free is curbside parking. Although there has been a trend away from free to paid curbside parking almost everywhere as municipalities attempt to generate revenue and ration scarce space, there is still an enormous amount of free curbside parking available.



But if you live in a place with free parking, don't be so sure you're the lucky one. Our research suggests that at least some of the value of that free parking is reflected in your rent or in the market value of your home. So, the issue is not whether you pay, but how you pay – directly or bundled in the cost of housing. This will all be clearer if we first look at on-site

parking that is sold as a bundle with housing. Most cities impose minimum parking requirements that determine how many parking spaces each new land use must include. In most American cities, developers must provide at least one and, in many cases, two spaces for each housing unit. This is no small deal: after taking into account the space



allocated for ramps and maneuvering, the area occupied by two parking spaces is usually larger than a two-bedroom apartment.

My aunt recently bought an apartment that comes with a private parking space in the basement garage. But it is safe to say she'll never use it, since she is 77 years old and has never driven a car. If the land-use parking requirement were dropped, the savings would be reflected in the competitive market price of housing. In fact, Michael Manville of Cornell University found that in San Francisco bundled parking increases the average asking price for an apartment by $22 per square foot.

Now back to curbside parking. When the parking space in front of your apartment building is free (and reliably available), you are, in essence, using that parking space as your own parking garage. Hence, one would suspect that at least some of the value of that parking space will be included in the price or rent of the apartment.

Istanbul's crusade against free curbside parking

A natural experiment of sorts that took place in Istanbul gave us the opportunity to test the hypothesis. In the past, curbside parking was either free or operated by self-appointed "parking attendants." The city decided to end free and informal parking at the curb, and for that purpose created a parking company at the end of 2005 that took over the job of managing curbside parking spaces. The company expanded neighborhood by neighborhood, and thus there has been a gradual transition from free and informal parking to paid and formal. This allowed me and my co-authors, Ozan Bakis (Sabanci University) and Rifat Ozan Senturk (University of Texas at Austin), to estimate the impact of unbundling curbside parking spaces on housing prices and rents.

The three effects

After the transition from free parking to paid parking, nearby residents are no longer able to use the curbside as their own private garages. This should decrease housing prices in those neighborhoods.


That is, residents used to pay upfront for parking that was effectively bundled with the house price. Now, they must pay separately, which should reduce the market value of the house. We call this the unbundling effect. But, complicating matters, there are two countervailing effects. First, the transition from free to paid parking should reduce the demand for parking, which should mean less cruising for parking and thus less traffic congestion. This effect should make life more pleasant in the neighborhood, which in turn should lead to an appreciation of housing values. We call this the reduced-cruising effect. One has to be careful here, though. Higher parking fees do not always reduce traffic congestion; in fact, they may increase congestion by increasing parking turnover – which would tend to reduce housing values. The second countervailing effect is caused by the transition from informal parking to formal parking. In Istanbul, self-appointed parking attendants stood by the road and demanded money to "protect" parked cars. So, the market was really an informal market – one in which contracts between car owner and parking attendant could not be enforced. The parking company established by the city was held to a higher standard, increasing trust in the parking market and tending to increase housing prices. We call this the trust-enhancing effect.

The net impact on housing prices depends on which effects dominate, which one would expect to vary by city and perhaps by neighborhood. How rents react to the transition depends on whether landlords have market power (or renters do, thanks to government rent controls), which is also city-specific. If, for example, landlords can exercise market power, they might feel no obligation to reflect the depreciation in property values in lower

rents or to limit rent increases to reflect increases in property values.

So, what happened in Istanbul?

Before 2005, the city did not actively enforce the laws against informal parking attendants. This should not be terribly surprising in light of the reality that Turkey can't seem to prevent a lot of spontaneous appropriation of public property. A prime example is the theft of electricity, which may run as high as 70 percent of power generated in some cities. The informal parking attendants had strong incentives to favor residents over non-residents. If a resident insisted on complaining to the authorities about "extortion" by a parking attendant, the attendant could easily get into trouble. So, it was wise for them to keep residents pleased. Often, the parking attendants were the janitors, who reserved parking spaces for the residents of the buildings in which they worked by putting barrels or rocks next to the curb. A non-resident, on the other hand, was virtually without legal recourse, since he did not even know the name of the guy "selling" the curb space. Residents thus enjoyed a real pricing advantage over non-residents.

How do the three effects play out in Istanbul? When we look at the city as a whole, we find that housing prices decreased by more than $6 per square foot in the neighborhoods where the city started operating curbside parking. This corresponds to a 9 percent decrease in housing prices in these neighborhoods, which is significant. By contrast, our estimates show that rents in these neighborhoods are not statistically different from those in other neighborhoods. This suggests that landlords have market power, in the sense that competition did not drive down rents to compensate renters for the loss of their free parking spaces.
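For readers who think in code, the estimate just described is in the spirit of a hedonic regression with neighborhood and year fixed effects, sketched below. The article does not give the authors' actual specification, data or controls, so the file name, variable names and fixed-effects structure are all illustrative stand-ins.

    # Schematic hedonic regression: effect of formal paid parking on log
    # housing prices. Everything here is an illustrative stand-in.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("istanbul_listings.csv")   # hypothetical dataset

    model = smf.ols(
        "log_price ~ paid_parking + sqm + rooms + building_age"
        " + C(neighborhood) + C(year)",         # two-way fixed effects
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["neighborhood"]})

    # A coefficient near -0.09 on paid_parking would correspond to the
    # 9 percent citywide price decline reported above.
    print(model.params["paid_parking"])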



There is also some evidence that people were sufficiently mobile to seek cheaper parking. Car ownership rates increased in all neighborhoods over the period covered by our data. However, the increase was greater in neighborhoods where the parking company entered later, and greater still in neighborhoods where the parking company had yet to arrive.

Europe vs. Asia

Istanbul straddles the Bosporus strait, and housing market dynamics are quite different on the Asian and European sides. The Asian side is mostly residential and was developed later. Many buildings have on-site parking spaces sufficient to satisfy residents' demand. These spaces are mostly in the form of open surface lots located around the apartment buildings. Curbside parking is really for visitors. Thus, the ability to use the curbside as one's own parking garage is not as important on the Asian side. The European side, on the other hand, has many districts with narrow streets and few attached garages. Thus, the unbundling effect is extremely important on this side of the Bosporus.

When we estimate the impact of the parking change for each side of Istanbul, we find opposite results. On the European side, the market price of housing decreased in the neighborhoods where the city took over curbside parking, but rents remained the same as in neighborhoods with free curbside parking. On the Asian side, however, we find that housing prices increased (by 4 percent) rather than decreased, while rents increased (by 6 percent) rather than remaining the same. Landlords there, it appears, have market power, as rents rose more than property values. Why the increase in housing prices on the Asian side? Because the unbundling effect is less important there (remember


that the Asian-side residents usually have their own on-site parking spaces), while the reduced-cruising and trust-enhancing effects combined outweigh it. The overall analysis shows clearly that at least some of the costs of curbside parking are embedded in housing prices, although these parking spaces are not formally bundled with the housing units. So, we get the same result again: The cost of parking is embedded in the price of other things. As a matter of fact, the impact on housing prices and rents is large enough to significantly change the real income of property owners and tenants. On the European side of Istanbul, the transition made housing more affordable while it made housing more expensive on the Asian side.

takeaway
A parking space is a temporary home for your car, but a permanent commitment of land. When unpriced, its costs do not go away; rather, they are hidden in the price of everything else. If you live in an urban area, you may think that finding a parking space is difficult, and more often than not, it is. But that is only the tip of the iceberg: a parking space has more far-reaching effects on your welfare than you may assume.

Here, I've offered two tales of free parking leading to different endings. Shopping-mall parking may appear to be free, but in fact you pay for it every time you buy something at the mall. Happily, though, what looks like distorted pricing serves the broader interests of society as well as those of the mall owners. Curbside parking in front of your house may also appear to be free, but in fact its costs are already capitalized in housing prices and rents. Although I can't claim the last word on the subject, our estimates hint that free curbside parking produces negative welfare consequences.


book excerpt

The Rise and Fall of American Growth


Worried about lagging economic growth?

by robert gordon

Never fear: Silicon Valley will save us. … The certainty that technological change is far and away the most important driver of economic growth in advanced industrialized economies is at least as old as Robert Solow's 1956 landmark analysis. Robert Gordon, an economist at Northwestern known for his inclination to rain on his colleagues' parades, certainly doesn't dispute the point. But he does argue that there is nothing inevitable about technological change. Indeed, his new book, The Rise and Fall of American Growth,* excerpted here, makes the case that the genie has deserted us – that the slowdown in productivity over the past decade is prelude to a long dry spell in which promised breakthroughs ranging from artificial intelligence to self-driving vehicles will have only a modest impact on living standards. Read it and weep.

*published by princeton university press 2016. all rights reserved.

— Peter Passell



Can the future match the great inventions of the past? The epochal rise in the U.S. standard of living that occurred from 1870 to 1940, with continuing benefits to 1970, represents the fruits of the Second Industrial Revolution (IR #2).

We wanted flying cars; instead we got 140 characters.
—Peter Thiel

Many of the benefits of this unprecedented tidal wave of inventions show up in measured GDP and hence in output per person, output per hour and productivity, which grew more rapidly during the half-century 1920-70 than before or since. Beyond their contribution to the record of measured growth, these inventions also benefited households in many ways that escaped measurement by GDP along countless dimensions, including the convenience, safety and brightness of electric light compared to oil lamps; the freedom from the drudgery of carrying water made possible by clean piped water; the value of human life itself made possible by the conquest of infant mortality.

The slower growth rate of measured productivity since 1970 constitutes an important piece of evidence that the Third Industrial Revolution associated with computers and digitalization has been less important than IR #2. Not only has the measured record of growth been slower since 1970 than before, but the unmeasured improvements in the quality of everyday life created by IR #3 are less significant than the unmeasured benefits of the earlier industrial revolution.

This chapter addresses the unknown future by closely examining the nature of recent innovations and by comparing them with future aspects of technological change that are frequently cited as most likely to boost the American standard of living over the next few decades. There is no debate about the frenetic pace of innovative activity, particularly in the sphere of digital technology, including robots and artificial intelligence. Instead, this chapter distinguishes between the pace of innovation and the impact of innovation on the growth rates of productivity.

innovation through history: the ultimate risk-takers
The entrepreneurs who created the great inventions of the late 19th century – not just Americans, including Thomas Edison and the Wright Brothers, but also foreigners, such as Karl Benz – deserve credit for most of the achievements of IR #2, which created unprecedented advances in the American standard


of living in the century after 1870. Individual inventors were the developers not just of new goods, from electric light to the automobile to processed corn flakes to radio, but also of new services such as the department store, mail-order catalog retailing and the motel by the side of the highway. Most studies of long-term economic growth attempt to subdivide the sources of




growth among the inputs, particularly the number of worker-hours, the amount of physical capital per worker-hour and the "residual" that remains after the contributions of labor and capital are subtracted out. That residual, defined initially in Robert Solow's pioneering work of the 1950s, often goes by its nickname, "Solow's residual," or by its more formal rubric, "total factor productivity" (TFP). Though primarily reflecting the role of innovation and technological change, increases in TFP also respond to other types of economic change going beyond innovation – for instance, the movement of a large percentage of the working population from low-productivity jobs on the farm to higher-productivity jobs in the city. To his own and others' surprise, Solow found that only 13 percent of the increase in U.S. output per worker between 1910 and 1950 resulted from an increase in capital per worker; this famous result seemed to "take the capital out of capitalism."

The usual association of TFP growth with innovation misses the point that innovation is the ultimate source of all growth in output per worker-hour, not just the residual after capital investment is subtracted out. Capital investment itself waxes and wanes depending not just on the business cycle but also on the potential profit made possible by investing to produce newly invented or improved products. As Evsey Domar famously wrote in 1961, without technical change, capital accumulation would amount to "piling wooden plows on top of existing wooden plows." Technological change raises output directly and induces capital accumulation to create the machines and structures needed to implement new inventions. In addition, innovations are the source of improvements in the quality of capital – for example, the transition from the rotary-dial telephone to the iPhone, or from the Marchant calculator to the personal computer running Excel. The standard technique of aggregating capital input by placing a higher weight on short-lived capital, such as computers, than on long-lived capital, like structures, has the effect of hiding the contribution of innovation – in shifting investment from structures to computers – inside the capital input measure.

This leaves education and reallocation as the remaining sources of growth beyond innovation itself. However, both of these also depend on innovation to provide the rewards necessary to make the investment to stay in school or to move from farm to city. This is why there was so little economic growth between the Roman era and 1750, as peasant life remained largely unchanged. Peasants did not have an incentive to become educated because, before the wave of innovations that began around 1750, there was no reward to the acquisition of knowledge beyond how to move a plow and harvest a field. Similarly, the reallocation of labor from farm to city required the innovations that began in the late 18th century and created the great urban industries to provide the incentive of higher wages to induce millions of farm workers to move. Thus every source of growth can be reduced back to the role of innovation and technological change.
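For readers who want the bookkeeping behind the residual, the standard growth-accounting decomposition can be written out explicitly. The Cobb-Douglas form and the notation below are textbook conventions, not something taken from Gordon's text:

    \Delta \ln A = \Delta \ln Y - \alpha \, \Delta \ln K - (1 - \alpha) \, \Delta \ln L

where Y is output, K the capital stock, L labor hours, \alpha capital's share of income and A total factor productivity. TFP growth is whatever output growth remains after the measured contributions of capital and labor are netted out – which is why it goes by the name "residual."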



The last three decades of the 19th century were the glory years of the self-employed American entrepreneur/inventor. A U-shaped interpretation of entrepreneurial history starts with a primary role for individual entrepreneurs, working by themselves or in small research labs, like Edison's. By the 1920s, the role of the individual entrepreneur reached the bottom part of the U, as innovation came to be dominated by large corporate research laboratories. Much of the early development of the automobile, culminating in the powerful Chevrolets and Buicks of 1940-41, was achieved at the GM labs. Similarly, much of the development of the electronic computer was carried out in the laboratories of large corporations. The transistor, the fundamental building block of modern electronics and digital innovation, was invented by a team led by William Shockley at Bell Labs in late 1947. The R&D division of IBM pioneered most of the advances of the mainframe computer era from 1950 to 1980. Improvements in consumer electric appliances occurred at large firms such as General Electric, General Motors and Whirlpool, while RCA led the early development of television.

But then the process began to climb the right side of the U, as the seminal

developments of the transition from mainframes to personal computers and the Internet were pioneered by individual entrepreneurs. A pivotal point in this transition was the decision by IBM, the developer in 1981 of the first widely purchased personal computer, to farm out not just the creation of the operating-system software but its ownership to two young entrepreneurs, Paul Allen and Bill Gates, who had founded Microsoft in 1975. The Third Industrial Revolution, which consists of the computer, digitalization and communication inventions of the past 50 years, has been dominated by small companies founded by individual entrepreneurs, each of whom created organizations that soon became very large corporations. Allen and Gates were followed by Steve Jobs at Apple, Jeff Bezos at Amazon, Sergey Brin and Larry Page at Google, Mark Zuckerberg at Facebook and many others.

The left side of the entrepreneurial "U" is well documented. The percentage of all U.S. patents granted to individuals fell from 95 percent in 1880, to 73 percent in 1920, to 42 percent in 1940, and then gradually to 21 percent in 1970 and 15 percent in 2000. The decline in the role of individuals occurred not just because of the increased capital requirements of ever more complex products, but also because the individuals who developed the most successful products formed large business enterprises. Edison's early light bulb patents ran out in the mid-1890s, leading to the establishment of General Electric laboratories to develop better filaments. Around the same time, Bell's initial telephone invention had become the giant AT&T, which established its own laboratory (later known as Bell Labs); by 1915, it had developed amplifiers that made nationwide long-distance telephone calls feasible.





Successive inventions were then credited to the firm rather than the individual. Furthermore, a natural process of diminishing returns occurred in each industry. The number of patents issued in three industries that were new in the early 20th century – the automobile, airplane and radio – exhibits an initial explosion of patent activity followed by a plateau or, in the case of automobiles after 1925, an absolute decline.

Individual inventors flourished in the United States in part because of the democratic nature of the patent system, which allowed them to develop their ideas without a large investment in obtaining a patent; once the patent was granted, even inventors who lacked personal wealth were able to attract capital funding and sell licenses.

The failure of the share of patents claimed by individuals to turn around after 1980 appears to contradict the U-shaped evolution of innovation. Instead, that share remains at 15 percent, down from 95 percent in 1880. This may be explained by the more rapid formation of corporations by individuals in the past three decades than in the late 19th century. Though the Harvard dropout Bill Gates may be said to have invented personal computer operating systems for the IBM personal computer, almost all of Gates' patents were obtained after he formed Microsoft in 1975. The same goes for the other individuals who developed Google's search software and Facebook's social network.

the historical record: the growth of total factor productivity
The overwhelming dominance of the 1920-70 interval in making possible the modern world is clearly evident. Though the great inventions of IR #2 mainly took place between 1870 and 1900, at first their effect was small.

Paul David provided a convincing case that almost four decades were required after Edison's first power station in 1882 for the development of the machines and methods that finally allowed the electrified factory to emerge in the 1920s. Similarly, Karl Benz's invention of the first reliable internal combustion engine in 1879 was followed by two decades in which inventors experimented with brakes, transmissions and other ancillary equipment needed to transfer the engine's power to axles and wheels. Even though the first automobiles appeared in 1897, they did not gain widespread acceptance until the price reductions made possible by Henry Ford's moving assembly line, which was introduced in 1913.

The digital revolution, IR #3, also had its main effect on TFP after a long delay. Even though the mainframe computer transformed many business practices starting in the 1960s, and the personal computer largely replaced the typewriter and calculator by the 1980s, the main effect of IR #3 on TFP was delayed until the 1994-2004 decade, when the Internet, Web browsers, search engines and e-commerce produced pervasive changes in every aspect of business practice.

That brings three questions front and center: First, why was the main effect of IR #3 on TFP limited to the 1994-2004 decade? Second, why was TFP growth so slow in the subsequent 2004-14 decade? Third, what are the implications of recent slow TFP growth for the future evolution of TFP and labor productivity over the next quarter century?

achievements to date of the third industrial revolution
IR #3's main impact on TFP growth was driven by an unprecedented and never-repeated rate of decline in the price of computer speed and memory, and by a never-since-matched surge in the share of GDP devoted to investment in information and computer technology (ICT).



The mediocre record of TFP growth after 2004 underlines the temporary nature of the late 1990s revival. More puzzling is the absence of any apparent stimulus to TFP growth in the quarter century between 1970 and 1994. After all, mainframe computers created bank statements and phone bills in the 1960s and powered airline reservation systems in the 1970s. Personal computers, ATMs and bar code scanning were among the innovations that created productivity growth in the 1980s. Reacting to the failure of these innovations to boost productivity growth, Robert Solow quipped, "You can see the computer age everywhere but in the productivity statistics." The best explanation: the gains from the first round of computer applications were partially offset by a severe slowdown in productivity growth in the rest of the economy.

The achievements of IR #3 can be divided into two major categories: communications and information technology. Within communications, progress started with the 1983 breakup of the Bell Telephone monopoly. After a series of mergers, landline service was provided primarily by a new version of AT&T and by Verizon, soon to be joined by major cable television companies, such as Comcast and Time-Warner, which offered landline phone service as part of their cable TV and Internet packages. The mobile phone, the major advance in the communications sphere, made a quick transition from heavyweight brick-like models in the 1980s to sleek small instruments capable of phoning, messaging, e-mailing and photography by the late 1990s. The final communications revolution occurred in 2007 with the introduction of Apple's iPhone. By 2015, there were 183 million smartphone


users in the United States, or roughly 60 per 100 members of the population.

The "I" and the "T" of ICT began in the 1960s with the mainframe computer, which eliminated routine clerical labor previously needed to prepare telephone bills, bank statements and insurance policies. Credit cards would not have been possible without mainframe computers to keep track of the billions of transactions. Gradually, electric memory typewriters, and later, personal computers, eliminated repetitive retyping of everything from legal briefs to academic manuscripts.

In the 1980s, three additional stand-alone electronic inventions introduced a new level of convenience into everyday life. The first of these was the ATM, which made personal contact with bank tellers unnecessary. In retailing, two devices greatly raised the productivity and speed of the checkout process: the bar code scanner, and the authorization devices that read credit cards and deny or approve a transaction within seconds.

The late 1990s, when TFP growth finally revived, witnessed the marriage of computers and communication. Within the brief half-decade between 1993 and 1998, the stand-alone computer was linked to the outside world through the Internet, and by the end of the 1990s, Web browsers and e-mail had become universal. The market for Internet services exploded, and by 2004, most of today's Internet giants had been founded. Throughout every sector, paper and typewriters were replaced by flat screens running powerful software.

Although IR #3 was indeed revolutionary, its effect was felt in a limited sphere of human activity – in contrast to IR #2, which changed everything. Categories of personal consumption little affected by the ICT revolution included food eaten at home and away, clothing and footwear, motor vehicles and motor fuel, furniture, household supplies and appliances.


In 2014, fully two-thirds of consumption expenditures went for services, including rent, health care, education and personal care. But here, the ICT revolution had virtually no effect. A pedicure is a pedicure whether the customer is reading a magazine or surfing the Web on a smartphone.

This brings us back to Solow's quip that we can see the computer age everywhere but in the productivity statistics. The final answer to Solow's computer paradox is that computers are not everywhere. We don't eat computers or wear them or drive to work in them or let them cut our hair. We live in dwellings that have appliances much like those of the 1950s, and we ride in vehicles that perform the same functions as in the 1950s, albeit with more convenience and safety.

What are the implications of the uneven progress of TFP? Should the lugubrious 0.40 percent growth rate of the most recent 2004-14 decade be considered the most relevant basis for future growth? Or should our projection for the future be partly or largely based on the 1.02 percent average TFP growth achieved by the decade 1994-2004? There are several reasons beyond the temporary nature of the TFP growth recovery in 1994-2004 to regard those years as unique and not relevant for the next several decades.

could the third industrial revolution almost be over?

What factors caused the TFP growth revival of the late 1990s to be so temporary and to die out so quickly? Most of the economy has already benefited from the Internet revolution, and in this sphere of economic activity, methods of production have been little changed over the past decade. The revolutions in everyday life made possible by e-commerce and search engines were already well established – Amazon dates back to 1994, Google to 1998
and Wikipedia and iTunes to 2001. Will future innovations be sufficiently powerful and widespread to duplicate the relatively brief revival in productivity growth between 1994 and 2004? Examination of the evidence does not lead to optimism.

The slowing transformation of business practices. The digital revolution centered on
1970-2000 utterly changed the way offices function. In 1970, the electronic calculator had just been introduced, but the computer terminal was still in the future. Office work required innumerable clerks to operate the keyboards of electric typewriters that had no ability to download content from the rest of the world and that, lacking a memory, required repetitive retyping of everything from legal briefs to academic research papers. By 2000, every office was equipped with Web-linked personal computers that not only could perform any word-processing task, but could also perform any type of calculation virtually instantaneously as well as download multiple varieties of content. By 2005, flat screens had completed the transition to the modern office, and broadband service had replaced dial-up service at home.

In the past decade, business practices, while relatively unchanged in the office, have steadily improved outside of the office as smartphones and tablets have become standard business equipment. The cable guy arrives not with a paper work order and clipboard, but with a multipurpose smartphone. Product specifications and communication codes are available on the phone, and the customer completes the transaction by scrawling a signature on the screen. Paper has been replaced almost everywhere outside of the office. Airlines are well along in equipping pilots with smart tablets that contain all the information previously provided by large paper manuals. Maintenance crews at Exelon’s six nuclear power
stations in Illinois are the latest to be trading in their three-ring binders for iPads.

A leading puzzle of the current age is why the near-ubiquity of smartphones and tablets has been accompanied by such slow economy-wide productivity growth, particularly since 2009. One answer is that smartphones are used in the office for personal activities. Some 90 percent of office workers, whether using their office personal computers or their smartphones, visit recreational Web sites during the workday. Almost the same percentage admit that they send personal e-mails, and more than half report shopping for personal purposes during work time.

Stasis in retailing. Since the development of “big-box” retailers in the 1980s and 1990s, and the conversion to bar code scanners, little has changed in the retail sector. Payment methods have gradually changed from cash and checks to credit and debit cards. In the early years of credit cards in the 1970s and 1980s, checkout clerks had to make voice phone calls for authorization. Then there was a transition to terminals that would dial the authorization phone number, and now the authorization arrives within a few seconds.

The big-box retailers brought with them many other aspects of the productivity revolution. Walmart and others transformed supply chains, wholesale distribution, inventory management, pricing and product selection, but that productivity-enhancing shift away from small-scale retailing is largely over. The retail productivity revolution is high on the list of the many accomplishments of IR #3 that are largely completed and will be difficult to surpass in the next several decades. What is often forgotten is that we are well into the computer age, and many Home Depots and local supermarkets have self-checkout lines that allow customers to scan their paint cans or groceries through a standalone
terminal. But except for small orders, doing so takes longer, and customers still voluntarily wait in line for a human instead of taking the option of the no-wait, self-checkout lane.

The same theme – that the most obvious uses of electronic devices have already been adopted – pervades commerce. Airport baggage sorting belts are mechanized, as is most of the process of checking in for a flight. But at least one human agent is still needed at each airline departure gate to deal with seating issues and stand-by passengers. Restaurants have largely completed the transition to point-of-sale terminals that allow waitstaff to enter customer orders on screens spaced around the restaurant with no need to make a separate trip into the kitchen with a paper order form. But the waitstaff and the cooks remain human, with no robots in sight.
A plateau of activity in finance and banking. The ICT revolution changed finance and banking along many dimensions, from the humble street-corner ATM to the development of fast trading on the stock exchanges. Both the ATM and billion-share trading days are creations of the 1980s and 1990s. Average daily shares transacted on the New York Stock Exchange increased from only 3.5 million in 1960 to 1.7 billion in 2005 and then declined to around 1.2 billion per day in early 2015. Nothing much has changed in more than a decade. And despite all those ATMs – and a transition by many customers to managing their bank accounts online – the nation still maintains a system of 97,000 bank branches, and employment of bank tellers has declined only from 484,000 in 1985 to 361,000 recently. James Bessen, an economist at Boston University, explains the longevity of bank branches in part by the effect of ATMs in reducing the number of employees needed per branch from about 20 in 1988 to about 13 in 2004.
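Bessen’s per-branch figures can be combined with the 43 percent branch expansion he cites (noted just below) into a rough consistency check. This is my back-of-the-envelope sketch using his approximate numbers, not a published calculation:

```python
# Rough sketch: fewer employees per branch, but more branches
# (43 percent growth, per Bessen, over roughly the same period).
staff_per_branch_1988 = 20
staff_per_branch_2004 = 13
branch_growth_factor = 1.43
net_change = branch_growth_factor * staff_per_branch_2004 / staff_per_branch_1988 - 1
print(f"implied change in total branch employment: {net_change:+.0%}")  # about -7%
```

On these rough numbers, branch expansion offsets most of the per-branch staffing decline, which is why the ATM did not decimate teller employment.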
Cheaper branches, in turn, led banks to increase the number of branches by 43 percent over the same period. This illustrates how the role of robots (in this case, ATMs) in destroying jobs is often greatly exaggerated. Bessen also shows that the invention of bookkeeping software did not prevent the number of accounting clerks from growing substantially between 1999 and 2009.

Home and consumer electronics. In contrast to the decade or so of stability in procedures at work, life inside the home has been stable for nearly a half century. By the 1950s, all the major household appliances (washer, dryer, refrigerator, range, dishwasher and garbage disposal) had been invented, and by the early 1970s, they had reached most American households. Besides the microwave oven, the most important change has been air-conditioning; by 2010, almost 70 percent of American dwelling units were equipped with central AC. Other significant changes in the home since 1965 were all in the categories of entertainment, communication and information. Television made its transition to color between 1965 and 1972, then variety increased with cable television in the 1970s and 1980s, and finally picture quality was improved with high-definition signals and receiving sets. Variety increased even further when Blockbuster and then Netflix made it possible to rent an almost infinite variety of motion picture DVDs, and now movie and video streaming has become common. For the past decade, homes have had access to entertainment and information through fast broadband connections to the Web, and smartphones have made the Web portable. But now that smartphones and tablets have saturated their potential market, further advances in consumer electronics have become harder to achieve.

Decline in business dynamism. Recent research has used the word dynamism to describe the process of “creative destruction” by which new startups and young firms are the source of productivity gains as they shift resources away from old low-productivity firms. The share of all business firms consisting of young firms (aged five years or younger) declined from 14.6 percent in 1978 to only 8.3 percent in 2011, even as the share of firms exiting (going out of business) remained
roughly constant in the range of 8-10 percent. It is notable that the share of young firms had already declined substantially before the 2008-9 financial crisis. Measured another way, the share of total employment accounted for by firms no older than five years has declined by almost half, from 19.2 percent in 1982 to 10.7 percent in 2011. This decline was pervasive across retailing and services, and after 2000 the high-tech sector experienced a large decline in startups and fast-growing young firms. In another measure of the decline in dynamism, the percentage of people younger than 30 who owned stakes in private companies declined from 10.6 percent in 1989 to 3.6 percent in 2014.

Related research on labor market dynamics points to a decline in “fluidity” as job reallocation rates fell more than a quarter after 1990, and worker reallocation rates fell more than a quarter after 2000. Slower job and worker reallocation means that new job opportunities are less plentiful and it is harder to gain employment after long jobless spells.

objective measures of slowing economic growth

We now turn to objective measures that uniformly depict an economy that experienced a spurt of productivity and innovation in the 1994-2004 decade, but that has slowed since then, in some cases to a crawl.

Manufacturing capacity. The growth rate of manufacturing capacity proceeded at an annual rate between 2 and 3 percent from 1972 to 1994, surged to almost 7 percent in the late 1990s, and then came back down, becoming negative in 2012. The role of ICT investment in temporarily driving up the growth rate of manufacturing capacity in the late 1990s is well known. Brookings senior fellows Martin Baily and Barry Bosworth have emphasized that if the production of ICT equipment is stripped from the manufacturing data, TFP growth in manufacturing was an unimpressive 0.3 percent per year between 1987 and 2011. MIT economist Daron Acemoglu and co-authors have also found that the impact of ICT on productivity disappears once the ICT-producing industries are excluded. And among the remaining industries, there is no tendency for labor productivity to grow faster in industries that have a relatively high ratio of expenditures on computer equipment to expenditures on total capital equipment.

Net investment. The second reason that the productivity revival of the late 1990s is unlikely to be repeated anytime soon is the behavior of net investment (gross investment less depreciation). The ratio of net investment to the capital stock has been trending down since the 1960s relative to its 1950-2007 average value of 3.2 percent. In fact, during the entire period 1986-2013, the ratio exceeded that 3.2 percent average value for only four years, 1999-2002, all within the interval of the productivity growth revival. The 1.0 percent value of the five-year moving average in 2013 was less than half of the value in 1994 and less than a third of the 3.2 percent 1950-2007 average.

Thus the investment needed to support a repeat of the late 1990s productivity revival has been missing during the past decade.

Computer performance. The 1996-2000 interval witnessed the most rapid rate of decline in performance-adjusted prices of ICT equipment recorded to date. The faster the rate of decline in the ICT equipment deflator, the more quickly the price of computers is declining relative to their performance – or, equivalently, the more quickly computer performance is increasing relative to its price. The rate of decline of the ICT equipment deflator peaked at 14 percent in 1999, but then steadily diminished to barely 1 percent in 2010-14. The slowing rate of improvement of ICT equipment has been reflected in a sharp slowdown in the contribution of ICT as a factor of production to growth in labor productivity. The latest estimates of the ICT contribution by French economist Gilbert Cette and co-authors show it declining from 0.52 percentage points per year during 1995-2004 to 0.19 points per year during 2004-2013.

Moore’s Law. The late 1990s were not only a period of rapid decline in the price of computer power, but simultaneously a period of rapid change in the progress of computer chip technology. Moore’s Law was originally formulated in 1965 as a forecast that the number of transistors on a computer chip would double every two years. This predicted what actually happened between 1975 and 1990 with uncanny accuracy. The doubling time then crept up to 3 years during 1992-96, followed by a recovery in which it plunged to less than 18 months between 1999 and 2003. Indeed, this acceleration of technical progress in chip technology was the underlying cause of the rapid decline in the ratio of price-to-performance for computer equipment.
The doubling time reached a trough of 14 months in 2000, roughly the same time as the peak rate of decline in the computer deflator. But since 2006, Moore’s Law has gone off the rails: The doubling time soared to eight years in 2009 and then returned gradually to four years in 2014. Kenneth Flamm of the University of Texas examines the transition toward a substantially slower rate of improvement in computer chips and in the quality-corrected performance of computers themselves over the past decade. His data show that the “clock speed,” a measure of computer performance, has been on a plateau of no change at all since 2003, despite a continuing increase in the number of transistors squeezed onto computer chips.

These factors unique to the late 1990s – the surge in manufacturing capacity, the rise and subsequent decline in the contribution of ICT capital to labor productivity growth, and the shift in the timing of Moore’s Law – all create a strong case that the dot-com era of the late 1990s was unique in its conjunction of factors that boosted growth in labor productivity and of TFP well above the rates achieved during both 1970-94 and 2004-14. There are no signs in recent data that anything like the dot-com era is about to recur: manufacturing capacity growth turned negative during 2011-12, and the net investment ratio fell during 2009-13 to barely a third of its postwar average.
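To translate those doubling times into annual improvement rates, a small sketch (my arithmetic, not Flamm’s data) applies the exponential rule N(t) = N(0) x 2^(t/T), where T is the doubling time:

```python
# Annual improvement rate implied by a transistor-count doubling time T:
# from N(t) = N0 * 2 ** (t / T), one year of growth is 2 ** (1 / T) - 1.
for label, T in [("14 months (2000 trough)", 14 / 12),
                 ("2 years (Moore's original forecast)", 2.0),
                 ("4 years (2014)", 4.0),
                 ("8 years (2009)", 8.0)]:
    print(f"doubling every {label}: ~{2 ** (1 / T) - 1:.0%} per year")
```

The slide from roughly 80 percent a year at the 2000 trough to around 9 percent at the 2009 doubling time is what “off the rails” means in quantitative terms.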

can future innovation be predicted?

What’s in store for the next 25 years? The usual stance of economic historians, notably my Northwestern colleague Joel Mokyr, is that the human brain is incapable of forecasting innovations. He states without qualification that
“history is always a bad guide to the future, and economic historians should avoid making predictions.” He assumes that an instrument is necessary for an outcome. As an example, it would have been impossible for Pasteur to discover his germ theory of disease if Joseph Lister had not invented the achromatic-lens microscope in the 1820s.

Mokyr’s own optimism about future technological progress rests partly on the dazzling array of new tools that have arrived recently to create further research advances: “DNA sequencing machines and cell analysis,” “high-powered computers” and “astronomy, nanochemistry and genetic engineering.” One of the central tools Mokyr sees facilitating scientific advance is “blindingly fast search tools,” so that all of human knowledge is instantly available. Mokyr’s examples of future progress do not center on digitalization but rather involve fighting infectious diseases and the need for technology to reduce the environmental damage caused by excess fertilizer use and global warming. It is notable that innovations to fight local pollution and global warming involve fighting “bads” rather than creating “goods.” Instead of raising the standard of living in the same manner as the past two centuries of innovations that have brought a wonder of new goods and services for consumers, innovations to stem the consequences of pollution and global warming seek to prevent the standard of living from declining.

I believe the common assumption that future innovation can’t be forecast is wrong. Indeed, there are historical precedents of correct predictions made as long as 50 or 100 years in advance. An early forecast of the future of technology is contained in Jules Verne’s 1863 manuscript, Paris in the Twentieth Century, in which Verne made bold predictions about
Paris in 1960. In that early year, before Edison or Benz, Verne had already conceived of the basics of the 20th century. He predicted rapid-transit running on overhead viaducts, motorcars with gas combustion engines and streetlights connected by underground wires.

In fact, much of IR #2 should not have been a surprise. Looking ahead in the year 1875, inventors were feverishly working on turning the telegraph into the telephone, trying to find a way to transform electricity coming from batteries into electric light, trying to find a way of harnessing the power of
petroleum to create a lightweight and powerful internal combustion engine. The atmosphere of 1875 was suffused with “we’re almost there” speculation. After the relatively lightweight internal combustion engine was achieved, flight – humankind’s dream since Icarus – became a matter of time and experimentation.

Some of the most important sources of human progress over the 1870-1940 period were not new inventions at all. Running water had been achieved by the Romans, but it took political will and financial investment to
bring it to every urban dwelling place. A separate system of sewer pipes was not an invention, but implementing it over the interval 1870-1930 required resources, dedication and a commitment to using public funds for infrastructure investment.

A set of remarkable forecasts appeared in December 1900 in an unlikely medium: Ladies’ Home Journal. Some of the predictions were laughably wrong and unimportant, such as strawberries the size of baseballs. But enough were accurate in a page-long article to suggest that much of the future can be known. Among the more interesting forecasts:

• Hot and cold air will be turned on from spigots to regulate the temperature of the air just as we now turn on hot and cold water from spigots to regulate the temperature of the bath.

• Ready-cooked meals will be purchased from establishments much like our bakeries of today.

• Liquid-air refrigerators will keep large quantities of food fresh for long intervals.

• Photographs will be telegraphed from any distance. If there is a battle in China a century hence, photographs of the events will be published in newspapers an hour later.

• Automobiles will be cheaper than horses are today. Farmers will own automobile haywagons, automobile truck-wagons … automobiles will have been substituted for every horse-vehicle now known.

• Persons and things of all types will be brought within focus of cameras connected with screens at opposite ends of circuits, thousands of miles at a span. … [T]he lips of a remote actor or singer will be heard to offer words or music when seen to move.

the inventions that are now forecastable

Despite the slow growth of TFP recorded
since 2004, commentators view the future of technology with great excitement. Economist Nouriel Roubini writes, “There is a new perception of the role of technology. Innovators and tech CEOs both seem positively giddy with optimism.” For their part, Erik Brynjolfsson and Andrew McAfee, authors of The Second Machine Age, assert that “we’re at an inflection point” between a past of slow technological change and a future of rapid change. They remind us that Moore’s Law predicts endless exponential growth of the performance capability of computer chips – but they ignore the fact that chips fell behind the predicted pace of Moore’s Law after 2005. Exponential increases in computer performance will continue, but at a slower rate than in the past, not at a faster rate.

Since 2004, the pace of innovation in general has been slower, but it has certainly not been zero. When we examine the likely innovations of the next several decades, we are not doubting that many will occur, but rather are assessing them in the context of the past two decades of fast (1994-2004) and then slow (2004-2014) growth in TFP. The advances forecast by Brynjolfsson and McAfee can be divided into four main categories: medical; small robots and 3-D printing; big data; and driverless vehicles. It is worth examining the potential contribution of each toward boosting TFP growth back to the pace achieved in the late 1990s.

Medical and pharmaceutical advances. The most important sources of longer life expectancy in the 20th century were achieved in the first half of that century, when life expectancy rose at twice the rate it did in the second half. This was the interval when infant mortality was conquered and life expectancy was extended by the dissemination of the germ
theory of disease, the development of an antitoxin for diphtheria, and the near-elimination of contamination of milk and meat as well as the near-elimination of air- and water-distributed diseases through the construction of urban sanitation infrastructure. Many of the current basic tools of modern medicine were developed between 1940 and 1980, including antibiotics, the polio vaccine, procedures to treat coronary heart disease and the basic tools of chemotherapy and radiation to treat cancer – all advances that contribute to productivity growth. Medical technology has not ceased to advance since 1980, but rather has continued at a slow and measured pace along with life expectancy. It is likely that life expectancy will continue to improve at a rate not unlike that of the past few decades.

There are new issues, however. As described by Jan Vijg, an eminent geneticist, progress on physical disease and ailments is advancing faster than on mental disease, which has led to widespread concern that there will be a steady rise in the burden of care of elderly Americans who are afflicted by dementia. Pharmaceutical research has hit a brick wall of rapidly increasing costs and declining benefits, with the number of major drugs approved in each successive pair of years declining over the past decade (as documented by Vijg). Drugs are being developed that will treat esoteric types of cancer at costs that no medical insurance system can afford. The upshot is that over the next few decades, medical and pharmaceutical advances will doubtless continue at a modest pace, while the increasing burden of Alzheimer’s care will be a significant contributor to the increased cost of the medical care system.

Small robots and 3-D printing. Industrial robots were introduced by General Motors in 1961. By the mid-1990s, robots were welding automobile parts and replacing workers in the
lung-killing environment of the automotive paint shop. Until recently, however, robots were large and expensive and needed to be separated from humans for reasons of safety. The ongoing reduction in the cost of computer components has made ever-smaller and increasingly capable robots feasible. Former DARPA executive Gill Pratt enumerates eight “technical drivers” that are advancing at steady exponential rates. Among those relevant to the development of more capable robots are exponential growth in computer performance, improvements in electromechanical design tools and electrical energy storage. Others on his list involve more general capabilities of all digital devices, including exponential expansion in local wireless communications, in the scale and performance of the Internet, and in data storage.

As an example of the effects of these increasing technical capabilities, inexpensive robots suitable for use by small businesses have been developed and brought to public attention by a 2012 segment on the TV program 60 Minutes featuring Baxter, a $25,000 robot. The appeal of Baxter is that it is cheap and can be reprogrammed to do a different task every day. But these attributes of small robots are no different in principle from the distinctive advances in machinery dating back to the textile looms and spindles of the early British industrial revolution.

Most workplace technologies are introduced with the intention of substituting machines for workers. Because this has been going on for two centuries, why are there still so many jobs? Why in mid-2015 was the U.S. unemployment rate close to 5 percent instead of 20 or 50 percent? MIT economist David Autor has posed this question as well as answered it: machines, including futuristic robots, not only substitute for labor, but also complement it:

Most work processes draw upon a multifaceted set of inputs: labor and capital; brains and brawn; creativity and rote repetition; technical mastery and intuitive judgment; perspiration and inspiration; adherence to rules and judicious application of discretion. Typically, these inputs each play essential roles; that is, improvements in one do not obviate the need for the other.

The complementarity between robots and human workers is illustrated by the cooperative work ritual that is taking place daily in Amazon.com warehouses, often cited as a frontier example of robotic technology. Far from replacing all human workers, the Kiva robots in these warehouses do not actually touch any of the merchandise, but are limited to lifting shelves containing the objects and moving the shelves to the packer, who lifts the object off the shelf and performs the packing operation by hand. The tactile skills needed for the robots to distinguish the different shapes, sizes and textures of the objects on the shelves are beyond the capability of current robot technology.

Other examples of complementarities include ATMs, which, as already noted, have been accompanied by an increase, rather than a decrease, in the number of bank branches, and the bar code retail scanner, which works along with the checkout clerk, with little traction thus far for self-checkout lanes.

Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory, summarizes some of the limitations of the robots developed to date: “the scope of the robot’s reasoning is entirely contained in the program. … Tasks that humans take for granted – for example, answering the question, ‘Have I been here before?’ – are extremely difficult for robots.” Further, if a robot encounters a situation that it has not been specifically programmed to handle, “it enters an error state and stops operating.”

Surely multi-function robots will be developed, but it will be a long and gradual process before robots outside of the manufacturing and wholesaling sectors become a significant factor in replacing human jobs in the service, transportation or construction sectors. And it is in those sectors that the slow pace of productivity growth is a problem.

3-D printing is another revolution described by the techno-optimists. Its most important advantage is the potential to speed up the design of new products. New prototypes can be designed in days or even hours rather than months, and can be created at relatively low cost, lowering one major barrier to entry for entrepreneurs. New design models can be simultaneously produced at multiple locations around the world. 3-D printing also excels at one-off customized operations, such as the ability to create a crown in a dentist’s office instead of having to send out a mold, thereby reducing the process of adding a dental crown from two office visits to one. 3-D printing may thus contribute to productivity growth by reducing certain inefficiencies and lowering barriers to entrepreneurship, but these are unlikely to be huge effects felt throughout the economy. 3-D printing is not expected to have much impact on mass production and thereby on how most U.S. consumer goods are produced.

Big data and artificial intelligence. The optimists’ case lies not with physical robots or 3-D printing but with the growing sophistication and human-like abilities of computers – often described as artificial intelligence. Brynjolfsson and McAfee provide many examples to demonstrate that computers are becoming sufficiently intelligent to supplant a growing share of human jobs. They wonder “if automation technology is near a tipping point, when machines finally master traits that have kept human workers irreplaceable.”

Thus far, it appears that the vast majority of big data is being analyzed within large corporations for marketing purposes. The Economist reported recently that corporate IT expenditures for marketing were increasing at three times the rate of other corporate IT expenditures. The marketing wizards use big data to figure out what their customers buy, why they change their purchases from one category to another and why they move from merchant to merchant. With enough big data, Corporation A may be able to devise a strategy to steal market share from Corporation B, but B will surely fight back with an onslaught of more big data.

An excellent current example involves the large legacy airlines with their data-rich frequent flyer programs. The analysts at these airlines are constantly trolling through their big data trying to understand why they have lost market share in a particular city or with a particular demographic group of travelers. Every airline has a “revenue management” department that decides how many seats on a given flight on a given day should be sold at cheap, intermediate and expensive prices. Vast amounts of data are analyzed, and computers examine historical records, monitor day-by-day booking patterns, factor in holidays and weekends and come out with an allocation. But at JetBlue, a medium-size airline, 25 employees are still required to monitor the computers. And the director of revenue management at JetBlue describes his biggest surprise since taking over his job as “how often the staff has to override the computers.”

Marketing is just one form of artificial intelligence that has been made possible by big data. Computers are working in fields such as medical diagnosis, crime prevention and loan approvals. In some cases, human analysts are replaced. But often the computers speed up a process and make it more accurate while
working alongside human workers. New software allows consumer-lending officers to “know borrowers as never before, and more accurately predict whether they will repay.”

Vanguard and Charles Schwab have begun to compete with high-priced human financial advisers by offering “robo-advisers” – online services that provide automated investment management via software. They use computer algorithms to choose assets consistent with the client’s desired allocation at a cost that is a mere fraction of the fees of traditional human advisers. But this application of artificial intelligence has not yet
made much of a dent in advising high-net-worth individuals. It has been estimated recently that the combined assets under management by robo-advisers still amount to less than $20 billion, against $17 trillion managed by flesh-and-blood advisers (barely one-tenth of 1 percent).

Advanced search technology and artificial intelligence are indeed happening now, but they are nothing new. The quantity of electronic data has been rising exponentially for decades without pushing TFP growth out of its post-1970 lethargy, except for the temporary productivity revival period of 1994-2004. The sharp slowdown in productivity growth
in recent years has overlapped the introduction of smartphones and iPads, which consume huge amounts of data. These sources of innovation have disappointed in what counts in the statistics on productivity growth: their ability to boost output per hour in the American economy.

Driverless cars. The enthusiasm of techno-optimists for driverless cars leaves numerous issues unanswered. As pointed out by David Autor, the experimental Google car “does not drive on roads” but rather proceeds by comparing data from its sensors with “painstakingly hand-curated maps.” Any deviation of the actual environment from the preprocessed maps, such as a road detour or a crossing guard in place of the expected traffic signal, causes the driving software to blank out and requires instant resumption of control by the human driver. At present, tests of driverless cars are being carried out on multilane highways, but test models so far are unable to judge when it is safe to pass on a two-lane road or to navigate winding rural roads in the dark.

Even if the technology can be perfected, it is unclear how much it can raise productivity. An important distinction here is between cars and trucks. People are in cars to go from A to B, much of it for essential aspects of life, such as commuting or shopping. Thus the people must be inside the driverless car to achieve their objectives; the additions to consumer surplus of being able to commute without driving are relatively minor. Instead of listening to the current panoply of options, including Bluetooth phone calls, radio news or Internet-provided music, drivers will be able to look at computer screens, read books or keep up with e-mail.

The use of driverless cars is predicted to reduce the incidence of automobile accidents,
continuing the steady decline in automobile accidents and fatalities that has been occurring for decades. Driverless car technology may also help to foster a shift from nearly universal ownership to widespread car-sharing in cities and perhaps suburbs, leading to reductions in gasoline consumption, air pollution and the amount of land devoted to parking – all of which should have positive effects on quality of life. But none of this will have much impact on productivity growth.

That leaves the advantages offered by driverless trucks. This is a potentially productivity-enhancing innovation, albeit within the small slice of U.S. employment consisting of truck drivers. However, driving from place to place is only half of what many truck drivers do. Those driving Coca-Cola and bread delivery trucks do not just stop at the back loading dock and wait for a store employee to unload the goods. The drivers are responsible for loading the cases of Coke or the stacks of loaves onto dollies and manually placing the goods on the store shelves. Remarkably, in this late phase of the computer revolution, almost all placement of individual product cans, bottles and tubes on retail shelves is by humans rather than robots. Thus, driverless delivery trucks will not save labor unless the tasks are reorganized so that unloading and placement of goods from the driverless trucks are taken over by workers at the destination location.
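The logic of that last point can be made explicit with a stylized calculation. The numbers below are hypothetical placeholders of mine (the article supplies none), but the structure mirrors its argument: automating only the driving half of a small occupation yields a small economy-wide saving.

```python
# Stylized upper bound on economy-wide labor savings from driverless trucks.
driver_share_of_employment = 0.01  # hypothetical: drivers as a share of all hours worked
driving_share_of_tasks = 0.5       # "driving ... is only half of what many truck drivers do"
saving = driver_share_of_employment * driving_share_of_tasks
print(f"upper-bound saving: {saving:.1%} of total labor hours")  # 0.5%
```

With different assumed shares the level shifts, but the saving stays well under 1 percent of total hours.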

* * *

The problem created by the computer age is not mass unemployment but the gradual disappearance of good, steady, mid-level jobs that have been lost not just to robots and algorithms but to globalization and outsourcing, together with the concentration of job growth in routine manual jobs that offer relatively low wages.


institute news

Back-to-School Special

In 2016, the Institute breaks new ground in partnership with the World Bank’s International Finance Corporation and George Washington University. Recognizing the urgent need for more efficient capital markets in developing countries, the three institutions have created a nine-month graduate-level program to train regulators and local market organizers. Starting this fall, a group of up-and-coming professionals from African countries will come to Washington for a semester studying finance, followed by an equal amount of time getting real-world experience. The goal is to balance academic rigor with practical training. “Our dream is that in 10 years’ time, there will be a cohort of capital-market leaders in Africa who know one another, and help each other and the region,” explains Staci Warden, head of the Center for Financial Markets.


Viva P4C

The latest Partnering for Cures conference, held in New York in November, brought together some 800 disruptive thinkers who are engaged in transforming the medical research system. Among the highlighted topics: the future of venture philanthropy, keys to successful R&D partnerships and building smarter patient registries. The meeting examined hot-button issues with a series of panel discussions, workshops and roundtables, and hosted more than 1,000 connections through its unique partnering system. One new feature: mashup conversations between game-changers across industries, including FasterCures’ Margaret Anderson with FDA Deputy Commissioner Robert Califf,
philanthropist and Napster founder Sean Parker with Rep. Fred Upton (sponsor of the 21st Century Cures Act), and Richard Pops (CEO of biopharmaceutical company Alkermes) with biotech journalist Luke Timmerman.

Back on Top

The Institute’s Best-Performing Cities Index 2015 highlights the sizzling strength of the U.S. tech economy. For two decades, Institute researchers have taken an annual x-ray of the strength of metropolitan economies across the country. This year, the top slot was taken by the unofficial capital of Silicon Valley, San Jose, with San Francisco a close second. Overall, California metros secured six of the top 25 slots, the most of any state. In comparison, Texas had only three in the top 25, down from seven the year before – testament to the slowing energy economy. The report illuminated how business spending on technology products and services is shaping the urban landscape: “The softer, creative side of high tech is spurring a renewal of many urban cores. Look to San Francisco, Seattle, Denver and even New York to see the extent of this phenomenon.” Download the report free from the Milken Institute website.


lists

Ants and Grasshoppers

National savings rates are a rough measure of the degree to which countries are deferring consumption today in favor of consumption tomorrow. The World Bank has refined the measure, adding education outlays to gross savings, then netting out the depreciation of existing physical capital, the depletion of non-renewable resources and ongoing damage to the environment. Here, I’ve selected three high-income countries, four middle-income emerging-market countries and the whole Sub-Saharan region for comparison. Some of the results are unsurprising. China and India save a whole lot, while Germany tops the United States in thrift. But others are striking:

• Russia is only sustaining consumption by depleting energy wealth.

• Japan, once the savings champ of Asia, has dramatically retreated as workers retire and young families eschew reproduction.

• Judging by its anemic savings rate, Brazil’s long-term growth prospects are likely to remain deeply problematic.

• While sub-Saharan Africa can no longer be written off as a disaster in slow motion, resource depletion and environmental damage are still taking a heavy toll.

— Peter Passell

PERCENT OF GROSS NATIONAL INCOME

                                 U.S.   JAPAN   GERMANY   CHINA   INDIA   RUSSIA   BRAZIL   SUB-SAHARAN AFRICA
Gross savings                    17.1    21.0      25.2    51.5    32.2     25.2     13.9                 23.8
Education expenditure             4.8     3.3       4.8     1.8     3.1      3.5      5.6                  3.6
Less:
  Consumption of fixed capital   15.5    21.0      17.4    18.2     9.9      5.1     12.3                  8.6
  Energy depletion                0.8     0.0       0.1     1.9     1.5     10.6      1.8                  5.7
  Mineral depletion               0.1     0.0       0.0     1.4     0.5      0.7      1.1                  1.6
  Net forest depletion            0.0     0.0       0.0     0.0     1.2      0.0      0.8                  1.9
  CO2 damage                      0.3     0.2       0.1     1.2     1.3      1.0      0.2                  0.5
  Air pollution damage            0.2     0.1       0.2     0.4     1.1      0.4      0.1                  1.1
Adjusted net savings              5.1     2.9      12.1    30.3    19.8     10.9      3.2                  6.7

note: Gross National Income (GNI) = GDP + remittances from abroad
source: World Bank, Little Green Data Book 2015
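The table’s rows net out as the text describes. Here is a quick check of the arithmetic for the U.S. column (the variable names are mine, not World Bank terminology):

```python
# Adjusted net savings = gross savings + education spending
#   - consumption of fixed capital - resource depletion - environmental damage.
us = {"gross_savings": 17.1, "education": 4.8, "fixed_capital": 15.5,
      "energy": 0.8, "minerals": 0.1, "forest": 0.0, "co2": 0.3, "air": 0.2}
adjusted = (us["gross_savings"] + us["education"] - us["fixed_capital"]
            - us["energy"] - us["minerals"] - us["forest"] - us["co2"] - us["air"])
print(f"U.S. adjusted net savings: {adjusted:.1f}% of GNI")
```

This prints 5.0 percent against the table’s 5.1, the small gap reflecting rounding in the published inputs.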



