Regulation Spring 2018

Regulation / The Cato Review of Business and Government

TRUMP'S TARIFF: Why American Households Will Be the Losers

Will Oil States Protect or Alter Patent Law? P. 44 & P. 48
Why Pay Football Players But Not Kidney Donors? P. 12
How to Fix Dodd–Frank P. 32 & P. 38

SPRING 2018 / Vol. 41, No. 1 / $6.95

Economics with Attitude
"The best students in our program find our 'economics with attitude' contagious, and go on to stellar careers in research, teaching, and public policy." —Professor Peter Boettke, Director of the F. A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center

Mercatus Graduate Student Fellowships provide financial assistance, research experience, and hands-on training for graduate students in the social sciences.

PhD Fellowship: Train in Virginia political economy, Austrian economics, and institutional analysis, while gaining valuable practical experience to excel in the academy.

Adam Smith Fellowship: Are you in a PhD program at another university? Enhance your graduate studies with workshops in Hayekian political economy, public choice, and institutional economics.

MA Fellowship: Earn a master's degree in applied economics to advance your career in public policy.

Frédéric Bastiat Fellowship: Interested in policy? Complement your graduate studies with workshops on public policy analysis.

Learn more and apply at

Bridging the gap between academic ideas and real-world problems

Volume 41, Number 1 / Spring 2018





COMMERCE & TRADE
8 Putting 97 Million Households through the Wringer
By imposing "safeguard" tariffs, President Trump has delivered corporate welfare at the expense of Americans. By Pierre Lemieux

HEALTH & MEDICINE
12 If We Pay Football Players, Why Not Kidney Donors?
The risks are lower and the screening process more rigorous for kidney donors. By Philip J. Cook and Kimberly D. Krawiec

TRANSPORTATION
18 Who Should Pay for Infrastructure?
The most daunting impediment to efficient financing is public misperception. By Richard M. Bird and Enid Slack

ANTITRUST
24 The Return of Antitrust?
New arguments that American industries are harmfully concentrated are as dubious as last century's pre-Chicago claims. By Alan Reynolds

BANKING & FINANCE
32 Handicapping Financial Reform
Will President Trump and new Fed chair Jerome Powell share an ambitious vision of reform? By Charles W. Calomiris

BANKING & FINANCE
38 Regulating Banks by Regulating Capital
Instead of trying to block banks' ability to make foolish decisions, regulators should require the banks to have ample capital to pay off their creditors. By Michael L. Davis

INTELLECTUAL PROPERTY
44 The Patent System at a Crossroads
A looming Supreme Court decision could either continue or reverse the erosion of intellectual property rights. By Jonathan M. Barnett


DEPARTMENTS

For the Record
02 Prescription for Lower Drug Prices: More OTC Transitions By Sam Peltzman

Briefly Noted
04 Achieving Durable Success in the Fight for Deregulation By Sam Batkins and Ike Brannon

In Review
54 Free People, Free Markets Review by David R. Henderson
56 Basic Income Review by Greg Kaza
58 Locking Up Our Own Review by Dwight R. Lee
61 Pope Francis and the Caring Society Review by George Leef
63 America's Free Market Myths Review by Phil R. Murray
65 The Fatal Conceit Review by Pierre Lemieux
68 The Failed Welfare Revolution Review by David R. Henderson
70 The Chickenshit Club Review by Vern McKinley
72 WTF: An Economic Tour of the Weird Review by Dwight R. Lee
75 Straight Talk on Trade Review by Pierre Lemieux
77 Working Papers Reviews by Peter Van Doren

Final Word
80 Brother, Can You Legally Spare a Dime? By A. Barton Hinkle


48 Miles to Go before We Sleep Oil States is a fight over the use of administrative tribunals, not intellectual property rights. By Jonathan Stroud


Illustration by Keith Negley

Regulation, 2018, Vol. 41, No. 1 (ISSN 0147-0590). Regulation is published four times a year by the Cato Institute. Copyright 2018, Cato Institute. Regulation is available on the Internet. The one-year subscription rate for an individual is $20; the institutional subscription rate is $40. Subscribe online or by writing to: Regulation, 1000 Massachusetts Avenue NW, Washington, DC 20001. Please contact Regulation by mail, telephone (202-842-0200), or fax (202-842-3490) for change of address and other subscription correspondence.

2 / Regulation / SPRING 2018

Regulation

EDITOR
Peter Van Doren

MANAGING EDITOR
Thomas A. Firey

DESIGN AND LAYOUT
David Herbick Design

CIRCULATION MANAGER

Sam Batkins, Ike Brannon, Art Carden, Thomas A. Hemphill, David R. Henderson, Dwight R. Lee, George Leef, Pierre Lemieux, Phil R. Murray

EDITORIAL ADVISORY BOARD

Christopher C. DeMuth

Distinguished Fellow, Hudson Institute

Susan E. Dudley

Research Professor and Director of the Regulatory Studies Center, George Washington University

William A. Fischel

Professor of Economics, Dartmouth College

H.E. Frech III

Professor of Economics, University of California, Santa Barbara

Robert W. Hahn

Professor and Director of Economics, Smith School, Oxford University

Scott E. Harrington

Alan B. Miller Professor, Wharton School, University of Pennsylvania

James J. Heckman

Henry Schultz Distinguished Service Professor of Economics, University of Chicago

Andrew N. Kleit

MICASU Faculty Fellow, Pennsylvania State University

Michael C. Munger

Professor of Political Science, Duke University

Robert H. Nelson

Professor of Public Affairs, University of Maryland

Sam Peltzman

Ralph and Dorothy Keller Distinguished Service Professor Emeritus of Economics, University of Chicago

George L. Priest

John M. Olin Professor of Law and Economics, Yale Law School

Paul H. Rubin

Professor of Economics and Law, Emory University

Jane S. Shaw

Board Member, John William Pope Center for Higher Education Policy

Richard L. Stroup

Professor Emeritus of Economics, Montana State University

W. Kip Viscusi

University Distinguished Professor of Law, Economics, and Management, Vanderbilt University

Richard Wilson

Mallinckrodt Professor of Physics, Harvard University

Clifford Winston

Senior Fellow in Economic Studies, Brookings Institution

Benjamin Zycher

John G. Searle Chair, American Enterprise Institute

PUBLISHER

Peter Goettler

President and CEO, Cato Institute

Regulation was first published in July 1977 "because the extension of regulation is piecemeal, the sources and targets diverse, the language complex and often opaque, and the volume overwhelming." Regulation is devoted to analyzing the implications of government regulatory policy and its effects on our public and private endeavors.


Prescription for Lower Drug Prices: More OTC Transitions


In their recent Regulation article, David Hyman and Bill Kovacic discuss the pros and cons of having the U.S. Food and Drug Administration consider the effects of regulation on prices as well as on safety and efficacy. ("Risky Business: Should the FDA Pay Attention to Drug Prices?" Winter 2017–2018.) The cons they mention include lack of statutory authority and expertise.

One topic their article doesn't discuss is FDA regulation of the transition of drugs from prescription to over-the-counter (OTC) status. The FDA's statutory authority here is clear and its claim of expertise is complete. The effect on prices is also clear: every study of the matter shows substantial price reductions when drugs move to OTC. Of course, total cost—including the cost of physician visits and the value of the time and trouble of securing prescriptions—declines even further when the drug moves to OTC. With the pressure mounting for action to restrain drug prices, you might think that speeding up and broadening the OTC transition would be on the FDA's priority list or the priority list of its critics. But the topic is little discussed by anyone. This neglected area deserves more scrutiny.

The FDA has required prescriptions for certain drugs since the 1930s. The legal basis for this requirement is the agency's statutory authority to regulate drug labels. It decided that adequate instructions for consumer use could not be written for some drugs and that the words "For sale by prescription only" would meet the statutory labeling requirement. This history is important because the ostensible rationale is about consumer information and safety, and the FDA is supposed to weigh the adequacy of one sales channel or the other in meeting information and safety goals. In practice, however, it is the producer of a drug, not the FDA, that initiates the process for prescription-to-OTC switches. Why should the FDA wait for someone else to initiate this process?
The FDA claims competence to decide when an adequate consumer label can be written. I suggest the FDA should periodically review existing drugs for eligibility for OTC sales. I further suggest that when any prescription drug passes certain milestones—x million prescriptions sold over y years with a risk profile similar to, say, ibuprofen or aspirin—there should be a rebuttable presumption that the drug becomes OTC-eligible. It would then be up to the drug’s producer or producers to take advantage of the opportunity. Moving more drugs to OTC status is no free lunch, but it is as close to one as consumers are likely to get in the health care sector. And the FDA doesn’t need any new statutory authority or new kinds of expertise to make it happen.
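Peltzman's rebuttable-presumption rule can be sketched as a simple predicate. This is only an illustrative sketch in Python: he deliberately leaves the thresholds x and y unspecified, so the numbers below (and the risk-score scale) are hypothetical placeholders, not proposals from the letter.

```python
def otc_presumption(prescriptions_millions, years_on_market, risk_score,
                    x_millions=100, y_years=10, benchmark_risk=1.0):
    """Rebuttable presumption of OTC eligibility once a drug clears
    volume, tenure, and safety milestones.

    All thresholds are hypothetical placeholders; the letter leaves
    x and y open and names ibuprofen/aspirin as the risk benchmark
    (here normalized to 1.0).
    """
    return (prescriptions_millions >= x_millions
            and years_on_market >= y_years
            and risk_score <= benchmark_risk)

# A long-marketed, widely used, low-risk drug clears the presumption;
# it would then be up to its producers to pursue the OTC switch.
print(otc_presumption(150, 12, 0.5))   # True
print(otc_presumption(150, 3, 0.5))    # False (too new)
```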

—Sam Peltzman Ralph & Dorothy Keller Distinguished Service Professor of Economics Emeritus, Booth School of Business, University of Chicago

Cato at Your Fingertips
The Cato Institute's acclaimed research on current and emerging public policy issues–and the innovative insights of its Cato@Liberty blog–now have a new home: Cato's newly designed, mobile-friendly website. With over 2.2 million downloads annually of its respected Policy Analysis reports, White Papers, journals, and more–all of it free–the quality of Cato's work is now matched only by its immediate accessibility.

Visit today at and at





Achieving Durable Success in the Fight for Deregulation

SAM BATKINS is director of strategy and research at Mastercard. IKE BRANNON is a Cato Institute visiting fellow and president of Capital Policy Analytics. The views expressed in this article are their own.

By virtually any metric, President Trump's regulatory agenda has achieved nearly unprecedented results. Neomi Rao, the administrator of the Office of Information and Regulatory Affairs (OIRA), has carried out Trump's one-in, two-out executive order (EO 13771) to the letter, just as his supporters had hoped and detractors had feared.

The data suggest that the Barack Obama administration marked a high point for the regulatory state. In Obama's last year in the Oval Office, regulators published 116 major rules. (The government formally defines a major rule as one with an economic impact exceeding $100 million, which requires it be subjected to a formal cost–benefit analysis.) That is 16 more than the previous one-year record for major rules, which was also set by the Obama administration, in 2010. Rules issued under Obama imposed more than $870 billion in net present value costs according to estimates from the agencies themselves (compiled by the American Action Forum). These regulations required an estimated 580 million hours of paperwork for firms to comply with the new standards, the equivalent of 291,500 employees working full-time to comply with the new rules.

The Obama regulatory expansion was unprecedented, whether measured in cost, the number of major rules, or the number of billion-dollar rules. It was against this backdrop that Trump promised historic deregulation. He stated numerous times during the campaign that he would cut regulations "massively," even boasting of a 75% cut, albeit without specifying exactly what that meant. While that may have sounded like unrealistic campaign rhetoric, his administration has reduced the issuance of new regulations by more than 75%. In fact, the Trump administration issued just 33 major regulations in all of last year.

While this reduction in new regulation has been a welcome reprieve for U.S. business, it is worth asking whether the administration will actually oversee the repeal of swaths of regulation. Congress made some effort at this early last year when it rescinded 15 regulations via the Congressional Review Act (CRA). However, most of those were minor or had not yet taken effect, so no one could construe this as wholesale deregulation. What's more, the narrow Republican congressional majority may completely erode in 2019, which would obviate any additional use of the CRA. The courts may also soon start considering some of the Trump administration's delayed, withdrawn, and postponed regulations, which could delay or altogether stop further deregulation efforts. Indeed, several nongovernmental organizations announced their intent to challenge every step of the Trump deregulatory agenda in court rather than allow an unwinding of the Obama regulatory legacy.

If Congress and the courts are unable or unwilling to acquiesce, the Trump administration's ability to craft a legacy of deregulation will be limited. What's more, any progressive president who succeeds Trump could immediately undo EO 13771 and reinstate the withdrawn regulations. In sum, to create anything as durable as previous successes in deregulation, Congress must play a greater role in regulatory oversight through both substantive and procedural reforms. Trump may have won 2017, but his regulatory legacy will be determined by what he next ushers through Congress.

The historical nature of the regulatory slowdown and retrenchment of 2017 cannot be overstated. When President Obama left office, he left 116 new major rules in the regulatory pipeline, expecting that they would be implemented under a Hillary Clinton administration. Yet agencies under Trump finalized only the 33 aforementioned major regulations in all of 2017, eight of which were either deregulatory in nature or published by independent agencies (e.g., the Federal Reserve or the Consumer Financial Protection Bureau) outside of the administration's control.

The aggregate compliance costs imposed by all new regulations are a better metric for what actually transpired. In 2016 the Obama administration promulgated more than $164 billion in regulatory costs. Trump's deregulatory agenda reduced compliance costs by more than $900 million via his EO 13771 alone. The rules rescinded under the CRA add another $3 billion in regulatory cost reductions. This is unprecedented to be sure, but these savings are less than 2% of the estimated cost of the final rules promulgated in 2012 alone.

One-in, two-out / President Trump's one-in, two-out directive outraged progressives and initially left conservatives skeptical. The potential for gaming such a regime is high, and absent some sort of comprehensive retrospective cost–benefit analysis there is no reason to think that the order would achieve an optimal reform. Because so few new regulations were issued, there was little two-out repeal of old rules. For instance, during the last nine months of 2017 (a period that avoids the rules rescinded when President Trump entered office), OIRA concluded review of only 46 final rules. As a comparison, in the last nine months of 2009 OIRA released 205 rules, and it released 207 in April–December 2001.

The Trump deregulatory agenda thus far entails issuing as few regulations as legally required, while concomitantly paring some of the controversial rules from the Obama administration—most notably the Clean Power Plan, the overtime expansion, and rules on hydraulic fracturing. Aside from the 15 CRA measures, this hardly counts as durable regulatory reform. The Trump administration appears to understand this reality.
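A quick back-of-envelope check puts these savings in scale. The sketch below (in Python) uses only the dollar figures cited above; the $195 billion floor on 2012 costs is our arithmetic implication of the "less than 2%" comparison, not a figure reported in the article.

```python
# All amounts in $ millions, as cited in the text.
eo_13771_savings = 900    # compliance-cost reduction under EO 13771
cra_savings = 3_000       # rules rescinded via the CRA
total = eo_13771_savings + cra_savings   # $3.9 billion in all

# If $3.9 billion is "less than 2% of the estimated cost of the
# final rules promulgated in 2012 alone," the 2012 rules must have
# cost more than total / 0.02 (integer arithmetic: * 50).
implied_2012_floor = total * 50
print(total, implied_2012_floor)  # 3900 195000  (i.e., > $195 billion)
```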


With its past two Unified Agendas, the administration essentially transformed a planning document into a manifesto for its domestic policy agenda, with an entire section devoted to deregulatory actions. The administration has already repealed at least five major rules, and an additional 22 deregulatory actions are on track to be completed by the end of 2018. While a few billion dollars in deregulatory measures might not move the needle on economic growth, the cumulative effort of the administration to reduce the regulatory burden is the most significant action along these lines since the Ronald Reagan era.

The intent of the one-in, two-out edict was to ensure that regulatory compliance costs did not increase in 2017. On this the administration succeeded, managing to reduce costs by nearly $1 billion. For 2018 the administration aims to reduce the cumulative regulatory burden by $9.8 billion, which would entail agencies reducing their regulatory impact by significant margins. For instance, the Departments of Defense and Energy must each reduce costs by roughly $1 billion, the Department of Labor by $1.9 billion, and Interior must slash nearly $3 billion in compliance costs. This effort constitutes more than an ad hoc reform. Of the 579 actions listed in the Unified Agenda, 448—fully 77%—are deregulatory according to the American Action Forum. However, the next administration can quickly reverse course, undo much of this deregulation, and go several steps beyond President Obama's regulatory expansion.

Administration-only action / There is much President Trump can do on federal regulation between now and his 2020 reelection fight. Rao recently noted that for every new regulatory action, agencies have pledged to promulgate three deregulatory actions—an ambitious goal to be sure, but no more so than the goal of reducing overall regulatory costs was in 2017. It is hard to see Congress or the courts moving to stop such efforts, at least in 2018.

Environmental Protection Agency Administrator Scott Pruitt's work has received the bulk of media attention so far and widespread ire from environmentalists. But the EPA has reduced regulatory compliance costs by only $127 million since he joined the administration. The agency does have 41 active deregulatory actions in the pipeline according to the latest Unified Agenda, and three are economically significant, including the repeal of fracking rules. The biggest deregulatory prize to date may come from full repeal of the Clean Power Plan. The administration intends to issue a final rule to that effect in October 2018, but that is just the beginning of the battle to repeal the carbon standards. It is likely that the courts will be drawn into the debate, stretching this fight out past the 2020 election at a minimum.

The Interior Department has 43 deregulatory actions planned, but only two are economically significant and both are vestiges of the Obama administration. The Department of Labor has 22 deregulatory actions planned, with just two economically significant rules. For perspective, the Obama administration issued nine economically significant labor regulations in 2016. Again, while such deregulatory actions may be unprecedented, they are a drop in the bucket and can easily be undone by Trump's successor.

Perhaps far-reaching regulatory reform could truly take shape with a more proactive OIRA. For instance, the administration could require independent agencies to submit regulations for review, which would ensure that all federal agencies' rules are subject to a rigorous interagency review process and cost–benefit analysis. Such a step would prove more durable than nearly any other regulatory action the administration has taken thus far; a future president may find it tricky to remove such an obvious "good government" requirement. Such a reform would require a significant increase in OIRA staff, which would entail a bona fide investment by the Office of Management and Budget. That would come at a time when the White House has pledged its fealty to reducing federal employment, even if these staffers could ultimately help to reduce regulatory costs.

There is some evidence that OIRA is now demanding greater transparency from independent agencies. Administrator Rao has reminded the independent agencies that under the CRA her office has the responsibility of determining the status of major rules, which could ultimately mean that all actions eventually wind up at OIRA for at least some level of scrutiny. This could lay the groundwork for more robust review later in the Trump administration.

WHAT CONGRESS SHOULD DO ON REGULATORY REFORM

However, if the Trump administration really wants to reshape the regulatory state, it has to do more than just unwind some of the most controversial parts of the Obama regulatory agenda or make administrative changes to rulemaking procedures. Changes would need to be made to the statutes underlying regulatory regimes, as well as to how federal agencies carry out rulemaking. Only Congress can accomplish that.

Congress can tackle substantive reform in the same manner that it vastly expanded the regulatory state during the last decade: by reforming financial sector regulations, deregulating the health care marketplace, and modernizing the National Environmental Policy Act and sundry environmental regulations. Broadly speaking, financial services (e.g., Dodd–Frank), health care (e.g., the Affordable Care Act), and environmental controls (e.g., controversial EPA actions taken without Congress) have added the most to the regulatory burden over the last 10 to 15 years, and those areas offer the greatest opportunity for reform. The Trump administration, to a degree, is already proffering ideas for reform in these sectors, but they are piecemeal, subject to intense litigation, and likely to be unwound by a progressive successor. However, given the political will (and the votes), Congress could take the necessary steps not just to unwind some of the progressive achievements of the last decade, but also to craft a limited-government vision of how regulation should govern these sectors and the American people broadly.

Congress currently has legislation addressing reforms in all three of the abovementioned areas. None of that legislation is as magisterial as, say, the airline deregulation of the late 1970s, but these bills are resilient paths forward nonetheless. For example, there is bipartisan support to reform Dodd–Frank by drastically raising the threshold for Systemically Important Financial Institution (SIFI) status and modernizing financial rules by ensuring that thousands of banks are exempt from the most onerous aspects of Dodd–Frank. This doesn't undo the law, but it is a substantive step forward that Congress and the administration can take in 2018, and its reforms would persist past 2021.

In terms of process, there is a host of legislation that the House has already passed, but that is now stalled in the Senate, that would revolutionize how agencies implement regulations. The effects of these procedural changes would be felt for decades and arguably have a greater effect than some minor substantive changes. The most popular measure, and for some reason one of the most controversial, is the proposed Regulatory Accountability Act. It would institute a suite of reforms of the administrative process, which hasn't been seriously altered since Harry Truman was president. For instance, the bill would force agencies to allow public participation far earlier in the rulemaking process, primarily through greater use of advance notices of proposed rulemaking. In addition, it would require agencies to choose the lowest-cost policy option that achieves the regulatory objective, and it would greatly pare back the use of interim final rules, which agencies frequently abuse.

There are other ideas we would like to see this Congress consider, such as expanding the scope of regulations subject to cost–benefit analysis, removing the responsibility for performing cost–benefit analysis from the regulatory agency itself and assigning it to OIRA, and making OIRA a completely independent agency akin to the Congressional Budget Office in order to free it from its political overseers in the OMB. However, we fear that this Congress, with one eye firmly on the 2018 election, has concluded its foray into pruning back the regulatory state and will leave further reforms to a future Congress.

The substantive and procedural reforms outlined above are hardly radical. They are achievable in this Congress and some have already passed in the House. Rather than progress through administration-only action, these reforms would cover the entire administrative state (procedural) and address some of the most overregulated parts of the economy (substantive). To some observers, these efforts might appear trivial in light of the regulatory surge of the last decade, but some perspective is necessary. If passed, they would be the most consequential regulatory reforms of the last decade. If that's what Congress and the administration want to achieve in 2018, the table is set.

CONCLUSION

The extent of the Trump administration's deregulatory success has been a surprise to many. Few anticipated that Trump would issue one-quarter the number of regulations of his predecessor while concomitantly pulling back proposed rules and even repealing some existing rules. Although conservatives and libertarians are generally pleased, progressives might accurately perceive that this can't go on much longer. It's likely the courts will soon enter the fray, leaving the rest of any fight up to the lawyers, not the bureaucrats.

What matters most for a durable legacy of reform is not the pace of new regulations issued, but how the Trump administration and Congress change regulatory culture and practice, both through substantive reforms and through process. For the administration, this means extending OIRA's oversight to independent agencies, increasing its staff, and invalidating the portions of previously issued guidance documents that go beyond what the underlying regulations explicate. For Congress, this means fundamental reform of health care, financial regulation, and environmental rules, in addition to sustainable regulatory process reforms.

Oversight of the regulatory state can be difficult. Regulatory proposals can be complicated, and the appetite for Congress to weigh in on such narrow matters is fleeting at best. Establishing greater oversight authority with the power to push back on all regulations may be the best we can do.

“It isn’t often that a group of people get to claim that they have changed the world of thinking... and PERC has done that.” — Kimberley A. Strassel, Wall Street Journal Editorial Board


OUR VISION is a world where free market environmentalism is the default approach to conservation. To make this vision a reality, our focus will always remain on results. Through high-quality research, outreach, and applied programs, our ideas are changing the world of thinking.

Learn more at

COMMERCE & TRADE

Putting 97 Million Households through the Wringer
By imposing "safeguard" tariffs, President Trump has delivered corporate welfare at the expense of American consumers.



n January, President Trump announced that he is imposing customs tariffs of up to 50% on imported residential washing machines and 30% on solar panels and modules. Authority for those actions comes from the so-called “safeguard” provisions of Section 201 of the Trade Act of 1974. Such safeguard actions do not require any claim of “unfair” trade practice such as dumping or subsidization, but only a finding of “serious injury or the threat thereof” to domestic industry. These measures are meant to give temporary “import relief,” allowing domestic manufacturers “to make a positive adjustment to import competition,” as Section 201 states. Consider specifically the washing machine tariff (on which this article will focus). It is scheduled to be in force for three years, with the initial 50% tariff rate declining by a fifth each year. Aimed mainly at South Korean producers Samsung and LG, the safeguard action will mainly affect workers in Thailand and Vietnam, where the two companies currently manufacture most of their washing machines, but (according to information available at the time of this writing) workers in China and Mexico will also be affected. TARIFFS AND PRICES

Foreign competition in domestic appliances, mainly from Asia, has been a boon for American consumers. Since 2001, when PIERRE LEMIEUX is an economist affiliated with the Department of Management Sciences at the Université du Québec en Outaouais. His new book, What's Wrong with Protectionism? is forthcoming from the Mercatus Center at George Mason University.

China joined the World Trade Organization (WTO), the average price of appliances purchased by American residents has decreased by 22%, while the total Consumer Price Index increased by 41%. Anybody who has bought home appliances over this period has seen the difference in his wallet. Now, the new tariff will benefit a relatively small group of workers and corporate shareholders at the expense of American households. According to IBISWorld, a market research firm, some 2,400 American workers are employed in manufacturing clothes washers and dryers (estimate for 2016). They mainly work at a Whirlpool plant in Clyde, OH and a GE Appliances plant in Louisville, KY. The benefited shareholders hold stock in Whirlpool, which is listed on the New York Stock Exchange, and Haier Group, a Chinese company that purchased GE Appliances in early 2016 and is listed on the Shanghai Stock Exchange. The new tariff will raise washing machine prices in the United States. Indeed, LG quickly announced a $50 price increase. Those increases won’t be limited to LG and Samsung, though; reduced price competition pushes up the price on both the targeted goods and their competitors. There can be only one price on a market, account being duly taken of differences in features and in quality as evaluated by consumers. One way to see this is as follows. If your competitors charge more because they have to pay a new tax, you will spend to increase your own production up to the point where the higher price justifies your higher (marginal) cost. This is why a tariff increases domestic production while it reduces imports. Of course, a company may want to increase market share, but it will not indefinitely



sacrifice profits to do so; and when imports are reduced, domestic producers can increase both market share and profits. Look at it from another viewpoint. Domestic producers of washing machines had to cut their prices in order to meet foreign competition. As this competition softens, they can raise their prices. Economic theory shows that, in the general case (a world market with elastic supply and a relatively small country that adopts the tariff), a tariff is paid not by foreign producers but by domestic consumers, as both foreign and domestic producers increase their prices by the full amount of the tariff. Lower supply calls for higher prices. Although foreign producers have to pay the tariff to the Treasury, payment will be offset by the higher prices they charge to consumers. In the real-world case of Trump’s tariff on washing machines, foreign producers will probably absorb some of the tariff—that is, they will not be able to add all of it to the price they charge American consumers. The reason for this is that the American market for washing machines is a large part of the world market, and lower American demand on this market will push the world price down. Foreign producers will not be able to replace
American consumers with other consumers around the world and will therefore be willing to accept lower prices on their American sales—which they can do as they produce less and move down their supply curves. This special case of a large country and market is referred to as the "optimal-tariff" or "terms-of-trade" argument in the economic literature.

But there is no doubt that American consumers will pay part of the tariff. Prices will increase or, equivalently, will not decrease as much as they otherwise would have. Goldman Sachs, an investment bank, forecasts that the price of washing machines will increase by 8%–20% during the first year. This is probably an underestimate, and it would not be surprising if prices increased by at least 25%.

POOR ECONOMICS

Some 97 million American households have washing machines. Households that replace or add those appliances in the coming years will pay a large part, if not most, of the new tariff. The U.S. Trade Representative’s press release on the tariff explains that “the Trump Administration will always defend American workers, farmers, ranchers, and businesses in this regard.” The
press release did not mention American consumers of washing machines, who constitute most of the U.S. population.

Technically, the safeguard measures take the form of a "tariff-rate quota." The first 1.2 million imported washing machines (less than 40% of current imports) will be charged a 20% tariff; imports above that quota will be charged 50%. (The tariff also extends to parts for washing machines. The first 50,000 parts in the first year will not be subjected to the tariff, rising to 90,000 in the third year.) A quota is equivalent to a tariff in that they both reduce the quantities imported and lead to higher prices. Under a quota, however, the foreign exporters receive the "tax" revenue from the "tariff" because they will be charging higher prices for the now-scarcer good. Thus, part of the tariff-rate quota on washing machines amounts to a transfer from American consumers to the foreign producers.

Understanding why this provision was included in the Trump action helps to clarify the nature of protectionism. Tariffs are simply a way to transfer money from some people to others, to rob Peter to pay Paul. (See "Patriotism as Stealing from Each Other," Winter 2017–2018.) The transfer is generally from consumers to domestic producers, but political reasons sometimes require government to tweak the opaque redistribution. For example, foreign exporters may be less fervent in their opposition if they get part of the loot; a tariff-rate quota is a good way to accomplish this. Foreign exporters are also more likely to invest in U.S. factories if they can import their parts duty free; this gives politicians a bit of job creation to tout and some ribbon-cutting publicity.

Still, Samsung—which recently began producing washing machines at a new $380 million plant in Newberry, SC—is not happy. Neither are South Carolina Republican Gov. Henry McMaster and other state leaders, who criticized the tariff publicly.
South Carolina voters went for Trump by a 55%–41% margin over Democratic rival Hillary Clinton in 2016. Officials likely are also unhappy in Tennessee, where LG is building a washing machine plant in Clarksville. Tennessee voted for Trump 61%–35% over Clinton.

Whirlpool is the company that, in May 2017, petitioned the U.S. International Trade Commission (ITC) to recommend safeguard tariffs to the president. Whirlpool is a large international firm, headquartered in the United States, with worldwide revenues of $21 billion and net earnings of $928 million (in 2016). According to an IBISWorld report on the American market for clothes washers and dryers, Whirlpool's market share is 75% of domestic production and 40% of the whole domestic market (of which 53% is served by domestic producers). In its January 25th issue, The Economist relates an interesting fact about Whirlpool: "When, in 2006, it merged with Maytag, a rival, it quelled concerns about its high market share by pointing to competition from abroad." Now apparently incapable of competing with Samsung and LG on the market, the company has been complaining of its declining market share and protesting to the U.S. government since 2012. In its safeguard petition to the ITC, the company shamelessly complains that import competition pushes prices down and reduces return on investment.

The company's rent-seeking should not be obscured by its profession of faith in "free and fair trade" and "healthy competition," or by its pretense of "social responsibility." In a section of its website called "Free and Fair Trade," the company claims to support "an open global system that benefits our consumers, employees, and the entire home appliance industry." The last-listed beneficiary betrays the real goal of the exercise. One cannot pursue the dual goals of free competition and anticompetitive privilege. "Fair trade" is a clever way of saying that one does not want economic freedom. In this context, "fair" means only what the special interests say it does. Adding "fair" to "free trade" is like trying to mix oil and water. The Office of the U.S. Trade Representative didn't use the term "free trade" anywhere in its press release, but it does speak of "fair and sustainable trade." The use of the term "sustainable" is fascinating for an administration that rejects the concept when it comes to the environment. It appears that any terminology that can advance a political agenda is worth using.

Whirlpool writes that its "community focus" is important because "the people who make our appliances are also the people who use our appliances." This is false, of course. The people who make Whirlpool appliances are only a tiny fraction of those who use them. The company has 25,000 employees in the United States, fewer than 5% of whom work in washer manufacturing (extrapolating from IBISWorld's numbers). But even using the 25,000 figure, those employees' households amount to less than 0.03% of U.S. households that have washing machines, and to about 0.01% of households that have Whirlpool-made ones.

COSTS AND BENEFITS

The tariff imposed on washing machines is a textbook case of why protectionism wins at the political game even if the winners gain much less than what others lose. Conversely, free trade benefits the vast majority of people by much more than the disruption costs that some workers suffer from foreign competition. On one hand, a tiny special interest group—a few thousand workers at most, plus a few executives and major shareholders—lobbies the U.S. government to protect their salaries and profits, which represent a significant benefit for each member of the group. Whirlpool's petition mentions that "safeguard relief would enable domestic producers to earn a return on their past investments." On the other hand, the cost is paid by 97 million households that will each suffer only a small loss. Which individuals in these two groups are more likely to engage in collective action—that is, pay lobbyists, participate in demonstrations, and such—to defend their interests?

Back-of-the-envelope calculations can serve to illustrate the answer. Assume the average washing machine costs $600 and has a functional life of 10 years. Its capital cost is thus about $60 per year. If a tariff increases the washing machine's price by 25%, a household will pay $150 more when buying a new one, or $15 a year.
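That arithmetic is easy to check in a few lines. The sketch below uses only the figures assumed in the text ($600 price, 10-year life, a 25% tariff-driven price increase, and 97 million households); no number in it is new:

```python
# Back-of-the-envelope cost of the tariff to households,
# using the assumptions stated in the text.
price = 600             # average washing machine price, in dollars
life_years = 10         # assumed functional life of the machine
price_increase = 0.25   # assumed tariff-driven price increase

extra_cost = price * price_increase    # extra cost per machine bought
annual_cost = extra_cost / life_years  # extra cost per household per year

households = 97_000_000
total_annual_cost = households * annual_cost

print(extra_cost, annual_cost, total_annual_cost)
# → 150.0 15.0 1455000000.0
```

Multiplying the $15 per-household figure across 97 million households yields roughly $1.4 billion a year, the aggregate cost figure used later in the article.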


Few individuals will engage in collective action to save $15 a year for their household. On the other side of the cost–benefit divide, a washer-manufacturing worker earns on average $47,876 a year (in 2016—estimate from IBISWorld). Each employee, perhaps prodded by his boss or peers, has a big incentive to support collective action. The Whirlpool petition includes the signatures of 2,464 employees at its Clyde plant. The lobbying incentive is even stronger for executives with salaries in the six, seven, or eight figures.

Now, consider total costs and total benefits. Multiplying the number of households by the annual cost of the tariff for each, we get a total cost of $1.4 billion a year. On the benefit side, multiplying the number of domestic workers in washer manufacturing (assuming that half of the workers in the washer and dryer category are on the washer side) by the average salary given above produces an estimate of $57 million in combined earnings for manufacturing workers. If we add to this the profits realized in domestic washer manufacturing plus the salaries of related nonmanufacturing employees, the total of which can be estimated at about $224 million (most of which is corporate profit), we get a total of $281 million in benefits. These benefits of the new tariffs for domestic producers of washing machines amount to only one-fifth of the total cost of $1.4 billion to American consumers.

Note that these estimates grossly overstate the benefits of the new tariff. They assume that the whole domestic washer industry would otherwise disappear, that American shareholders would lose all the money invested in this segment of the appliance industry, and that all the employees would lose their jobs and be unable to find new ones elsewhere. In fact, most of the workers would find new jobs. In a dynamic economy, jobs are continuously destroyed and replaced by new ones.
A typical example: from March 2016 to March 2017, 12.9 million new private jobs were created and 10.9 million disappeared, for a net creation of 2 million jobs in 12 months.

Announcing the new tariffs, President Trump declared that this action will "demonstrate to the world that the United States will not be taken advantage of anymore." What he should have said is that American consumers will be taken advantage of. Aren't 97 million American households part of "the United States"? A Whirlpool spokesman declared to the Wall Street Journal that the new protectionist measure was about "providing real benefits to consumers." This is either a cynical lie or a reflection of crass ignorance. As we repeatedly observe in this whole affair, it is difficult to defend protectionism without defective economics or flawed ethics.

FLAWED ETHICS AND OTHER DANGERS

Whirlpool likes to boast of its "social responsibility" (although I admit that, to its credit, it is not the most politically correct corporation in this regard). "We are proud to be recognized as one of the top U.S. companies for social responsibility," declared Whirlpool chairman Jeff Fettig in 2009 following the company's rating
in the Boston College Center for Corporate Citizenship and Reputation Institute's Corporate Social Responsibility Index. A Whirlpool press release, apparently oblivious to the irony, stated that one of the criteria used by the index was "good feeling." There is much good feeling to pass around. Whirlpool also boasts of giving out appliances to new homes for the poor, both in foreign countries and in America.

The firm should instead manufacture washing machines competitive enough that customs tariffs need not be forced onto customers. It should stop lobbying the U.S. government to prevent poor foreign workers from selling goods that American consumers want to buy. Doesn't the company's social responsibility dictate that it oppose forcing American consumers to pay a special tax when they choose an imported washing machine? To repeat, protectionism hurts mainly the consumers of the country where it is imposed. One may forgive Whirlpool for merely taking advantage of a vicious system and recognize that therein lies the weakness of a capitalist system under an interventionist state. But we may hope that the company, out of decency, would at least refrain from peddling gross untruths, incoherent pronouncements, shoddy economics, primitive morality, and hypocritical "social responsibility." Yet, the real solution would be to abolish the corrupt protectionist system.

The solar panel case is broadly similar to the washing machine case. One difference lies in the employment effect: since higher prices (generated by the tariff) for solar panels and modules will lead to fewer installations, and since many more jobs exist in installation (258,000 workers) than in manufacturing those solar units (2,000 workers), the net effect will be fewer jobs in the sector. Of course, consumer satisfaction, not producer security, should remain the criterion of public policy. Consumption, not labor per se, is what people want.
Protected by the new tariffs, domestic manufacturers of washing machines and solar panels will have little incentive to become more competitive. They probably have no comparative advantage in these sectors anyway. Thus Whirlpool and solar panel manufacturers are likely to beg for new protectionist measures at the end of the tariff period. Moreover, the measures are already being challenged before the WTO, whose rules allow safeguards, but only under certain conditions. At any rate, the action signals a protectionist turn that could provoke foreign governments' retaliation against American exporters and could conceivably spark a trade war.

Whether foreign or domestic, competition always "injures" some competitors. But competition benefits consumers. This system is called free enterprise and economic freedom. The negative effect of protectionism falls harder on lower-income consumers, who spend more of their income on goods as opposed to less tradable services (or to savings). Ironically, the "deplorables" who elected Trump will be the first victims of the new American protectionism.

HEALTH & MEDICINE

If We Pay Football Players, Why Not Kidney Donors? The risks are lower and the screening process more rigorous for kidney donors.



PHILIP J. COOK is professor emeritus of economics and sociology and the Terry Sanford Professor Emeritus of Public Policy at Duke University. KIMBERLY D. KRAWIEC is the Kathrine Robinson Everett Professor of Law at the Duke University School of Law and a senior fellow in the Kenan Institute for Ethics at Duke University. This is condensed from their forthcoming article, "If We Allow Football Players and Boxers To Be Paid for Entertaining the Public, Why Don't We Allow Kidney Donors To Be Paid for Saving Lives?" Law & Contemporary Problems 81(3).

A variety of laws regulate, tax, or prohibit risky activities. A number of these laws are paternalistic in the sense that they seek to protect the willing participants in these activities rather than prevent harm to third parties. Likewise, paternalistic concern for donors' welfare is a key motivation for the stringent regulation of living kidney donation. Although living kidney donation is a common medical procedure and donors usually enjoy a full recovery, the loss of a kidney poses long-term health risks, in particular the risk of renal failure if the donor's remaining kidney fails.

In the United States and almost every other country (with the notable exception of Iran), kidney donation is permitted but financial compensation for donors is prohibited. Not only is there no legal market for kidneys, donors in the United States are often not even reimbursed for their full out-of-pocket costs in making the donation. The ban on compensation may protect potential donors from the temptation of easing their financial situation by giving up a kidney, a choice they might regret in later years. But this regulation has dire consequences. The need for transplantable kidneys is great, far exceeding current availability from deceased and living donations. The official waiting list of Americans with renal failure is now approximately 100,000, with a typical wait time of five years or more. Those on
the waiting list are kept alive by dialysis, which is both costly to taxpayers (because Medicare pays for a large percentage of the costs) and debilitating to the patients. Even with dialysis, thousands of renal-failure patients die each year for want of a suitable kidney. This wait could be largely eliminated by easing the current ban on compensation for donors. An adequate supply of living donors would be especially valuable because living donors tend to provide higher-quality kidneys with greater opportunity for developing a close tissue match, thus reducing the chance of rejection. Current estimates suggest that if compensation were permitted, the cost of payments for recruiting an adequate number of donors would be substantially less than the savings from reducing the number of renal patients on dialysis at government expense.

In this article we contrast the compensation ban on organ donation with the legal treatment of football and other violent sports in which both acute and chronic injuries to participants are common. While there is some debate about how best to regulate these sports in order to reduce the risks, there appears to be no debate about whether participants should be paid. For the best adult football players, professional contracts worth multiple millions of dollars are the norm. A ban on professionalism in football would be the end of the National Football League, which is currently the highest-grossing sports league in the world; the NFL collected $13 billion in revenue in 2016 and each of the 32 teams has a market value of anywhere from $1.6 billion to $4.8 billion. While the recent evidence on the long-term medical damage from concussion has caused widespread concern, there is no prominent voice calling for a ban on professional football. Indeed, a ban is unthinkable in the foreseeable future. That observation helps illustrate the importance of history, custom, and established
interests in shaping the debate over regulating risky activity. But if we could start fresh, the current configuration of activities for which compensation is banned would seem very odd. If ethical concerns persuade thoughtful people that the "right" answer is to ban compensation for kidney donation, then the same logic would suggest that compensation should also be banned for participation in violent sports. If the "right" answer is to permit compensation for participation in violent sports, then compensation for kidney donation should also be permitted. We see no logical basis for the current combination of banning compensation for kidney donors while allowing compensation for football players and boxers.

THE RISKS TO PARTICIPANTS

Each year in the United States, 6,000 people donate a kidney,
voluntarily and without compensation for assuming the medical risks from surgery and living with just one kidney. We compare those risks with the risks stemming from participation in violent sports that do not ban inducements for participation at the highest level. Although the comparison is not perfect, we provide some statistics that suggest that a man who signs a contract to play in the NFL for a year is consenting to be exposed to far greater medical risks than someone who volunteers to donate a kidney.

Kidney donation / The immediate risks from surgery can be briefly summarized. A systematic review and meta-analysis of the literature found that there were post-operative complications in 7.3% of cases, which the authors deemed a "low complication rate." Complications included wound infection (1.6%) and bleeding (1.0%). A questionnaire study of donors three months after their operation found that 18.5% rated their overall health as "somewhat worse" than before, suggesting that over 80% had fully recovered in a subjective sense. The most serious outcome, death, is quite rare. A study of 80,347 donors over the period 1994–2009 determined that there had been 25 deaths, for a rate of 3.1 per 10,000 operations. That is about twice as high as the annual chance of being killed in a motor vehicle accident for the most relevant age group (45–64) during that period.

Following recovery, donors typically do not suffer disability related to the loss of a kidney because one functioning kidney does everything required for normal functioning of the body. The long-term mortality risk was no higher for living donors than for age- and comorbidity-matched participants in a large longitudinal health survey (NHANES III). Similarly, an analysis of 3,368 donors age 55 and over found no difference in all-cause mortality in comparison with a matched sample from the Health in Retirement Survey.
The only exception to this null conclusion is a study of Norwegian donors that found a divergence in the mortality rates after 10 years, so that by 25 years 18% of the donors had died compared with 13% of the matched controls. A recent review article confirms that there is no difference in death rates for at least the first 10 years, and that the Norwegian study's conclusion of divergence after that has not been replicated.

What about the particular threat that a donor's remaining kidney will fail, which in the absence of an immediate transplant would mean that the donor will have to go on dialysis? The best study of donors in the United States found a higher cumulative incidence of kidney failure and end-stage renal disease (ESRD) for donors than non-donors, 0.31% versus 0.04%. While the risk is significantly elevated for donors, it remains very low in an absolute sense, representing an increased risk of about 1 in 400.
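The rates quoted above follow directly from the underlying counts; a quick sketch using only the figures given in the text:

```python
# Donor mortality: 25 deaths among 80,347 donors (1994–2009).
deaths, donors = 25, 80_347
death_rate_per_10k = deaths / donors * 10_000
print(round(death_rate_per_10k, 1))  # → 3.1

# Excess risk of end-stage renal disease (ESRD): 0.31% cumulative
# incidence for donors versus 0.04% for matched non-donors.
excess_risk = 0.0031 - 0.0004
print(round(1 / excess_risk))  # → 370, i.e., roughly "1 in 400"
```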


Finally, a questionnaire study of 2,455 donors who were between five and 48 years from their surgery found that 84% were satisfied with their lives. The likelihood of satisfaction was enhanced by the donors’ feeling that their gift had positive effects on their relationships. Football / One challenge in making a meaningful comparison between the risks entailed in kidney donation and the risks entailed in participation in contact sports is that the latter may stretch out for many years and involve not one choice (donate or not) but a series of choices regarding participation. The young men who are drafted into the NFL each year have almost all played organized football for a number of years, including in high school and college, and have been exposed to the risk of injury throughout. Various comparisons of football with the single act of donation may be possible, such as “play in one game” or “play for one season.” But given that our focus is on inducements, we take a somewhat different approach and focus on the risks associated with a professional career as the unit of account. Rough physical contact is part of the game of football and injuries are common from an early age. For boys less than 20 years old, football, among all the sports and other types of recreational activities, is the most common cause of injury requiring a trip to the emergency room. An analysis of emergency room visits for 2001–2009 estimated there were 350,000 youths per year treated for football injuries, almost all males. Of these, 25,000 were treated for non-fatal traumatic brain injuries (TBI), typically concussion, of which over half (13,667) were males age 15–19. About 1.5 million males in this age group played organized tackle football in 2009, and if we can assume that most of the injuries affected those rather than youths playing pick-up games, the treated TBI injury rate was close to 1%. The overall rate is probably much higher because most concussions are not treated. 
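The "close to 1%" estimate is simply the ratio of the two counts above (a sketch; attributing most treated injuries to organized players rather than pick-up games is the assumption stated in the text):

```python
# Treated non-fatal traumatic brain injuries (TBI) among males age 15–19,
# relative to the number playing organized tackle football in 2009.
treated_tbi = 13_667
organized_players = 1_500_000

rate = treated_tbi / organized_players
print(f"{rate:.1%}")  # → 0.9%
```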
An alternative set of national estimates links concussion risk to game exposure for school football teams. The authors’ estimates suggest that over the course of a 10-game playing season, a high school player would have a 1.55% chance of being concussed and a college player a 3.0% chance. These statistics are somewhat out of date and there has been a strong upward trend in reported concussions in organized football—in part because of the national “Heads Up” campaign initiated by the Centers for Disease Control in 2004, increased media attention, and the passage of youth sports concussion laws in all 50 states. These laws specify that young players with possible concussions must be removed from the game and cleared for return by a set protocol. A recent report by Harvard Law School found that in 2016, the 2,274 active players in the NFL experienced 2,066 injuries during the preseason and regular season, in which “injury” is defined as an event recorded by the team trainer that would typically require time lost from practice or game. Of those injuries, 244 were concussions, which works out to 0.073 concussions per player-season. At 7.3%, that is over twice the rate for college players and about
equal to the rate of surgical complications in kidney donation.

A recent study of "life after football" brings together the official injury reports and survey information to paint a grim picture. The authors report that 93% of former NFL players missed at least one game as a result of injury and half had three or more major injuries, often requiring surgery. For a substantial majority, injuries ended their career or contributed to the decision to end their career. Nine of 10 former players have nagging aches and pains from football when they wake up, and for most the pain lasts all day. For those age 30–49, the ability to work is impaired by injury.

But what has garnered considerable recent attention and concern is the high percentage of former players who have chronic traumatic encephalopathy (CTE) by the time they die. CTE is a progressive neurodegeneration associated with repetitive head trauma, with a variety of symptoms: impulsivity, depression, apathy, anxiety, explosivity, episodic memory loss, and attention and executive function problems. A recent postmortem study of a sample of donated brains of former NFL players found that 110 of 111 indicated either mild or (more commonly) severe CTE. Interviews with family members found that behavior, mood, and cognitive symptoms were common among this group.

These findings do not imply that 99% of former NFL players will have CTE. The brains in this study were voluntarily submitted for examination by family members who were often motivated by a desire to know the cause of their loved ones' dementia or other neurological problems—which is to say, the brains of those who died without such problems may be largely missing from the sample. But the 111 brains do represent 8.5% of the 1,300 former NFL players who died during the period that these brains were donated. That places something of a logical lower bound on the prevalence of CTE. Presumably the true prevalence is much higher than 8.5%.
The other problem with these remarkable findings is that they do not provide a direct indication of the cause or causes of the CTE and associated disabilities. Repetitive head trauma is recognized as a necessary but not sufficient condition for CTE. The subjects had been exposed to repetitive head trauma throughout their careers as football players, which typically would have started in high school or well before. In fact, there is some evidence that age at first exposure to football may be related to the likelihood of impaired cognitive performance by former football players. Elite players who choose to go professional following college likely increase their chances of neurological problems in later life, which are already high as a result of their exposure up to that point. Unfortunately, the science does not provide a basis for sorting out the additional contribution of an NFL career to this health burden. While it is not possible to do a precise “apples to apples” comparison of the medical risks associated with kidney donation and the risks associated with a professional football career, it seems clear that the acute risk of injury and of long-term disability are far higher for the football player. As discussed above, most NFL veterans live out their lives following retirement with serious physical and mental disabilities. The vast majority of kidney donors lead
entirely normal lives following recovery from the initial operation.

THE LIMITS OF CONSENT

Ordinarily, people are born with two kidneys but need only one to sustain full health. For that reason, adults can donate a kidney and, after recovering from the operation, expect their life span and health to be little affected. Still, as explained above, there are risks entailed in the operation, and the loss of redundancy in kidney function may cause medical problems in later life if a donor is unlucky enough to suffer kidney failure. Concern for the potential kidney donor's welfare motivates a variety of restrictions on donation, including a ban on financial compensation. This ban is paternalistic: it deprives donors of compensation in part because the allure of a financial payoff may cause some people to choose to donate against what might be considered, given the risks, their "true" best interests.

Is that restriction justified? Whether and when sane, sober, well-informed adults should be banned by government authority from choosing to engage in an activity that risks their own life and limb is an ancient point of contention. There are a variety of hazardous activities that are permitted with no legal bar to receiving compensation. Included on this list are such occupations as logging, roofing, commercial fishing, and military service. Also included are violent sports such as football, boxing, and mixed martial arts. These examples illustrate a broad endorsement of the principle that consenting adults should be allowed to exchange (in a probabilistic sense) their physical health and safety for financial compensation, even in some instances in which the ultimate product is simply entertainment.

The Harm Principle and external effects / In the search for a principled basis for setting legal boundaries on self-hazardous choices, a natural starting point is the tenet that adult choices that do not hurt others should be allowed by government. This Harm Principle was developed by John Stuart Mill in his classic treatise On Liberty (1859). It provides a rationale for the view that adults in possession of their faculties should be free to choose to engage in risky activities if that choice does not harm others who are not part of the bargain. In this view, paternalistic regulations—those imposed for the individual's own good—should be limited to restrictions on children or on adults who are not in a position to make free and well-informed choices.

While the Harm Principle appears to create a broad scope for individual autonomy, governments limit autonomy if negative external effects are considered problematic. Most individuals are enmeshed in a web of sentiment and responsibility to family members, neighbors, coworkers, and others. Thus, a risky choice that results in injury or death will tend to have harmful consequences for other people, including those who had no direct authority or influence over that choice. Furthermore, third-party effects are created by participation in private and government insurance programs and eligibility for safety-net programs in which any financial costs (for medical care, for example) are broadly shared.


In the case of living kidney donation, the direct external effects include a considerable surplus of benefit over cost. Enhancing the quality and quantity of kidneys available for transplantation would reduce disability and save lives among patients while also saving the cost (to taxpayers) of maintaining these patients on dialysis. Hence for kidney donation—unlike, say, dueling or boxing (or a great variety of other risky activities)—it appears that the external effects are far more positive than negative.

Cognitive biases and limitations / The belief that adults are able to discern and act on their true interests when faced with complex choices is basic to Mill's argument for freedom from government interference. During the last half-century, economists and behavioral scientists have explored the limitations and biases in decisionmaking, demonstrating that even sane and sober adults tend to make systematic errors. When the stakes are high, as they are in choosing to donate a kidney or play professional football, even a free-choice advocate may accept that some limits are warranted. Here we very briefly consider the relevant issues and conclude that if the National Organ Transplant Act of 1984 (NOTA) were amended to allow payments to donors, potential kidney donors could be protected against being unduly tempted through the existing structure of screening, counseling, and delay. In contrast, it is not clear that NFL recruits have similar protections in place.

In the ideal, a rational person faced with an important decision (donate a kidney, sign a contract to play professional football) would want to proceed as a decision analyst would instruct. The goal is to combine the objective consequences of the option with the individual's subjective valuation of those consequences, including timing (now versus later) and likelihood. This rational person might go about making her decision using the following exercise:

■■ List all possible consequences over one's lifetime.
■■ Estimate the probability of each consequence.
■■ Assess the utility gain or loss of each consequence according to the decision-maker's own preferences.
■■ Calculate whether the expected value in terms of utility gains and losses is positive.
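The steps above amount to an expected-utility calculation. A minimal sketch in Python, with entirely hypothetical probabilities and utility values for a donation decision (none of these numbers come from the article):

```python
# Expected-utility sketch of a risky choice (all numbers hypothetical).
# Each entry pairs a consequence's probability with its utility gain or
# loss on the decision-maker's own scale.
consequences = [
    (0.96,   +100.0),   # donation succeeds; lasting satisfaction for the donor
    (1.00,    -10.0),   # certain short-term pain and recovery time
    (0.0003, -5000.0),  # small risk of death from the operation
    (0.01,    -200.0),  # small risk of medical problems years later
]

# The final step of the exercise: is the probability-weighted sum of
# utility gains and losses positive?
expected_utility = sum(p * u for p, u in consequences)
print(f"Expected utility: {expected_utility:+.2f}")  # prints: Expected utility: +82.50
```

A positive total means the choice passes the decision-analytic test for this particular hypothetical person; changing the probabilities or valuations can flip the sign, which is the sense in which the answer depends on the individual's own preferences.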

Needless to say, that is not how such decisions are made in practice, although in the case of kidney donation (and not football) much of the relevant information will at least be provided as part of the counseling required of potential donors. The difficulty of making an informed decision is compounded by the fact that the decider can go down that path only once.

The issue is not really whether individuals should be trusted to act like well-informed decision analysts, but rather whether they could benefit from legal restrictions on the menu of possibilities available to them. This challenge has come into better focus as research in behavioral science has documented the tendency of adults to make systematic errors in their decisions. Much of this research has focused on choices that have uncertain outcomes, or outcomes that are distributed over time, or require


the decisionmaker to predict her sense of well-being under the scenarios implied by the available choices. For example, people tend to discount the value of delayed consequences according to how far in the future they would be experienced and can make sensible choices between prospects that offer a payoff in one year or a larger payoff in two years. However, prospects with immediate payoffs are often tempting out of proportion to their objective value and induce impulsive choices that are later regretted.

It is helpful to deconstruct the decision to donate a kidney under both the current regime (no compensation) and a hypothetical regime (in which the donor would be financially compensated). Living donation is an arduous process that would not be undertaken by a well-informed person without a substantial reward of some sort (whether monetary or emotional). Under the current regime, only about 6,000 living donors volunteer each year. Almost all of them specify who is to receive their kidney, and as a consequence the donor has the satisfaction of saving the life of a family member or friend, and presumably enjoys the recipient's gratitude as well. Potential donors undergo screening, both medical and psychological. While donors do not have to pay the expense of the screening and operation, they may have lost earnings at the time that are not reimbursed. If they experience medical consequences years later, no financial help will be forthcoming from the beneficiaries of their gift or the kidney-donation system.

Everything about this process leans against making an impulsive decision to donate. Indeed, those who choose to become a donor may typically see it as an obligation rather than an opportunity. They may be under pressure from family members or may not see any acceptable alternative to the unpleasant prospect of donating.
There is little of "temptation" in this scenario, given the delays, the counseling, and the fact that much of the pain and risk precede the usually rewarding event of donation.

If the system for screening potential donors were preserved, but now with the possibility of compensation (for the sake of argument, say, worth $50,000), then many more donors would come forward, especially for non-directed donations. For the additional donors, the payment would be a stronger incentive than the psychic rewards of a purely altruistic act. (In fact, in this regime some would-be family donors may decide to refrain, knowing that other suitable kidneys are available.) The increase in donations would save many lives and reduce costs to taxpayers. But the question remains whether the promise of payment would tend to encourage donations that are not in the donors' true interest as a decision analyst would define that interest.

For the potential donor, the prospect of financial reward may overcome concerns about the temporary pain and disability, the slight risk of death stemming from the operation, and the small probability of medical problems years or decades later. There is nothing intrinsically irrational about a willingness to assume medical risk in exchange for a substantial amount of money. But the quality of the choice may be influenced by the sequence of events. If donors were offered a $50,000 check on the day that they volunteered to donate, but did not have to actually go on the operating table for a year, impulsive, ill-considered donations might be the norm. But the disproportionate temptation of an immediate payoff could be managed if the payment were not made until after the operation, which in the normal course of events would take weeks or even months while the donor underwent screening and matching. The delayed payoff would protect potential donors against impulsive decisions while respecting their underlying preferences for the value of the money vis-à-vis the medical risks of donation. The delay is in the spirit of the "nudge" approach to policy design popularized by Richard Thaler and Cass Sunstein. It is in contrast to a paternalistic approach that denies the validity of the donor's preferences. A recent survey, for example, found a sizable group of respondents who thought it unacceptable to offer potential subjects in a risky medical experiment compensation of as much as $10,000. The authors speculated that these respondents thought that a large payoff would induce people to participate who placed "too much" value on money (or too little on their health). These respondents were in effect privileging their own values over those of others.

The same concerns that apply to the quality of kidney donor decisions also apply to the decision to sign a contract to play in the NFL. Players are given little information about the risks. The longer-term risks (including the risk of CTE in middle age) have not been well quantified but appear to be far higher than for kidney donation. The payoff in both financial terms and status is also very high and immediate. Any counseling or screening that might occur is up to the player to pursue.

Exploitation, coercion, race, and class / Living kidney donors in the United States have above-average incomes (after adjusting for sex and age), perhaps as one reflection of the financial losses experienced by donors.
In a new regime in which donors were paid a substantial fee, it is predictable that the influx of volunteers would have below-average incomes. The prospect of financially stressed individuals attempting to make ends meet by “selling” a kidney raises a red flag for some ethicists. A compensation regime would expand the choice set for those in comfortable circumstances, but those in desperate circumstances might feel compelled to sell a kidney; in that sense, the option of selling could be seen as “coercive.” Furthermore, a system that in part depended on the poor to supply kidneys could be seen as “exploiting” the poor. This line of thought is represented in a 2001 report of the National Bioethics Advisory Commission about paid participation in medical experiments:

Benefits threaten … the voluntary nature of the choice, … raise the danger that the potential participant’s distributional disadvantage could be exploited [and] … lead some prospective participants to enroll … when it might be against their better judgment and when otherwise they would not do so.

We believe that using words like "coercion" and "exploitation" to characterize the introduction of a new option by which poor people (and others) could earn a substantial amount of money provides more heat than light. That living donors under a compensation regime would have lower incomes than current donors does not support a ban on compensation, which in fact limits the options available to the poor and thereby makes a bad situation (their lack of marketable assets) worse.

For anyone not persuaded by this argument, we note that these social justice concerns apply with at least equal force to compensating boxers; most American professional boxers were raised in lower-income neighborhoods and are either black or Hispanic. As more has become known about the dangers of repeated head trauma, similar arguments regarding football have become more prominent. About 70% of NFL players are black, and Pacific Islanders are also overrepresented as compared to the American population. Accordingly, much attention has been paid to the concussion crisis as a race and class problem. As one observer recently noted, "What's a little permanent brain damage when you're facing a life of debilitating poverty?" In reality, however, NFL players are better educated, and come from better-educated homes, than the average American, in part because the NFL typically recruits college students. Still, some NFL players, like some would-be kidney donors, come from poverty.

CONCLUSION

Our claim is that there is a stronger case for compensating kidney donors than for compensating participants in violent sports. If this proposition is accepted, there are only three logically consistent positions: allow compensation for both kidney donation and violent sports; allow compensation for kidney donation but not for violent sports; or allow compensation for neither. Our current law and practice endorse a perverse fourth regime: allowing compensation for violent sports but not for kidney donation.

As to social justice concerns, we offer both a direct response and a response by analogy with violent sport. A fundamental norm of our culture and legal tradition is to respect the choices of (sane, sober, well-informed, adult) individuals. That norm serves to limit government interference with private choices. It is supported by the right to liberty from undue government interference. A well-developed organ procurement process in the American system seeks to ensure that potential donors are fully capable of making a good decision. Potential kidney donors are not only provided with full information but also screened for mental and physical disability. While there is the possibility of "mistakes" (a decision to donate against the true best interests of the individual) under a compensated system, the screening, consent process, and delays should minimize the chance for the kind of errors that behavioral economics has demonstrated are common. Under such circumstances, the opportunity to be paid for donating a kidney is not exploitative or coercive, but rather welfare-enhancing.

We also argue by analogy with professional football, boxing, and other legal but violent sports. The medical risks of a professional career in these sports are much greater, both in the near and long term, than the risks of donating a kidney. Moreover, the consent and screening process in professional sports is not as developed as in kidney donation. The social justice concerns stem from the fact that most players are black and some come from impoverished backgrounds. In sum, the arguments against compensating kidney donors apply with equal or greater force to compensating athletes in these sports.

Note that these arguments focus on the donors' welfare and ignore the welfare of people in need of a kidney. A comprehensive evaluation of amending NOTA to allow compensation requires that both groups be considered. Such an evaluation, conducted by P.J. Held and colleagues, reached the following conclusion about a regime in which living donors were offered enough compensation ($45,000) to end the kidney shortage:

From the viewpoint of society, the net benefit from saving thousands of lives each year and reducing the suffering of 100,000 more receiving dialysis would be about $46 billion per year, with the benefits exceeding the costs by a factor of 3. In addition, it would save taxpayers about $12 billion each year.
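The trillion-dollar figure that follows is consistent with treating the study's $46 billion annual net benefit as a perpetual flow. A back-of-envelope sketch (the 3.5 percent discount rate is our illustrative assumption, not a number from the study):

```python
# Present value of a perpetual annual benefit: PV = benefit / discount_rate.
annual_benefit = 46e9   # $46 billion per year, from the Held et al. estimate
discount_rate = 0.035   # 3.5% real rate -- an illustrative assumption

present_value = annual_benefit / discount_rate
print(f"Present value: ${present_value / 1e12:.2f} trillion")
# prints: Present value: $1.31 trillion
```

Any plausible real discount rate below about 3.5 percent pushes the present value above $1.3 trillion, so the order of magnitude is robust to the assumption.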

The present value of this flow of social benefits would exceed $1.3 trillion. As far as we know, there has been no cost–benefit analysis of the analogous reform in football, namely banning professional compensation. But a first cut is the market value of NFL teams, because that value reflects the present value of future ticket sales and broadcast payments, net of costs, under the current legal regime. Presumably a ban on compensation would end professional football and drive the value of the 32 current teams to zero. That current value, according to Forbes, is about $56 billion. That amount should be reduced to account for subsidies by host cities, and increased to account for consumer surplus, but regardless it is clear that the monetized value of allowing compensation for professional football players is far less than that of allowing compensation for kidney donors.

READINGS

■■ "A Cost–Benefit Analysis of Government Compensation of Kidney Donors," by Philip J. Held, Frank McCormick, Akinlolu O. Ojo, and J.P. Roberts. American Journal of Transplantation 16: 877–885 (2016).
■■ "A Primer on Kidney Transplantation: Anatomy of the Shortage," by Philip J. Cook and Kimberly D. Krawiec. Law & Contemporary Problems 77(3): 1–24 (2014).
■■ "Clinicopathological Evaluation of Chronic Traumatic Encephalopathy in Players of American Football," by Jesse Mez, Daniel H. Daneshvar, Patrick T. Kiernan, et al. Journal of the American Medical Association 318(4): 360–370 (2017).
■■ "Designing a Compensated-Kidney Donation System," by T. Randolph Beard and Jim Leitzel. Law & Contemporary Problems 77(3): 253–288 (2014).
■■ Is There Life after Football? Surviving the NFL, by James A. Holstein, Richard S. Jones, and George E. Koonce Jr. New York University Press, 2015.
■■ "More Money, More Problems? Can High Pay Be Coercive and Repugnant?" by Sandro Ambuehl, Muriel Niederle, and Alvin E. Roth. American Economic Review 105(5): 357–360 (May 2015).

TRANSPORTATION

Who Should Pay for Infrastructure?

The most daunting impediment to efficient financing is public misperception.



Most popular discussion of infrastructure spending amounts to little more than a plea for someone else to pay the bills. Although there are some reasons for higher-level governments to provide some local infrastructure projects, in the end the bill must be paid either by user charges or by taxing someone. It is preferable for users to pay whenever that is feasible, but governments seldom arrange this, relying instead on taxes. In this article, we draw heavily on our recent book, Financing Infrastructure: Who Should Pay (McGill–Queens University Press, 2017), to discuss why users should pay, why they seldom do, and how we may do better in the future.

WHY USERS SHOULD PAY

Consumers should pay directly for many services furnished by the public sector, particularly such congestible services as roads or water and sewerage provided to easily identifiable users. Charges are especially desirable for congestible infrastructure because they both signal where new investment is needed and provide funding for it.

User charges are superior to taxes for three reasons. First, charges do not distort behavior, whereas taxes do. Under a system of charges, payments are based on use of the services: one pays for what one receives. In contrast, taxes are always based on something else—sales, income, or property values—rather than the quantity of services consumed. One can change one's behavior to reduce one's tax obligation without changing one's consumption of the services that the taxes fund. Economists use the term "distortion" or "deadweight loss" to refer to the loss of economic value that results from people changing their behavior in response to taxation.

The second reason that charges are superior is that they send correct signals to consumers about the true costs of the services. When user charges for services fully cover the marginal social cost of providing them, people buy such services only up to the point at which the value they receive from the last unit they consume is just equal to the price they pay, so that resources are efficiently allocated. Moreover, providers that are financed by full-cost pricing have an incentive to adopt the most efficient and effective ways of providing the service and to supply it only up to the level and quality that people are willing to pay for.

The final reason charges are superior is that they allow political decisionmakers to assess more readily the performance of service managers—and citizens to do the same with respect to the performance of politicians.

RICHARD M. BIRD is a senior fellow and ENID SLACK is the director of the Institute on Municipal Finance and Governance, Munk School of Global Affairs, University of Toronto.


Why are these services usually funded by taxes? / Despite the benefits of user charges, public infrastructure is often funded in whole or in part by taxes. Advocates of this financing offer two economic justifications: such infrastructure can better achieve economies of scale through taxpayer-funded large projects, and infrastructure is subject to market failures that necessitate taxes for efficient provision. For example, public water supply is often considered to be a "natural monopoly" because the average (and marginal) cost of supplying a unit of water declines as output increases. As a result, pricing water at marginal cost would result in an unsustainable deficit, which would discourage the undertaking of water projects. However, according to a 2008 paper by Celine Nauges and Caroline van den Berg, relatively few water systems seem to operate in the decreasing-cost range.

When users do not pay the full costs, taxpayers must make up the difference through taxes that distort behavior and impose deadweight losses. Moreover, because pricing services below cost artificially inflates the demand for more infrastructure, the total distortionary effect of such tax finance tends to increase over time. Although economies of scale may sometimes be important, they never tell the whole story when it comes to who should pay, how much, and when.

Externalities may at times raise more complex issues. However, according to a 2009 paper by Ian Parry and Kenneth Small, the external benefits associated with infrastructure investments are often highly case-specific and difficult to measure. It is seldom obvious who, if not users, should pay how much for them.
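The natural-monopoly argument can be made concrete with a stylized cost function (all numbers hypothetical, and marginal cost is held constant as a simplification of the declining-cost case): with fixed cost F and constant marginal cost c, average cost F/q + c always exceeds marginal cost, so a price set at marginal cost can never cover the fixed cost.

```python
# Stylized declining-average-cost utility: total cost C(q) = F + c*q.
F = 1_000_000   # fixed network cost per year, $ (hypothetical)
c = 0.50        # marginal cost per unit of water, $ (hypothetical)

def average_cost(q: float) -> float:
    """Average cost per unit; declines toward c as output grows."""
    return F / q + c

q = 2_000_000                    # units sold per year
revenue = c * q                  # price set equal to marginal cost
deficit = (F + c * q) - revenue  # shortfall equals the fixed cost F
print(f"Deficit under marginal-cost pricing: ${deficit:,.0f}")
# prints: Deficit under marginal-cost pricing: $1,000,000
```

The deficit equals F regardless of output, which is why marginal-cost pricing in a genuinely decreasing-cost industry requires a subsidy or a two-part tariff if the service is to be provided at all.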

Efficient pricing / In principle, all costs relating to the services provided to users, including those related to investment (amortization, interest), should be covered. If all inputs are secured from competitive markets (that is, correctly priced in economic terms), then full-cost pricing will send the right market signals to users and managers. This financing also will provide enough money to furnish the service at the economically correct level—that is, the level at which the benefits to society are at


least equal to the social costs of providing the service—without additional budgetary support. User-charge financing is the best way we know to ensure that those responsible for providing public services are not only adequately financed but also encouraged to provide them in the economically most efficient way possible. In effect, a public provider financed by full-cost pricing is like a business enterprise in a perfectly competitive market, whether the provision of services is organized and run by a government department, an independent agency, or a separate public utility enterprise (or, indeed, a properly regulated private company).

For such pricing to do the job properly, however, three important conditions must be satisfied. First, because good user charges should match the specific costs and benefits associated with services received by each individual user, considerable institutional, administrative, and legal preparation as well as substantial (and accurate) accounting information are required to design and implement a good system of user charges (as Bernard Dafflon describes for Switzerland in a 2017 paper). Few if any jurisdictions in North America come close to meeting this condition. Second, people need to understand and accept the case for charging properly for public services—something that is now demonstrably not the case for the most part. Third, given the technical and political costs of designing and implementing an economically sound charging system, it is worth the effort only when it really matters—that is, when such a system provides a net social benefit. Major expansions of public infrastructure investment would seem to be an instance where the potential benefits of pricing right are worth the costs of doing so.

Financing long-lived investment in infrastructure by borrowing is often sensible.
However, borrowing—like public–private partnership arrangements—does not provide "free money." Loans must be repaid (and private partners rewarded), and the only way to do so is through user charges or taxes. Politicians, whose horizons are often relatively short, understandably prefer to shift costs to the future (or to another level of government). Harried local taxpayers are usually equally willing to put off to tomorrow (or, even better, to someone else) the pain of paying taxes for debt service. Borrowing—whether direct public borrowing or through (usually more costly) private partners—may be the best way to shift costs forward to the next generation to the extent that benefits from the project are estimated to flow to that generation. Borrowing may sometimes also make sense to "smooth" tax increases over time to match the expected benefit flow from the financed project. But it never lets one dodge the real question: who should pay?

WHY USERS SELDOM PAY THE RIGHT PRICES

Telling the truth about what needs to be done—relying on evidence, as academics like to say—is desirable in principle. But it is not easy to tell complex truths in ways that persuade people who are not ready to hear them. Correcting false beliefs is difficult.

It takes careful planning, hard and persistent effort, and good leadership to persuade people that something they believe—for example, that user charges are often unfair, regressive, and just another name for taxes—is wrong. Advocates of more and better user pricing in the public sector face a tough audience.

Economic arguments about scale and externalities are often used against pricing. Such arguments seldom tell the most important story and are sometimes little more than assertions used to support a predetermined conclusion. Similarly, while there are some difficult technical problems in designing and implementing pricing schemes, many such problems are becoming easier to resolve. When, for instance, poor farmers in Africa can buy and sell on their GPS-equipped mobile phones, the scope for effectively pricing (say) road use is clearly much greater—especially in more developed countries—than it was even 10 years ago.

The main obstacles to more sensible pricing are now seldom economic or technical; they are political. Some opponents want to obfuscate what is going on, some have distributional concerns that they think justify underpricing public services, and some seem to think that services that are "public" enough in nature to be provided by the public sector should be provided freely to all. For example, suggesting that transit fares should be higher during rush hours, when congestion is higher, is often viewed as being morally wrong, the equivalent of raising food prices at times of famine. In economic terms, however—that is, to ensure the best possible use of scarce resources—such proposals are right. Failing to vary prices to encourage more even usage of facilities over time inevitably increases the pressure to build still more public infrastructure to accommodate peak-demand increases. Many jurisdictions now have the data available to price more correctly, which often means in a more time-sensitive way, if they want to do so.
Airline pricing, for example, is already as complex and variable as anything that even the most finicky public pricing designer is likely to come up with. Firms like Amazon vary prices on a wide variety of items by the minute, and no one seems to think anything of it. So far, however, governments have made little effort to join the ongoing pricing parade. The reason is not inadequate economic understanding or lack of technical competence. Usually we do not price correctly because those in charge do not want to do so, or do not think they can sell the public on pricing, or perhaps both.

Most people think, often correctly, that they have no choice but to pay high congestion prices. To keep their jobs, they must travel at peak hours. Yet economists argue that if correct prices were charged, over time employers would adjust working hours or pay to keep staff content. So in the end, on average, people would be better off. The economists might be right. However, in the real world where most people aren't "average" in terms of where they live, where they work, the skills they have, their contacts, and so on, the lives of many of them may be drastically changed for the worse for years while such adjustments take place. People do not


like change, and they certainly do not welcome change imposed on them by others. We might all be better off if road use and transit were sensibly priced. But to get there from here, some of us would have to change (and perhaps even lose) our jobs, our houses, and our way of life, perhaps forever. Visions of a perhaps slightly better earthly paradise in the future are seldom persuasive to those who see such costs looming before them.

Economists have tended to pay too little attention to such issues. One reason may be that they often perceive such problems as a transitional issue that in principle may be dealt with by providing an offsetting distributional transfer in some compensatory way. In practice, however, no government at any level anywhere has done much to provide such offsets directly to those affected. And even if a distributional offset were to be provided, many people would still be unhappy, because how people view change depends not simply on the nature and size of the change but also on who decided it and how. Were they consulted? Were their views visibly considered? Is the proposal consistent with their view of the world? Some may think, for example, that public services should be provided free for all, either as a matter of right or because they have already been paid for by general taxation. Or they may think that it is not fair to charge them for new infrastructure when in the past others seemingly got similar services for free. Or they may simply not believe that governments will (or perhaps can) deliver the promised benefits, or that government should be in the business for which they, the people, are being asked to pay.

Given the resistance that user-charge proposals frequently generate, whether reasonable or not, it is not surprising that politicians generally prefer to avoid imposing charges. And even if they do so, the user-charge system they end up putting in place is often so hobbled and complex—with cross-subsidies here, special concessions there, and a complex financing structure that shifts costs outside the circle of direct beneficiaries (e.g., to the future)—that it is never quite clear who is paying how much for what. Local politicians—who must literally live with their constituents, employees, and suppliers—may be especially tempted to charge too little and in the wrong way. It is much easier to ask for transfers from higher-level governments than to deal with outraged neighbors.

HOW TO DO BETTER

Economists often assert that, as Philip Bazel and Jack Mintz state in a 2015 paper, "a user-pay model would work to eliminate political influence, create revenue for infrastructure renewal, and facilitate an optimal allocation of infrastructure resources." They are right in principle, but no one seems to be listening. As with free trade, what economists say is not very persuasive to people who see a policy as adverse to their immediate direct interests. They doubt that it will be sufficiently beneficial in the long run to make the pain of adjustment worth bearing.

Faced with such resistance, economists often respond that better data, more transparent processes, a simpler and more understandable pricing system, and more education will, over time, lead more of the public to see the light. Desirable as more transparency and education are, however, on their own they are unlikely to offset the evident distrust many people feel with respect to direct charges for public services. To sell user charges, much more attention needs to be paid to the "transitional" issues that affect people's lives in a salient fashion and appear to shape their reaction to proposed changes.

Preferring the status quo / People assess possible changes against their perception of present reality. Reactions to change are often anchored to the status quo. As everyone in the budgetary game knows, for example, what matters is often less whether what is proposed is "right" in some conceptual sense than precisely how and in what way it will change whatever it is we are now doing. User charges have a big hurdle to jump in this respect, especially if they are charged for something that people now perceive to be free.

Nothing is free when it comes to using scarce resources, of course. But no one now has to pay out of pocket for pulling out of the driveway onto a city street, let alone pay more for doing so in a congested downtown area or at peak hour. Persuading people that they should pay in money as well as time for the privilege of being stuck in rush hour traffic is not an easy sell. Most people think that the cost of using their automobile is only what they pay for fuel and any parking charges. Few account for the much larger private costs of operating and maintaining a vehicle, let alone providing home storage for it and the more esoteric opportunity costs of commuting time. And perhaps only the odd economist even thinks of the additional costs that one's commute imposes on everyone else.

Even when charges already exist, as they usually do for parking and water, it is often as difficult to raise those charges to reflect true costs or alter them for congestion as it is to launch a new


charging system. Public pricing systems are sticky in the sense that prices tend to stay where they are first set. Moreover, public services are often priced like postage stamps, with everybody in the jurisdiction paying the same price regardless of how much it costs to provide each individual the service in question. Because people tend to anchor to the status quo and changes are difficult to make, it is best to get the pricing system right in the first place. Unfortunately, governments almost never face a clean slate, even with respect to brand-new infrastructure projects, because the services such projects provide have their own pricing (or nonpricing) history. Moreover, sometimes people may frame proposed changes against some conception of a past "golden age" (or an equally idealized future) in which government services are free for all and somehow magically supplied without anyone explicitly paying for them.

Folk justice / Another important policy concern is the perceived unfairness of most user charges. Some may think that charging for services is just another way for government to take away their hard-earned money. In a 2013 book, Steven Sheffrin suggests that the relevant frame within which many people think about such matters is closer to what he calls "folk justice" than to the effect of policy change on income distribution that is usually the focus of technical analysts. A critical aspect of folk justice is the extent to which people believe that their voices have been heard and respectfully considered in developing and implementing any proposed policy change. Policymakers and analysts who wish to change pricing policies need to focus more on what really shapes people's views about prospective changes than on how consistent the results may be with utilitarian, Rawlsian, or other philosophical equity constructs.
Simply asserting (or even demonstrating) that a given change will make people in aggregate better off in terms of some abstract index of welfare is not persuasive to those who do not accept (or understand) the standard of comparison. For changes to be accepted in a democratic system, enough people to constitute a supportive coalition must come to believe that the change will make them visibly better off in terms of their own values and beliefs. Reaching this goal is not easy. Few people seem bothered that one store charges more for bread than another, but many appear to think that water should be the same price for everyone. Even when people accept that the real costs of serving different customers differ, and that people may choose somewhat different levels of service or degrees of access, discussion of changing public prices usually fixates on the distributional effects. The real reason for opposition may be different—workers may fear losing their jobs, or homeowners may fear the value of their houses will fall—and the net effect on inequality or poverty may be minuscule. But demonstrating concern for, and providing solutions to, the perceived regressive effects of charging for public services may often be a necessary condition for successful reform. Opposition rooted in such concerns can be difficult to counter. Familiar responses include making payments more convenient, measuring costs and benefits carefully and making people aware of them, and—in cases where the distributive effect is significant enough to warrant explicit attention—providing adequate compensatory offsets through direct transfers (e.g., adjustments in welfare payments and income-related tax credits). Sometimes such measures—to which few governments have paid sufficient attention in practice—may do the job. Still, getting a majority on board may be difficult. People tend to focus on clear and understandable truths: we all need water and to get to work
on time; the public sector is supposed to serve all the public, not just those who can pay; and raising the direct cost of accessing any public service places a larger relative burden on the poor. People find it harder to grasp that subsidizing services delivers the greatest benefits to those who use them most—who are seldom the poor—and that underpricing encourages more use, feeds demands for still more service, and wastes scarce public resources. Separating (unbundling) the financing question as clearly as possible from the basic provision question, by directly subsidizing those in need who are adversely affected, may help. Water, for example, may be considered by many a "social" rather than an economic good. Nonetheless, it is critical to separate the financing of such "social" characteristics as universal access and distributive and health concerns from the basic costs of setting up and running a good water system. Only when subsidies (whether for distributional or other reasons) are clearly distinguished from questions of basic financing can the provision of public utility services—including investment in infrastructure—be made transparently financially sustainable while still providing the right incentives to users, utility, and government. When subsidies are needed, they should go directly to those targeted—that is, specific consumers—and not to suppliers or to all consumers, as is commonly (and usually ineffectively and inefficiently) done.

Such things are easy for economists to say and have often been said. But they are seldom easy to estimate precisely. And even when good estimates can be made, in a policy context that seems increasingly shaped more by the instantaneous, strong, and simplistic opinions of the many than by the best-reasoned (and hence generally complex and nuanced) conclusions of experts, those who believe they already know the answer are unlikely to be swayed. The line between nudging people to do the right thing in their own interest and Machiavellian maneuvering to get them to go along with what someone else has decided is good for them (or perhaps simply good for the "nudger") is sometimes thin. Influencing behavior (or selling ideas) requires governments to spend more time and effort understanding what their citizens want than most governments can or will. Simply being fully transparent and open to public scrutiny is unlikely to be good enough.

As studies of tax reform have shown, the line between idea and implementation is seldom short or straight. Gathering the evidence (preferably from credible independent research); getting the problem placed on the public agenda; devising solutions to the problems seen as relevant by those affected; waiting until the time is ripe for reform, which may require a crisis; mustering a coalition strong enough to get a change through; and finally sequencing and bundling implementation so that it becomes a reinforcing rather than conflict-causing process—all of this usually takes a lot of time and effort. Sound reforms can seldom be accomplished quickly or easily.

CONCLUSION

The best chance to make better use of user charges is probably when, as now, new infrastructure investment moves close to the top of the political agenda. Only when something new is on the horizon can people see that they are being asked to enter into a contract: to pay for new benefits that they can credibly see coming down the road. So long as charging more means asking people to pay more for what they already get (or for services that are deteriorating as more users crowd in), the prospects for success are slim. Unless people think it is necessary for their own welfare to pay more for a service they want and need, they are unlikely to support radical changes in the status quo.

Those proposing changes need to tell a story strong and convincing enough to resonate with people's values, ideas, and interests. Most people think they already pay too much to government. If they are to pay more, they need to be convinced that they gain from doing so—that they are paying for something they not only need but want. Not only should all revenues from user charges go explicitly and strictly to providing the designated services, but such payments should be transparently separated from any other payments to government agencies, such as property taxes, and from other charges, such as water and sewerage bills. Moreover, prolonged, detailed, credible, and patient interaction between policymakers and those who are expected to pay is essential. Policy advocates must visibly and adequately respond to at least the more thoughtful criticisms they receive, and politicians must be prepared to carry the ball in public.

None of this is easy. But the effort may be worthwhile when the stakes are as high as they now are in deciding how best to finance the substantial infrastructure investments being discussed and planned. The user charge system emerging from the invariably long and usually contentious political process just described may be far from any theoretical optimum. But even user charges that are at best halfway to perfection—for example, simply more rational parking charges and enforcement on city streets instead of optimal congestion tolls—will usually be a better and more sustainable way to finance new urban infrastructure than funds obtained either from on high (federal grants) or from such seemingly free revenue sources as public–private partnerships (PPPs). In the end, users or taxpayers must always pay, and user charges (whether channeled through PPPs, governments, or utilities) are, whenever feasible, the best way to pay for new infrastructure. Some costs may be recouped from nonresidents and future residents, and intergovernmental transfers and borrowing (directly or via PPPs) may be appropriate financing tools to cover such costs. But at the end of the day, the most efficient and arguably the fairest way to maintain, renew, and expand public infrastructure is simply to charge users the right prices.

READINGS

■■ Charging for Public Services: A New Look at an Old Idea, by Richard M. Bird. Canadian Tax Foundation, 1976.

■■ "Financing Environmental Infrastructures through Tariffs: The Polluter/User-Pays Principle, Swiss Way," by Bernard Dafflon. In Financing Infrastructure: Who Should Pay?, edited by Richard M. Bird and Enid Slack. McGill–Queen's University Press, 2017.

■■ "Local Environmental User Charges in Switzerland: Implementation and Performance," by Bernard Dafflon and Sandra Daguet. EuroEconomica 5(31): 75–87 (2012).

■■ "Optimal Public Infrastructure: Some Guideposts to Ensure We Don't Overspend," by Philip Bazel and Jack Mintz. SPP Research Papers 8(37) (November 2015).

■■ "Parking Taxes as a Second-Best Congestion Pricing Mechanism," by Sebastian Miller and Riley Wilson. Inter-American Development Bank IDB-WP-614, October 2015.

■■ "Should Urban Transit Subsidies Be Reduced?" by Ian W.H. Parry and Kenneth A. Small. American Economic Review 99(3): 700–724 (2009).

■■ "Spatial Heterogeneity in the Cost Structure of Water and Sanitation Services: A Cross-Country Comparison of Conditions for Scale Economies," by Celine Nauges and Caroline van den Berg. Working paper, 2008. Available at www.researchgate.net/profile/Caroline_Berg/

■■ Tax Fairness and Folk Justice, by Steven M. Sheffrin. Cambridge University Press, 2013.

■■ "Who Pays, Who Benefits, Who Decides? Urban Infrastructure in Nineteenth-Century Chicago and Twentieth-Century Phoenix," by Carol E. Heim. Social Science History 39(3): 453–482 (2015).

ANTITRUST

The Return of Antitrust? New arguments that American industries are harmfully concentrated are as dubious as last century’s pre-Chicago claims.



In a July 24, 2017 New York Times op-ed, Senate Minority Leader Chuck Schumer (D–NY) promised aggressive antitrust activism as part of his party's "Better Deal for American Workers." "We are going to fight to allow regulators to break up big companies if they're hurting consumers," Schumer promised. Lax antitrust enforcement, he argued, is "padding the pockets of investors but sending costs skyrocketing for everything from cable bills and airline tickets to food and health care." As Jeff Stein at Vox explained, this Better Deal intends to create "a new federal 'Trust Buster' agency … similar in scope to the Consumer Financial Protection Bureau."

In a 4,000-word Washington Post column titled "Is Amazon Getting Too Big?" a few days later, business writer Steven Pearlstein went Schumer one better, arguing that a new "antitrust czar" should not focus narrowly on consumer harm but should combat "bigness" in general. Pearlstein lauded a Yale Law Journal article by Lina Khan, now a fellow with the Open Markets Institute, that makes the same argument. Similar columns and articles followed with increasing frequency and intensity. In a January 16, 2018 Wall Street Journal piece titled "The Antitrust Case against Facebook, Google and Amazon," economics commentator Greg Ip claimed, "A growing number of (nameless) critics think these tech giants need to be broken up or regulated as Standard Oil and AT&T once were." That was followed by The Economist's January 20 cover story, "The New Titans: And How to Tame Them," with Facebook, Google, and Amazon depicted as gigantic scary robots.

Just a few days earlier, Brookings Institution political scientist William Galston and research assistant Clara Hendrickson released "A Policy at Peace with Itself: Antitrust Remedies for Our Concentrated, Uncompetitive Economy." The report emphasized that "antitrust is not merely an object of scholarly concern; it has also become an important political talking point."

ALAN REYNOLDS is a senior fellow at the Cato Institute.

The two authors

soon added a follow-up in the Harvard Business Review, "What the Future of U.S. Antitrust Should Look Like."

This article is a critical review of the evidence cited in these calls for a "new antitrust." Specifically, I examine claims regarding the effects of past mergers on prices in several industries, assertions that corporate profitability is evidence of increased market concentration, and estimates of the concentration of large firms in broad sectors such as retailing and services that are interpreted as diminished competition. I end with some cautionary lessons from the antitrust suits against IBM (1970s–1980s) and Microsoft (1990s–2000s) about the dangers of confusing imaginative prosecutors with technological forecasters, or of assuming that tech firms with an early lead on some innovation have an invincible advantage over new rivals.

In the renowned 2004 study "Does Antitrust Policy Improve Consumer Welfare? Assessing the Evidence," Brookings Institution scholars Robert Crandall and Clifford Winston found "no evidence that antitrust policy in the areas of monopolization, collusion, and mergers has provided much benefit to consumers and, in some instances, we find evidence that it may have lowered consumer welfare." But consumer welfare is not what drives populist/progressive Better Deal enthusiasts. Since the Chicago School shifted the emphasis of antitrust to consumer welfare, complains Pearlstein, "courts and regulators narrowed their analysis to ask whether it would hurt consumers by raising prices." Pearlstein would like courts and regulators to pay more attention to "leveling the playing field." Khan likewise argues that "undue focus on consumer welfare is misguided. It betrays legislative history, which reveals that Congress passed antitrust laws to promote a host of political economic ends."

The trouble with grounding policy on legal precedent and political ends, however, is that Congress has passed many laws to promote the special interests of producers at the expense of consumers. Examples include the creation of the Interstate Commerce Commission (1887), the National Industrial Recovery Act (1933), the Robinson–Patman Act (1936), the creation of the Civil Aeronautics Board (1938), price supports for farm and dairy products, and numerous tariffs and regulations designed to benefit influential interest groups and the politicians who represent them. The harm these initiatives did to consumers is now well understood, though not by all politicians and journalists. That raises the question of why we should unleash another round of consumer harm on the public.



University of Pisa economist Nicola Giocoli has examined the economic analysis of antitrust over the period 1939–1974, when it was dominated by the structure–conduct–performance paradigm of Harvard's Edward Mason and his student Joe Bain. By the 1950s and 1960s, that "Harvard School" approach had hardened into a more rigid "structuralist" view that market concentration ("oligopoly") could be assumed to facilitate collusion and therefore high prices and profits. Harold Demsetz later showed the lack of evidence for that hypothesis, which helped advance a consumer-focused price theory approach, dubbed "The Chicago School of Antitrust Analysis" by Richard Posner.


Khan claims the Chicago School's "consumer welfare frame has led to higher prices and few efficiencies," citing a collection of studies that John Kwoka discusses in his 2014 book Mergers, Merger Control, and Remedies. Galston and Hendrickson praise the book as "a comprehensive study of recent mergers." In reality, 10 of the book's 42 "recent" mergers happened between 1976 and 1987, and 21 others happened in the 1990s. Those older studies also focused on very few industries, including airline and railroad mergers reviewed by the Department of Transportation and the Surface Transportation Board rather than the Justice Department or Federal Trade Commission. Senator Schumer, like Galston and Hendrickson, alludes to "recent" airline fares as a reason for tougher antitrust, even though five of the seven airline mergers in Kwoka's book occurred in 1986–1987 and the other two in 1994. Pearlstein notes that Kwoka's list of higher prices blamed on mergers includes "hotels, car rentals, cable television, and eyeglasses." The goods on that list look as old-fashioned as Kwoka's definition of "professional journal publishing," which covers print only, ignoring electronic publications. Hotels now face stiff competition from Airbnb; rental cars from Uber; cable companies from "cord-cutting" alternatives such as broadcast HDTV, satellite


providers DirecTV and Dish, and internet streaming services such as Roku, Netflix, Amazon, Hulu, and more. The claim that eyeglass maker Luxottica controls 80% of U.S. optician chain sales ignores the sales made by thousands of independent optometry practices, huge retailers Walmart and Costco, and online retailers Zenni Optical and Warby Parker. It is difficult to imagine how Pearlstein or Kwoka could seriously suggest consumers face monopoly pricing from such industry leaders as Southwest Airlines, Marriott hotels, Enterprise Rent-A-Car, or Costco Optical.

The most recent merger in Kwoka's compilation of supposedly cartelizing mergers came in 2006, when Whirlpool outbid Haier to acquire Maytag. Any suggestion that Whirlpool gained monopoly power from that merger, however, was dealt a decisive blow on January 22, 2018, when the U.S. government imposed 20–50% tariffs on washers from LG and Samsung. (See "Putting 97 Million Households through the Wringer," p. 8.)

Unless we know what prices would have been had some now-criticized merger not occurred, we cannot attribute subsequent price changes to the merger rather than to rising input costs, quality improvement, or inflation. To get around that, the study of the clothes dryer market used by Kwoka relies on a difference-in-differences simulation that claims the merger raised the price of new-model Whirlpool dryers (but not Maytags) by 17%. That is because prices of the newest (but unimproved?) Whirlpool dryers rose more than the prices of stovetops, chosen as the control. Prices of clothes washers were unaffected, and those of older-model appliances fell. Even accepting all that, how could anyone possibly calculate an estimated "average" price increase for the merger as a whole? When Kwoka goes on to heroically meld 27 such complex price simulations into a single "average," the figure has no clear meaning without weighting, and without standard deviation data to show the estimates are significant.
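The difference-in-differences logic behind such merger retrospectives is simple enough to sketch. The numbers below are invented for illustration, not Kwoka's data; they show only how the estimator attributes to the merger whatever price movement the "treated" product shows beyond the control product's movement.

```python
# Difference-in-differences sketch of a merger retrospective.
# All prices are hypothetical illustrations, not actual appliance data.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Merger 'effect' = change in treated prices minus change in control prices."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Average prices before and after a hypothetical merger:
dryers_pre, dryers_post = 400.0, 470.0   # "treated" product line
stoves_pre, stoves_post = 300.0, 302.0   # control product chosen by the analyst

effect = did_estimate(dryers_pre, dryers_post, stoves_pre, stoves_post)
print(f"Implied merger effect: ${effect:.0f} per unit")  # prints $68 (70 - 2)

# The estimate is only as good as the control: if the treated product
# improved in quality or faced input-cost shocks the control did not,
# the $68 is misattributed to the merger.
```

Averaging 27 such estimates, each with its own product, control group, and time period, and reporting the result without weights or standard errors, simply compounds whatever specification error each individual estimate carries.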
That is one reason FTC economists Michael Vita and David Osinski argue that “Kwoka has drawn inferences and reached conclusions … that are unjustified by his data and his methods.” A subsequent exchange between Kwoka and Vita failed to resolve grave doubts about Kwoka’s widely publicized conclusions that mergers have been shown to frequently result in higher prices and lower quality, or that the FTC and DOJ have become laxer about merger enforcement in recent years. In any case, Kwoka’s opinion that merger guidelines should be tightened has nothing to do with Ip’s notion that “Facebook, Google, and Amazon need to be broken up or regulated as Standard Oil and AT&T once were.” Standard Oil’s dominance was not ended by breaking up the trust into Rockefeller-owned regional parts (three of which became Exxon-Mobil). It was ended because electricity displaced kerosene for home lighting, and most gasoline for cars soon came from newcomers in Texas and California. The suggestion that federal regulation was ever an alternative to competition is also misinformed. Regulating industries “like utilities” meant banning competition in airlines in the decades before

the industry’s 1978 deregulation, in telephones from 1913 to AT&T’s breakup in 1983, and in the Postal Service’s First-Class mail even today. Federal regulation meant new entrants were prohibited and cutting prices was a federal offense. IF IT’S PROFITABLE, IT MUST BE A MONOPOLY

Demsetz's 1973 survey of the evidence found that the Harvard School's "market concentration doctrine" could not demonstrate a causal connection between concentration ratios (such as the top four firms' share of a market) and profitability; Michael Salinger obtained similar results when he revisited the issue in 1990. Continued failures to find a reliable connection between higher profits and concentration led Kwoka and others to search instead for a connection between higher prices and market concentration. Craig Newmark's survey of these newer price-concentration studies found "the depth of their problems seems to be neither widely nor fully appreciated." He concluded that "those problems are serious enough that the price-concentration studies probably should be discarded just like the profit-concentration studies have been."

The latest reincarnation of the market concentration doctrine evades the repeated failures of profit-concentration studies by simply turning the issue upside down—asserting that high profits prove market concentration. In other words, high profits are assumed to be evidence of too much concentration, and concentration is in turn assumed to be evidence of a lack of competition, as in the 1950s and 1960s. The Economist cover story of March 26, 2016, titled "Too Much of a Good Thing," offered the subtitle: "Profits are too high. America needs a giant dose of competition." To make that point, the opening paragraph noted, "Last year America's airlines made $24 billion." That was bad timing, because airline profits fell to $13.5 billion in 2016. But it was also a bad example, because the most consistently profitable airlines are upstarts that began as small regional carriers: Southwest, JetBlue, and Alaska/Virgin.
In the Investopedia article "Why Airlines Aren't Profitable," Greg McFarlane notes that "from 2002 to 2011, the three largest surviving legacy airlines—American, United, and Delta—each filed for bankruptcy."

A glaring irony of using profitability as proof of monopoly is that the firm most often mentioned as a target of the populist antitrust campaign is Amazon, which is barely profitable. And Apple, the most profitable U.S. company, is not among The Economist's trio of Titans to be tamed. But inconsistency is no problem for antitrust populists, who seem equally comfortable arguing that low profits are also proof of monopoly. Khan claims that low operating profits prove Amazon is "choosing to price below cost." What low profits actually show is that Amazon has been plowing its cash flow back into capital expenditures such as cloud computing, a movie studio, a grocery chain, and innovative consumer electronics such as the Kindle and Echo.

A widely quoted 2016 Issue Brief from President Obama's


Council of Economic Advisers (CEA) includes a graph from then-chairman Jason Furman showing large recent gains in "returns on invested capital" among public nonfinancial firms, as calculated by the consulting firm McKinsey & Co. The Furman CEA claimed that this demonstrates a surge in "rents," wrongly defined as returns "in excess of historical standards." At McKinsey, however, Mikel Dodd and Werner Rehm explained that returns appear to be growing larger by their measure because invested capital as traditionally measured (plant and equipment) became smaller as the economy shifted from capital-intensive manufacturing to services and software. Traditionally defined physical capital also looks smaller because traditional accounting practices exclude or undervalue intangible capital such as research and development, design, training, and market branding through advertising. Accountants commonly count investments in intangibles as operating expenses, which makes invested capital look low and return on capital high.

Moreover, official government estimates do not confirm Furman's alleged surge in returns on invested capital. A December 2017 Bureau of Economic Analysis report notes that "the profitability of domestic nonfinancial corporations declined for a second year in 2016, but remains above the low point in 2009." The after-tax rate of return on capital for 14 major nonfinancial industries ranged from 6.3% in 2002 to 8.3% in 2012, falling to 7.6% in 2016.

The Economist, Galston and Hendrickson, and others point to a graph of pretax corporate profits rising to 11–12% of gross domestic product since 2010 as evidence of rapidly declining competition. Yet the most obvious of many problems with comparing such profits to GDP is that profits are increasingly global while GDP is domestic. Nearly 70% of Apple's recent profits were foreign, up from 40% a decade ago. Urooj Khan, Suresh Nallareddy, and Ethan Rouen find that the reason combined foreign and domestic profits grew faster than GDP is the widening gap between U.S. and foreign corporate tax rates before 2018, "incentivizing firms to invest these profits abroad or hold them as cash." Foreign profits tell us nothing about domestic competition. Neither Apple's high profits nor Amazon's low profits are evidence of monopoly.

Pearlstein wrote, "There is little debate that this cramped [Chicago School] view of antitrust law has resulted in an economy where two-thirds of all industries are more concentrated than they were 20 years ago, according to a study by President Barack Obama's Council of Economic Advisers, and many are dominated by three or four firms." That is not what the 2016 CEA Brief said. What it said was that the largest 50 firms (not "three or four") in 10 out of 13 "industries" (really sectors) had a larger share of sales in 2012 than in 1997. Pearlstein's "two-thirds of all" means 10 out of 13, but the United States has far more than 13 industries. In pointlessly broad sectors such as retailing, real estate, and finance, the top 50 firms had a slightly larger share of sales in 2012 than in 1997. The 50 largest firms in "retailing" accounted for 36.9% of sales in 2012, said the CEA, but that combines McDonald's, Kroger, Home Depot, and AT&T Wireless as if they were colluding competitors. Should we fear monopolistic price gouging simply because 50 firms account for a larger share of the nation's enormous number of retail stores, real estate brokers, or finance companies? Of course not.

In making these claims, the CEA and The Economist compared the quinquennial Census of Business data for 1997 and 2012. Galston and Hendrickson cite those two data points as evidence that "concentration is rising throughout the economy" and "competition across the economy is in decline." Note how they mingle the word "sectors" with the word "industries" in the following passage:

An Economist analysis of 893 industries identifies the market share held by the four largest firms within each. It finds that between 1997 and 2012, two-thirds of industries became more concentrated. During this period, the weighted average share of the top four firms in each sector rose from 26 percent to 32 percent…. Research led by MIT economist David Autor confirms this trend. Between 1982 and 2012, the top four firms in the six major sectors of the U.S. economy became steadily and significantly more concentrated. In the manufacturing sector, the sales concentration ratio of the top four firms increased from 38 percent to 43 percent. Retail trade saw its sales concentration double to 30 percent from 15 percent, wholesale trade a change from 22 percent to 28 percent, services a rise from 11 percent to 15 percent, finance a substantial increase from 24 percent to 35 percent, and utilities and transportation a bump from 29 percent to 37 percent.

The North American Industry Classification System (NAICS) uses two- or three-digit codes to designate broad categories of loosely related industries as "sectors" (retailing) or "subsectors" (restaurants). Four-digit codes designate broad groups of similar industries, and five- and six-digit codes designate industries. The cited paper by David Autor et al., like the CEA Brief, is based on useless two-digit sectors. Galston and Hendrickson propose a "structural presumption" against "excessive sectoral concentration." Yet economists have no theory or evidence to suggest that four-firm concentration ratios (or Herfindahl–Hirschman indexes) for entire sectors could possibly explain competition, pricing, profits, or anything else within relevant markets—which are usually local and include imports. Sectoral concentration ratios are ably challenged in Carl Shapiro's forthcoming paper "Antitrust in a Time of Populism," which Galston and Hendrickson mention but apparently did not appreciate.

The Economist's 893 "industries" are four-digit industry groups, which are more precise than Autor's "six major sectors" or the CEA's 13. Consider that the two-digit code "52" designates the category of "Finance and Insurance" firms, whereas the four-digit subcategory "5211" designates the Federal Reserve and "5232" designates the New York Stock Exchange—both of which are improbable targets for antitrust. The four-digit NAICS groups are still problematic: drug stores, grocery stores, and wholesale clubs have separate four-digit codes, but people buy drugs and groceries at all three. Wired, wireless, and satellite telecommunications are likewise artificially separated by four-digit NAICS codes. The Economist's concentration ratio of the top four firms in "full-service restaurants" rose from 8% in 1997 to 9% in 2012, notes Shapiro. Yet nobody would suggest the number of local restaurant choices is shrinking or that new ones aren't popping up constantly.
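A quick sketch makes the sector-versus-market point concrete. The market shares below are hypothetical, not Census figures; they illustrate how a national four-firm concentration ratio (CR4) can look alarming while the Herfindahl–Hirschman index (HHI), the measure merger review actually uses, stays far below the thresholds in the 2010 DOJ/FTC Horizontal Merger Guidelines.

```python
# Four-firm concentration ratio (CR4) and Herfindahl-Hirschman index (HHI).
# All market shares are hypothetical, expressed in percent.

def cr4(shares):
    """Sum of the four largest market shares."""
    return sum(sorted(shares, reverse=True)[:4])

def hhi(shares):
    """Sum of squared percentage shares; the 2010 merger guidelines treat
    under 1500 as unconcentrated and over 2500 as highly concentrated."""
    return sum(s * s for s in shares)

# A "national" restaurant sector: four big chains plus many small firms.
national = [12, 10, 8, 6] + [0.1] * 640   # shares sum to 100

print(cr4(national))          # 36 -- reads like "rising concentration"
print(round(hhi(national)))   # ~350 -- far below even the 1500 line

# A single town where four chains split the market evenly:
local = [25, 25, 25, 25]
print(hhi(local))             # 2500 -- the antitrust-relevant number is local
```

The same four firms that look unthreatening in the national tally can sit at the "highly concentrated" threshold in one town, which is why sector-level ratios say little about competition in markets that are inherently local.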
The Economist's national four-firm concentration ratios may suggest the biggest restaurant, supermarket, or hotel chains grew bigger, but that tells us nothing about competition in these inherently local markets.

Galston and Hendrickson write: "In the past two decades, the pace of concentration has been accelerating. The Fortune 500's revenue as a share of GDP has increased from 58 percent to 73 percent." Those figures are from a FiveThirtyEight post by Andrew Flowers, and "past two decades" means the period 1994–2013. Although Fortune also compares the Fortune 500's revenues to GDP (saying the share is "two thirds," not 73%), that is a senseless ratio. More and more of the 500's revenues are foreign sales, not domestic. Gross revenues also involve double counting, because the 500 firms buy from each other. And, as Flowers noted, "the ratio of Fortune 500 profits to all corporate profits shows … no real upward trend."

Galston and Hendrickson present a graph from the Census Bureau's Business Dynamics Statistics showing enterprise entry and exit rates. This is said to demonstrate "a breakdown in the competitive process [because] … U.S. startup formation rates have fallen dramatically over the last thirty years and … more are failing." The Census Bureau estimates that there were a total of 6,786,097 business establishments in the United States in 2015, so yearly variations in such a huge number can't be defined as changing concentration. Besides, the entry rate averaged 12.2% from 2002 to 2007, essentially unchanged from the 12.3% average over 1978–2015. The graph just shows that the Great Recession shrank the new business startup rate to 9.3% in 2010. The number of establishments in finance, insurance, and real estate fell from 487,868 in 2009 to 430,364 in 2011 because of the mortgage/housing crisis, not a breakdown in the competitive process.

USING ANTITRUST TO PREDICT AND MANAGE TECHNOLOGY

The old market concentration doctrine is now being used to divert antitrust zeal from Schumer’s mundane original targets—such as “cable bills and airline tickets”—to a select trio of high-tech companies (Ip’s Amazon, Facebook, and Google, or Lina Khan’s Amazon, Apple, and Google). But Facebook isn’t profitable because it has a large “share” of some finite market for social media apps. The Economist’s February 17, 2016 feature on “The Rise of Superstars” depicted Snapchat as a “fast-disappearing” app slaughtered by Facebook. Yet Snap’s user base doubled from 46 million to 94 million between the fourth quarters of 2014 and 2015, according to the statistics portal Statista, and doubled again to 187 million by late 2017. There is no market for social media: Facebook users are free to be users of Twitter, Snapchat, and as many other such sites as they like.

Khan’s wordiest arguments for antitrust suits against large tech companies are not about facts but about theories and predictions. She makes a plea for preemptive punishment based on omniscient futurism. “The current market is not always a good indication of competitive harm,” she writes. Antitrust enforcers “have to ask what the future market will look like.” Pearlstein likewise thinks that antitrust authorities should have authority to “block Amazon” from competing too effectively with UPS, Oracle, or Comcast in the future. How could antitrust enforcers’ predictions about what might or might not happen in the future be deemed a crime or cause for civil damages? If the law allowed courts to levy huge fines or break up companies on the basis of prosecutors’ predictions about the future, the potential for whimsical damages and political corruption would be almost limitless.

We have already experienced extremely costly federal (and European) antitrust cases based largely on incredible predictions about “what the future market will look like”—most obviously in the cases against IBM and Microsoft. IBM was the subject of 13 years of antitrust “investigation” (harassment) before the suit was finally dismissed “without merit” in 1982. Pearlstein imagines “it was the government’s aborted [botched?] prosecution of IBM … that made Microsoft possible.” But IBM’s decisions to offer three operating systems for the PC and to allow Microsoft to sell MS-DOS to Compaq had nothing to do with the government’s antitrust crusade against the firm. (The PC wasn’t even available until August 1981.) Instead, the case was about IBM’s dominance in data processing. That crusade was a well-funded project of Control Data, Honeywell, NCR, and Sperry Rand, competitors of IBM that hoped to do better in court than they had with customers. An article on the IBM case that I wrote for Reason in 1974 concluded:

There is an irreconcilable conflict between helping competition and helping competitors. Many firms are quietly making a lot of money in data processing by providing better products at lower prices. Others find it easier and more lucrative to sue—proving once again that antitrust laws seek victims without crimes.

The Microsoft case was another example of antitrust prosecutors trying to reengineer technology to make life easier for a successful firm’s rivals. “In May 1998,” notes Pearlstein, “U.S. attorneys general filed an antitrust suit against Microsoft, which lurks in the background of the current debate.” Microsoft was accused of extending its legal dominance in PCs to achieve a monopoly on internet browsers and “middleware” (e.g., media players, email clients, and instant messaging) that could supposedly serve as “alternative platforms” to Windows in some incomprehensible fashion. In reality, the internet itself has proven to be the alternative, and it is platform-independent. Online services don’t know or care which operating system you use to fill out tax returns or which media player you use to watch movies. And you don’t need Android devices to use Google Docs.

The government’s technologically illiterate case against Microsoft (heavily promoted by IBM, Intel, Sun, AOL, etc.) became a decade-long, ever-changing battle waged by prosecutors and judges who were unable even to imagine that Apple, Amazon, and Google could be competitive rivals of Microsoft in hardware, software, or services, or that cellphones and tablets could serve as handy computers.

The Microsoft settlement barred Microsoft from discouraging computer manufacturers from pre-installing non-Microsoft software on new computers, including buggy “bloatware.” But the browsers, search engines, and media players preloaded on Windows and Apple devices have been a non-issue because broadband made it easy to install any or all of them on PCs, tablets, and phones regardless of the operating system. Open-source VLC soon became a popular media player and open-source Firefox a popular browser. Microsoft shut down its MSN Messenger in 2014, and Windows’ share of smartphone operating systems (a technology that the DOJ considered irrelevant at the time of the Microsoft suit) recently fell to nearly zero. Internet tracker Statcounter estimates that Google Chrome has a 48.8% share of U.S. browser use, Apple’s Safari a 32% share, and Microsoft’s Edge and Internet Explorer combined a 9.1% share.

Khan urges that antitrust now shift its focus to Microsoft’s previously unnoticed rivals. “Google, Apple, and Amazon have created disruptive technologies that changed the world,” she writes. “But the opportunity to compete must remain open for new entrants and smaller competitors that want their chance to change the world.” Rather than offering evidence that new entrants are somehow excluded from markets supposedly dominated by Google, Apple, and Amazon, Pearlstein offered this summary of Khan’s theoretical anxieties:

Chicago antitrust theory is ill equipped to deal with high-tech industries, which naturally tend toward winner-take-all competition. In these, most of the expenses are in the form of upfront investments, such as software (think Apple and Microsoft), meaning that the cost of serving additional customers is close to zero…. What this “post-Chicago” economics shows is that in such industries, firms that jump into an early lead can gain such an overwhelming advantage that new rivals find it nearly impossible to enter the market. [My emphasis.]


The assertion that early entrants into high-tech can gain “such an overwhelming advantage that new rivals [find] it nearly impossible to enter the market” is often referred to as “network effects.” (See “Debunking the ‘Network Effects’ Bogeyman,” Winter 2017–2018.) Bruce Kobayashi and Timothy Muris call it the “possibility theorem.” Anything might be possible in theory, but the “post-Chicago” claim that early tech leaders can’t be challenged has been repeatedly proven false. Consider:

■■ In personal computers, Apple, Commodore, and Sinclair were first, followed by Apollo and the IBM PC in 1981, Osborne and Sun in 1982, Compaq in 1983, and Dell in 1984. Contrary to what trustbusters predicted, IBM gave up on this market in 2005 when it sold the whole ThinkPad business and brand to the Chinese firm Lenovo.

■■ Netscape had an overwhelming dominance in internet browsing in 1995, but that did not deter Opera and Internet Explorer from entering the market that year, nor Firefox in 2002, Safari in 2003, or Google Chrome in 2008.

■■ AOL was the dominant internet portal in 1993, but was challenged by Netscape in 1994, Yahoo in 1995, and later by Comcast, Google, Facebook, and many more.

■■ AltaVista, Lycos, Infoseek, HotBot, Excite, and Yahoo were meta-search engines that “jumped into an early lead,” yet they were soon trumped by Google, Ask, and Bing. Search Engine Watch reports that Google had a 72.5% share of global meta-search in mid-2016, followed by 10.4% for Bing and 7.8% for Yahoo. But meta-search is only part of a much larger universe that includes specialized vertical search engines such as OpenTable, TripAdvisor, Match, Yelp, Houzz, and Expedia, and also comparative shopping engines such as Amazon, eBay, PriceGrabber, NexTag, and Shopzilla.

■■ Palm, Nokia, and Motorola jumped into an early lead in personal digital assistants (PDAs) and cellphones, yet were shoved aside by BlackBerry, which in turn was shoved aside by Samsung and Apple’s iPhone. Symbian was the most widely used mobile operating system in the world until 2010, when it was overtaken by Android.

■■ iTunes jumped into the early lead in online music, supposedly creating nearly “predatory” competition for compact discs. But in 2011, Pandora (founded as Savage Beast Technologies) went public as a streaming service and Stockholm’s Spotify launched its U.S. service.

■■ Friendster, LinkedIn, and Myspace jumped into an early lead in social networking in 2002–2003, followed by Google’s surely invincible social network Orkut in January 2004. Yet Facebook was not afraid to jump into that “market” in 2004, nor were YouTube and Reddit in 2005, and Twitter in 2006, followed by Google+, Snapchat, Tumblr, Instagram, Pinterest, etc.

CONCLUSION

Khan would not only have antitrust czars prosecuting cases based on their technological predictions, but would have them “overseeing concentrations of power that risk precluding real competition.” This “structuralist” approach removes all annoying requirements for evidence that competition is impeded in any way. All that would be needed is a prosecutor’s perception that any concentration of undefinable “power” might someday risk some undefinable vision of “real competition” or otherwise harm some undefinable “public interest.”

Khan’s proposed carte blanche antitrust mandate is an invitation to “political and ideological mischief,” a former antitrust official told Pearlstein. The potential for abuse is obvious and dangerous, as Steven Salop and Carl Shapiro explain in a 2017 paper. President Trump threatened Jeff Bezos with “a huge antitrust problem” because Amazon owns the Washington Post, complaining that “he’s using that as a tool for political power against me.”

If the Democrats’ hoped-for stepped-up discretionary antitrust enforcement follows the advice of Pearlstein and the Open Markets Institute, it will add paralyzing uncertainty to business plans and decisions. This vision of unbridled antitrust activism is a recipe for judicial caprice. It would encourage interest group meddling in business planning and pricing, invite political corruption, and risk replacing the rule of law with the rule of lawyers.

READINGS

■■ “A Policy at Peace with Itself: Antitrust Remedies for Our Concentrated, Uncompetitive Economy,” by William A. Galston and Clara Hendrickson. Brookings Institution Report, January 5, 2018.
■■ “Amazon’s Antitrust Paradox,” by Lina Khan. Yale Law Journal 126: 710–805 (2017).
■■ “Antitrust for Fun and Profit,” by Alan Reynolds. Reason, April 1974.
■■ “Antitrust in a Time of Populism,” by Carl Shapiro. International Journal of Industrial Organization, forthcoming.
■■ “Antitrust Policy Relies More Heavily on Beliefs Rather than a Strong Consensus about Facts,” by Sam Peltzman. ProMarket (blog), April 7, 2017.
■■ “Benefits of Competition and Indicators of Market Power.” Council of Economic Advisers Issue Brief, April 2016.
■■ “Big Business Is Getting Bigger,” by Andrew Flowers. FiveThirtyEight, May 18, 2015.
■■ “Chicago, Post-Chicago, and Beyond: Time to Let Go of the 20th Century,” by Bruce H. Kobayashi and Timothy J. Muris. Antitrust Law Journal 78: 505–526 (2012).
■■ “Does Antitrust Policy Improve Consumer Welfare?” by Robert W. Crandall and Clifford Winston. Journal of Economic Perspectives 17(4): 3–26 (2003).
■■ “John Kwoka’s Mergers, Merger Control, and Remedies: A Critical Review,” by Michael Vita and F. David Osinski. Working paper, December 21, 2016.
■■ “Kwoka’s Mergers, Merger Control, and Remedies: Rejoinder to Kwoka,” by Michael Vita. Working paper, January 29, 2018.
■■ “Labor’s Share of GDP: Wrong Answers to a Wrong Question,” by Alan Reynolds. Cato@Liberty (blog), October 26, 2017.
■■ “Mergers, Merger Control, and Remedies: A Response to the FTC Critique,” by John Kwoka. American Antitrust Institute, March 2017.
■■ Mergers, Merger Control, and Remedies: A Retrospective Analysis of U.S. Policy, by John Kwoka. MIT Press, 2014.
■■ “Microsoft’s Appealing Case,” by Robert A. Levy and Alan Reynolds. Cato Institute Policy Analysis no. 385, November 9, 2000.
■■ Predatory Pricing in Antitrust Law and Economics: A Historical Perspective, by Nicola Giocoli. Routledge, 2014.
■■ “Price-Concentration Studies: There You Go Again,” by Craig M. Newmark. Presentation to the DOJ/FTC Merger Workshop, Concentration and Market Shares Panel, February 14, 2004.
■■ The Causes and Consequences of Antitrust: A Public Choice Perspective, by Fred S. McChesney and William F. Shughart. University of Chicago Press, 1995.
■■ “The Chicago School of Antitrust Analysis,” by Richard A. Posner. University of Pennsylvania Law Review 127: 925–948 (1979).
■■ “The Concentration–Margins Relationship Reconsidered,” by Michael Salinger. Brookings Papers on Economic Activity: Microeconomics, 1990.
■■ “The Fall of the Labor Share and the Rise of Superstar Firms,” by David Autor, David Dorn, Lawrence F. Katz, et al. Working paper, May 1, 2017.
■■ “The Market Concentration Doctrine,” by Harold Demsetz. AEI–Hoover Policy Study no. 7, August 1973.
■■ “The Role of Taxes in the Disconnect between Corporate Performance and Economic Growth,” by Urooj Khan, Suresh Nallareddy, and Ethan Rouen. Harvard Business School Working Paper 18-006, 2017.
■■ “What the Future of Antitrust Should Look Like,” by William A. Galston and Clara Hendrickson. Harvard Business Review, January 9, 2018.
■■ “Whither Antitrust Enforcement in the Trump Administration?” by Steven Salop and Carl Shapiro. The Antitrust Source, February 2017.
■■ “Why Airlines Aren’t Profitable,” by Greg McFarlane. Investopedia, March 17, 2014.



BANKING & FINANCE

Handicapping Financial Reform

Will President Trump and new Fed chair Jerome Powell share an ambitious vision of reform?



What are the key shortcomings in the current financial regulatory structure, and which reforms will be adopted to address them in the next two years? I admit that I don’t know the answer to that second, more important, question. But I can explain why the stakes are high and how successful reform might be achieved.


The point of financial regulation is to make the financial system healthy and strong so that it can promote growth, wealth creation, and stability in the real economy. From that perspective, recent regulation has been a flop. For example, many banks in North Carolina closed in the last decade—not just during the 2007–2008 financial crisis, but after the Dodd–Frank Act reforms that were intended to prevent a similar crisis in the future. More generally, we see a declining market share for small banks, a lack of entry into banking, persistently low market-to-book values for banks (though they are starting to improve), higher charges for customers (service fees are up 111% since Dodd–Frank), weak loan growth for small and medium-sized businesses during the recovery, millions more unbanked Americans, and declines in credit card accounts of about 15%. One particularly startling development: many large banks for the first time in history have refused deposits because they are too costly to maintain on their balance sheets.

CHARLES W. CALOMIRIS is the Henry Kaufman Professor of Financial Institutions at Columbia University’s Business School, director of the school’s Program for Financial Studies and its Initiative on the Future of Banking and Insurance, and a professor at Columbia’s School of International and Public Affairs.

Scores of academic studies have convincingly shown that these problems are attributable to regulatory policy. I am editing a special issue of the Journal of Financial Intermediation that contains eight new studies by academics and researchers within the Federal Reserve System and the Office of Financial Research. They show how regulation has produced these and other costs and unintended consequences for the financial system and the economy.

One important unintended consequence has been the growth of “shadow banking”: unregulated intermediaries that substitute for regulated banks. While it is not clear that shadow banking is a bad thing per se, its growth reduces the effectiveness of regulation because it removes financial activity from regulated intermediaries. And it is clear that the growth of shadow banking has been driven by the costs of new regulations. For example, high-risk credit card customers fled to non-bank providers of consumer credit when regulation of risk pricing prevented credit card banks from offering cards to risky customers. When the Financial Stability Oversight Council (FSOC)—a creation of Dodd–Frank—tried to prevent large banks from supplying “leveraged loans” in support of private equity deals, non-banks took over that business dollar-for-dollar, rendering the FSOC’s efforts futile. Such examples abound.

Is it possible to argue that Dodd–Frank reforms create benefits that justify these costs and other unintended consequences? Have we established new rules that will prevent destructive crises from occurring again? No. In fact, one of my primary themes is that the failings of regulation have not just been high costs; we also are not getting much in the way of benefits in exchange for those costs.
In particular, we have not solved the systemic risk problems we should have solved because we have failed to enact policies that will credibly reduce the chance of a recurrence of a major financial crisis. Two key regulatory shortcomings drove the 2007–2008 financial crisis: government subsidization of housing finance risk and inadequate prudential regulations (especially capital regulations that failed to keep banks far away from the insolvency point by maintaining adequate capital buffers). Despite the hundreds of major new regulations and the enormous new complexity and compliance costs, these two key problems have not been fixed.

It is true that banks currently fund themselves more by equity than before and they hold more cash assets than before. But those facts say little about the resiliency of the banking system. The relevant question is, the next time we have a recession and a major asset price correction, will banks reliably find themselves with adequate capital buffers and cash assets? In my view, the answer to that question remains no.



The Dodd–Frank Act required the development of new regulatory standards for mortgages. Those standards were the legislation’s only attempt to deal with the problem of destabilizing government subsidization of mortgage risk, which was at the heart of the financial crisis. Note that the government-sponsored enterprises (GSEs) Fannie Mae and Freddie Mac failed and were put into conservatorship in September 2008, but their status has not been changed by any reform since then.

[Photo: President Donald Trump and new Federal Reserve Board chairman Jerome Powell.]

Two new mortgage standards were envisioned by Dodd–Frank to keep mortgages from becoming too risky again. First, lenders issuing qualified mortgages (QM) would be given a “safe harbor” from liability under the Truth-in-Lending Act as amended by Dodd–Frank. This was meant both to discourage the origination of risky mortgages and to help less sophisticated consumers identify low-risk mortgages (as defined by regulators). Second, the “qualified residential mortgage” (QRM) was created as part of a broader rule on credit risk retention (also known as “skin in the game”). Credit risk retention was intended to discourage the securitizers of mortgage-backed securities (MBS) from misleading investors by including excessively risky mortgages in the asset pools backing the securities. It was supposed to do so by requiring securitizers to retain a significant unhedged interest in the credit risk related to the securities’ underlying assets, thereby giving securitizers a strong incentive to be mindful of the risks. It was also intended to benefit unsophisticated consumers by reducing the incentives for mortgage originators to offer excessively risky mortgages.

Dodd–Frank specified that securitizers retain at least 5% of the mortgage asset pool, but mortgages that fit the definition of a QRM were exempted from the 5% requirement. Further ensuring MBS securitizers’ ability to avoid retaining credit risk, all mortgages bought by the Federal Housing Administration (FHA) or by the housing GSEs were automatically considered QM- and QRM-compliant, no matter what their characteristics. The QM and QRM standards therefore created a huge opportunity for the FHA and the housing GSEs to dominate the mortgage market because only they could avoid the legal barriers and economic risks associated with purchasing mortgages that would not otherwise meet the QM or QRM standards.

As if the FHA/GSE exemption were not enough to neutralize any effect from the QM and QRM standards, the agencies tasked with setting these standards caved in to heavy lobbying by the so-called Coalition for Sensible Housing Policy, which consisted of housing industry groups, mortgage brokerage groups, and urban activist groups that were opposed to limiting government subsidies for mortgage risk. The process by which the debasement of the QM and QRM standards took place has been described by New York University political scientists Sanford Gordon and Howard Rosenthal as follows:

As rulemaking proceeded, the central policy issues boiled down to whether a down payment requirement would be included in the QRM standard and, to a lesser degree, the maximum debt-to-income ratios for borrowers. In the end, the regulators caved and aligned QRM with the more relaxed standards [the Consumer Financial Protection Bureau] had crafted for QM—eliminating the down payment requirement altogether and raising the debt-to-income ratio maximum to 43 percent.
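The mechanics at issue reduce to two simple checks. The following is a minimal sketch, using the 43% QM cap and the 5% retention floor described above; the loan and pool figures are hypothetical:

```python
def back_end_dti(monthly_debt_service, monthly_income):
    """Debt service-to-income ratio in percent; the QM rule caps it at 43%."""
    return 100.0 * monthly_debt_service / monthly_income

def required_retention(pool_value, all_loans_exempt):
    """Dodd-Frank risk retention: securitizers must keep at least 5% of the
    credit risk, unless every loan in the pool is QRM-compliant or FHA/GSE-backed,
    in which case nothing need be retained."""
    return 0.0 if all_loans_exempt else 0.05 * pool_value

print(back_end_dti(2_150, 5_000))              # 43.0 -- right at the QM ceiling
print(required_retention(100_000_000, False))  # 5000000.0 retained
print(required_retention(100_000_000, True))   # 0.0 -- the exemption at work
```

The last line is why the exemption gutted the rule: since FHA- and GSE-backed loans qualify automatically, a securitizer can assemble pools that trigger no retention at all.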

Even former congressman Barney Frank, the House sponsor of Dodd–Frank, ended up lamenting the undoing of credit risk retention and quality standards through the relaxed QM and QRM standards and the GSE exemption, which he described as “the loophole that ate the standard.”

Around the same time that the Coalition for Sensible Housing Policy was undermining the QM and QRM standards, then-president Barack Obama replaced Edward DeMarco, the prudent and courageous head of the GSEs’ regulator, the Federal Housing Finance Agency (FHFA), with former congressman Mel Watt. Immediately upon assuming authority, Watt reduced the down payment limit on GSE-eligible mortgages from 5% to 3%. The GSEs remain in conservatorship, and the combination of QM and QRM rules and exemptions, lax FHFA standards, relaxation of FHA mortgage insurance premia, and continuing government funding of the GSEs, along with the operations of the FHA and the Veterans Administration’s housing finance program, continues to ensure the government’s control of housing finance and heavy subsidization of housing finance risk.

The government’s renewed push for risky housing finance since 2013 has already resulted in an escalation of mortgage risk. At the end of July 2017, 32% of first-time buyers had debt service-to-income ratios in excess of the QM limit of 43%, up 8 percentage points from less than three years earlier. Fannie Mae, Freddie Mac, the FHA, and the VA all hold riskier mortgage portfolios than banks, and they account for about 96% of purchased mortgage volume.

CAPITAL REGULATION

What about capital regulation? Let me remind you that in December 2008, when it was kaput, Citigroup had a very high regulatory capital ratio of 12%, but at the same time Citigroup’s market value of equity relative to its market value of assets (MVE/MVA) was below 2%, reflecting the market perception that it was insolvent. Citi’s MVE/MVA had been 13% in early 2006, but it declined fairly steadily over a two-and-a-half-year period. By September 2008, Citi’s and several other banks’ perceived insolvency made it impossible for them to roll over their short-term debts, resulting in a systemic crisis. It wasn’t Lehman Brothers’ headline-generating collapse that caused the financial crisis, as is commonly assumed; Lehman was a match in a tinder box of high counterparty risk. If banks had maintained high MVE/MVA, Lehman’s failure would not have produced a systemic collapse.

There is no question that capital regulation of banks has become stricter since the crisis. But the new capital regulation has done nothing to prevent a recurrence of a collapse in the economic value of bank equity alongside the maintenance of the book value of equity. The doubling down on book-value capital regulation by U.S. regulators and the international Basel Committee ignores the problem and constitutes a strange attempt to make sure that every bank will be just as safe and sound as Citi was in December 2008. Requiring some amount of book value of equity relative to assets doesn’t work for several reasons: asset loss recognition by supervisors is often delayed on purpose (a practice known as forbearance); risk weighting of assets at the time of origination is manipulated by banks to exaggerate their capital ratios; and—most importantly—banks are service companies, not balance sheets: their economic value reflects forecastable changes in their cash flows, not their tangible net worth.
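The gap between the two measures is easy to state in arithmetic. The following is a minimal sketch; the round numbers are illustrative only, chosen to echo the 12%-versus-2% contrast above, and are not Citi’s actual balance-sheet figures:

```python
def book_capital_ratio(book_equity, book_assets):
    """Regulatory-style capital ratio built from balance-sheet (book) values."""
    return 100.0 * book_equity / book_assets

def market_capital_ratio(market_value_equity, market_value_assets):
    """MVE/MVA: the market's estimate of the real equity cushion."""
    return 100.0 * market_value_equity / market_value_assets

# Illustrative round numbers (in billions): a bank can report a 12% regulatory
# ratio while the market prices its equity cushion at only 2% of assets.
# (For simplicity the same asset figure serves as both book and market value.)
book_equity, assets, market_cap = 228.0, 1_900.0, 38.0
print(book_capital_ratio(book_equity, assets))   # 12.0
print(market_capital_ratio(market_cap, assets))  # 2.0
```

The book ratio looks comfortable even as the market ratio signals near-insolvency, which is the whole case against relying on book values alone.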
When Citi and other big banks’ stock prices plummeted between 2007 and 2009, for example, that reflected market perceptions not only of losses on tangible assets (such as MBS and loans), but also of reduced positive cash flows unrelated to assets on the books, without commensurate declines in expenses. Banks maintain branches to attract low-interest-paying deposits, but in a low interest rate environment in which all deposits pay nearly 0%, branches merely add cost, which is reflected in a hit to the market values of assets and equity. Similarly, reduced revenues from various fees (for example, for mortgage servicing) also contributed to the declining value of bank equity. Research that I have done with my Columbia Business School colleague Doron Nissim shows that forecastable changes in bank cash flows—often unrelated to changes in the values of tangible assets and liabilities—drive collapses in the market value of equity relative to the book value of equity (MVE/BVE). These changes in cash flows occurred during the crisis for understandable economic reasons, and it is likely that similar patterns will recur in a future downturn that coincides with major asset price declines. It follows that what is needed alongside book value regulation is to make it impossible for banks and bank regulators to once again stand by for two years pretending that economic value has not disappeared even when it obviously has.

Along with many other financial economists, including Richard Herring at Wharton and former SEC chief economist Mark Flannery, I have been arguing in support of preventing a recurrence of the recent crisis by linking prudential regulation to the economic value of bank equity. The basic idea is to create strong incentives for banks to maintain a meaningful equity buffer by creating an “equity thermostat” that prods them to raise equity in the market whenever a medium-term moving average of their MVE/MVA falls below a threshold, say 10%. This approach has other desirable features. It would permit us to reduce regulatory costs by simplifying prudential regulation in several respects. It would make regulation more transparent and accountable. And it would reward banks that are relatively efficient creators of economic value. In the past several years, some of the large U.S. banks have maintained MVE/BVE more than double their peers’, but prudential regulation does nothing to recognize or reward their higher levels of economic value creation and consequent stability.

What about the new forward-looking stress tests that large banks face under Dodd–Frank? Do they ensure that banks will maintain their true economic value of equity relative to assets?
No. Stress tests are largely an exercise in measuring prospective shocks to the values of tangible assets, and success is measured in terms of exiting the shock with a high book value of equity relative to assets.

What about Dodd–Frank’s Title II and its new resolution powers given to the Federal Deposit Insurance Corporation? Won’t those prevent disruptive failures of banks in the future, as advertised? No. Title II is not a workable or credible means of speedy resolution. Furthermore, it institutionalizes too-big-to-fail bailouts by providing a road map for how bailouts will occur and how they will be funded if orderly liquidation turns out to be infeasible, as it almost surely will.

In summary, the crisis taught us that we need to stop subsidizing risky mortgages and that we need to require banks to maintain significant capital ratios measured in economic—not just book—value. From the perspective of these lessons that should have been learned, Dodd–Frank gets an “F.” Its failure is not just the high compliance costs that it has produced, but also its not solving the two key problems it should have addressed.

SHOULD REFORM’S FAILURE BE A SURPRISE?

What have we learned from history about how to manage successful reforms? First and foremost, we have learned not to be surprised by failure. Failures of post-crisis reforms have been the rule. The United States has consistently been the most crisis-prone economy in the world for the past 200 years. Major banking crises have occurred 17 times in U.S. history, and in many cases those crises produced ambitious reforms intended to fix the purported problems that produced the crises. For example, major prudential legislation after the banking crises of the 1980s (in 1989 and 1991) promised to fix inadequacies in capital regulation, but failed to do so.

The explanation for the failure of reform is explored in my 2014 book with Stephen Haber, Fragile by Design. We develop a framework for thinking about what we call the Game of Bank Bargains, which explains how political actors shape financial regulatory outcomes and why winning political coalitions sometimes allow banking systems to be predictably unstable. We examine the histories of bank regulatory change in five countries, including the United States. We show that reforms fail because typically the same political coalition that created the regulations that produced a crisis remains in charge of fixing things after the crisis.

That is a discouraging insight. If political coalitions are blocking desirable reform, is it all hopeless? Where are we today? What is the realpolitik that we need to bear in mind when thinking about crafting reform? Below are three key aspects of the current political environment that are particularly relevant going forward.

Banking ain’t aluminum / Simplistic political theories of regulation tend to see regulated industries as the primary architects of their own regulation. The argument is simple: industry interests—say, aluminum producers—are willing to spend more time and money organizing themselves to influence regulation than consumers are willing to spend on such efforts. The effects on consumers of bad aluminum regulation are too diffuse for any of them to spend time and money lobbying on aluminum-related issues. So the industry controls its regulation. However, history shows that this is not the right model for banking regulation, especially in a populist democracy like the United States. Part of the winning political coalition that shapes U.S. banking regulation has always been some subset of bank borrowers. Unlike aluminum consumers, bank borrowers actively organize and lobby for regulations that favor them. Agricultural land owners did so in the 19th and early 20th centuries, and so have activist urban groups in the late 20th and early 21st centuries. Lesson #1: If a reformer ignores powerful borrower groups when crafting reforms, those reforms will fail.

Populism matters / Bank regulation usually is not on the minds of voters, but it is after a crisis. Post-crisis resentment of bankers was at a peak after 2008 and remains surprisingly strong. Lesson #2: It will not be possible to institute changes in regulation that are perceived by voters as soft on bankers without creating a significant political backlash.

36 / Regulation / SPRING 2018 BANKING & FINANCE

Congress and the president aren't the only decisionmakers / For better or worse, the rise of the administrative state over the past 130 years—and especially the increase in the discretionary power of financial regulators over the past two decades—has meant that most of the decisionmaking about regulation now occurs outside of legislation. Note that the Dodd–Frank legislation did not specify the prudential capital requirements that I was complaining about earlier; Dodd–Frank delegated the task of setting those rules to regulators, especially the Federal Reserve. The Consumer Financial Protection Bureau (CFPB), created by Dodd–Frank, does not rely on congressional appropriations to pay its expenses. It spends whatever it pleases and is funded by the Fed without limit. The Financial Stability Oversight Council (FSOC) can shut down any financial firm in the United States that it deems to be a threat to the financial system. The Federal Housing Finance Agency (FHFA) regulates the housing GSEs and can unilaterally set new binding standards for mortgage risk. These agencies and many others wield enormous power without having to seek approval for their rules or actions from Congress or the executive branch.

I hasten to point out that this is not the system of government the Founders intended. In Federalist No. 47, James Madison wrote, "The accumulation of all powers, legislative, executive, and judiciary, in the same hands … may justly be pronounced the very definition of tyranny." The powerful administrative agencies create complex regulations on the basis of vague legislative mandates, they have the power to enforce their rules, they often run the tribunals to which one must turn to protest their rules, and they also fund themselves with fees or other sources of income that do not require congressional approval. Furthermore, their power has grown recently, as they have increasingly employed discretionary "guidance" rather than formal rulemaking as the means of regulating the financial system.
Formal rulemaking must adhere to procedural standards for the consideration of comments and to the clear standards laid out in the 1946 Administrative Procedure Act. Guidance, in contrast, requires no rulemaking, solicits no comments, entails no hearings, avoids defining violations, specifies no procedures for ascertaining violations, and defines no penalties that will be applied for failure to heed the guidance. This affords regulators great flexibility. Regulatory guidance can be extremely vague, effectively allowing regulators to determine what violates compliance standards after the fact. This invites abuse of regulatory power. That is especially true in the banking system, where the law requires communications between regulators and banks to remain confidential; banks often aren't permitted to share the content of supervisors' comments with outsiders. Regulators employing guidance can avoid public statements explicitly requiring banks to do something, but can privately threaten banks with an array of instruments of torture that would have impressed Galileo, using secrecy to avoid accountability. This is not a hypothetical problem for financial regulation, but a clear and visible one exemplified by several important recent abuses of guidance.

There is, however, a positive side to the burgeoning power of administrative agencies over the financial system, which I take to be Lesson #3: In our current paralyzed political environment, where deep changes in regulation produced by legislation seem unlikely, changing the leadership and thinking at the administrative agencies offers an alternative to legislation as a means of achieving reforms. Note, however, that as the aforementioned Gordon–Rosenthal study of QM and QRM standards shows, administrative agencies are not immune to political pressure; any reforms instituted through administrative agencies must recognize the importance of the first two lessons (the power of organized borrower groups and the widespread public resentment of Wall Street).

Why do I say that legislative action to achieve meaningful financial reform is currently unlikely? Recall that the House Financial Services Committee drafted an ambitious bill—the CHOICE Act—that, while imperfect in many respects, identifies and addresses many of the fundamental challenges that need to be faced. It passed the House but was dead on arrival in the Senate, where 60 votes are needed for passage. There is no interest by Senate Democrats in most of the reforms identified in the CHOICE Act, and little interest by some Senate Republicans. The hard-working, thoughtful chairman of the House Financial Services Committee, Jeb Hensarling, is not seeking reelection this year. I take that as a pretty strong endorsement of my view that legislative reform of bank regulation is not currently on the menu.

Furthermore, the Trump administration's Treasury Department drafted its own much more modest proposals for banking reform. They focus mainly on reducing some regulatory costs, but this too seems to have insufficient support in the Senate, where it was attacked as a gift to Wall Street. A bill proposing a modest regulatory reform (raising the asset threshold for stress tests and other regulatory scrutiny from $50 billion to $250 billion) recently cleared the Senate Banking Committee. But the meager ambition of that bill confirms my view that deep regulatory reform is not feasible in the Senate.



In light of the three lessons about current political obstacles to reform, what are the prospects for reform over the next two years? If President Trump were interested in reform, what could he do, and through what means? I am optimistic about what can be accomplished if Trump and the new Fed chair, Jerome Powell, both share an ambitious vision of reform. Call this "contingent optimism." I don't know if they do share such a vision, but together they could accomplish a great deal.

The most important reasons for contingent optimism are the empty seats around the table at the Federal Reserve Board, as well as President Trump's recent appointment of Randal Quarles as the Fed governor with central responsibility over financial reform. Quarles is viewed by some as a bit of a go-along-to-get-along establishment Republican, but recently he expressed interest in a top-to-bottom review of regulation. If President Trump were to appoint others to the Fed and FDIC boards who support an agenda of reform, much could be achieved.

The Fed Board could simplify and reform liquidity and capital requirements, and make capital standards take into account the economic value of capital. The Fed could redesign stress tests for banks to be more transparent and more meaningfully based on cash flow performance. The Fed could move away from its reliance on guidance in favor of formal rulemaking, which would reduce regulatory risk. The Fed could institute a more systematic and transparent framework for monetary policy. The Fed, in alliance with the Treasury, could ensure that FSOC macro-prudential policies conform to a systematic framework. All of these changes would constitute an enormous improvement. Not only could Powell and new governors change the Fed directly through their leadership; Powell also wields substantial power through his appointments of senior staff at the Fed Board.
Powell could appoint senior reform-minded financial economists to head up the staff's efforts, and bring in a reform-minded legal scholar (someone like Harvard's Hal Scott) to rewrite rules. A shakeup of the Board staff that would support serious reforms would be crucial, given the importance of the staff in shaping the information the Fed governors receive and its role in the practical execution of Board policies.

The president could do more. The CFPB has been pursuing a deeply politicized and divisive policymaking strategy, crafted by its former head, Richard Cordray. Now that Cordray is out, Trump could improve CFPB policies by appointing a new head who would realize its important mission: informing consumers and protecting them from unfair practices. President Trump will also be able to replace Mel Watt at the FHFA in January 2019 with someone who would lower the loan-to-value limit on GSE mortgages back to 95%. That would rein in the mortgage risk explosion that began four years ago.

The political constraints I labeled before as the first two political lessons for reform, paradoxically, could be helpful by pushing reform to be more ambitious and balanced. I see the political energy coming from the populist resentment of banks and the power of the housing interests as potentially helpful drivers of regulation. The reason is simple: only a Grand Bargain that takes into account political interests can possibly succeed and lead to sustainable improvements. Changes in Fed or other financial regulatory policies that seek only to cut the costs of regulation will be hard to sustain in the current political environment. But an approach that cuts costs while also simplifying regulation and strengthening it in important ways to make capital regulation more credible could win support from quarters that otherwise would oppose it.

Similarly, even progress on housing policy through traditional legislative means may be possible if it is sufficiently ambitious. Legislation that seeks only to rein in mortgage risk subsidies (such as closing the GSEs) likely would not be feasible politically, but such actions would be more likely to succeed if they were combined with other policies that provide better means to the same ends—for example, means-tested down payment matching for low-income first-time homebuyers (a policy that works well outside the United States to promote homeownership without promoting mortgage risk).

In other words, I believe that the most successful, sustainable path for regulatory reform is one that doesn't just focus on deregulation or regulatory costs, but one that also seeks to strengthen some regulations. A broader reform agenda that seeks the right kind of bipartisan logrolling might work better than a narrow deregulating agenda. Perhaps I am an incorrigible optimist, but I believe that if the president and the new Fed chair are interested, there are important reform opportunities at hand.
I believe they would succeed best by presenting a reform agenda that strengthens capital regulation while simplifying it, that relies on pro-reform appointments at the Fed Board and other powerful administrative agencies to achieve most of their immediate goals, and that identifies new ideas for bipartisan legislation on housing reform that would take into account a broad range of constituencies. With the right vision and leadership from the top, much is possible.

READINGS

■ "Crisis-Related Shifts in the Market Valuation of Banking Activities," by Charles W. Calomiris and Doron Nissim. Journal of Financial Intermediation 23(3): 400–435 (July 2014).
■ Fragile by Design: The Political Origins of Banking Crises and Scarce Credit, by Charles W. Calomiris and Stephen H. Haber. Princeton University Press, 2015.
■ "How to Design a Contingent Convertible Debt Requirement That Helps Solve Our Too-Big-to-Fail Problem," by Charles W. Calomiris and Richard Herring. Journal of Applied Corporate Finance 25(2): 66–89 (Spring 2013).
■ "Political Actions by Private Interests: Mortgage Market Regulation in the Wake of Dodd–Frank," by Sanford Gordon and Howard Rosenthal. Working paper, May 25, 2016.
■ Reforming Financial Regulation after Dodd-Frank, by Charles W. Calomiris. Manhattan Institute, 2017.


Regulating Banks by Regulating Capital

Instead of trying to block banks' ability to make foolish decisions, regulators should require that they have ample capital to pay off their creditors.



Oscar Wilde once observed: "Moderation is a fatal thing. … Nothing succeeds like excess." Way too many regulators take this observation to heart. They seem to think that if a little bit of regulation is necessary, more must be better. But the inevitable fact is that while regulations may or may not create benefits, they come with considerable costs. And so a balancing of costs and benefits—a certain moderation—is an essential adjunct to the modern regulatory state.

Or maybe not. It turns out that one particular type of regulation may provide substantial benefits with almost no costs. For the past 10 years or so—at least since last decade's financial crisis—a number of respected experts in economics and finance have argued that simply requiring banks to maintain much higher levels of capital can solve many of the problems of systemic risk without imposing serious costs on the banks. In other words, they think we can trash most of the expensive, complex, and clunky rules that now govern banks and substitute a much simpler rule: "Rely on more capital and issue less debt." And by "more capital" they mean levels that—in the world of modern finance—seem like crazy, excessive, immoderate amounts.

If these academics are correct, this is a really big deal. Everyone agrees that our current system of bank regulation, headlined by the Dodd–Frank Act, is very, very costly. Almost no one believes that this system has done all that much to prevent future crises. Replacing Dodd–Frank with rules demanding that banks limit their debt and rely more on equity just might give us a much safer financial system at a much lower cost.

MICHAEL L. DAVIS is a senior lecturer in the O'Neil Center for Global Markets and Freedom, Cox School of Business, Southern Methodist University.

These ideas have now been advanced in proposed legislation. Specifically, the Financial Choice Act, already approved by the House and awaiting action in the Senate, allows banks the opportunity to avoid certain types of regulation by increasing their capital.

PROTECTING "FOOLS" AND THEIR MONEY

Figuring out whether this idea can really work is, admittedly, both technical and tedious. It is also important. The Great Recession cost the economy at least $22 trillion. Even if you don't think the failures in the financial sector "caused" the downturn, you must accept that financial failures made it much worse. Effective and efficient policies that prevent such failures in the future would be most welcome. So how can changing the rules on capital transform the messy world of bank regulation?

Bank capital / Capital is most easily measured by a borrower's "equity ratio," which is the proportion of the value of assets not owed to the people who loaned the borrower money. If you have a $200,000 house but you owe the bank $100,000, your equity ratio is 50%. In other words, capital is just the part of the business that can be claimed by the owners of the firm. It's what's left over after all the debt holders get paid. And, by corollary, the higher the ratio, the more confidence creditors will have that they will be repaid. This calculation is confusing for banks because there are all kinds of ways to count the value of a bank's assets.

By any measure, banks have ridiculously low equity ratios given the risks they face. Back in the 1800s, they tended to look like many other sorts of businesses, with equity ratios of about 50%. Beginning about 100 years ago, though, that ratio began to shrink. At the start of last decade's financial crisis, bank equity ratios averaged about 4%. They've gone up some since 2008, but most banks still have a ratio of 8% or less. By comparison, the equity ratio across all U.S. corporations averages around 64%. Even more telling, some of the most dynamic and successful firms in the country—firms like Apple and Google—have no debt at all, meaning their equity ratios are 100%.

Capital and financial decisions / Before the financial crisis, some banks and other financial institutions made some very foolish decisions, like assuming that housing prices would never stop rising and loading up on complex financial instruments they didn't understand. Unfortunately, requiring higher equity ratios won't magically stop such decisions. Foolishness (or, to speak more charitably, "optimism bias" or "herd mentality") is part of the human condition, even for bankers and finance wizards. However, increasing capital means that the right people—namely, bank shareholders and their top managers—will have to pay for such decisions. When some Wolf-of-Wall-Street wannabe goes to work for a bank trading derivatives he doesn't understand, a 4% equity ratio


can disappear very quickly. When that happens, the bank's debt holders—who also tend to be a clueless lot—take a hit. And when they don't get paid back as quickly as they expected, they can't in turn pay back their own debt holders. If things really get crazy, the whole system unravels until we get—well, exactly what we got in 2008: a frozen financial system followed by a miserable recession. But higher equity ratios would mean that the owners of the firm will be positioned to absorb a big hit without passing that hit on to their creditors and the rest of the financial sector and the broader economy.

Why protect debt holders? / Much of the public—meaning much of the consumer base that drives this economy—holds debt. If you have a bank account, you are a debt holder—the bank invests your deposits in order to earn a little interest. This is true even of your checking account; when you make a deposit, you're making the bank a loan—and not just any kind of loan, but a loan that you can demand to be repaid at a moment's notice. Did you do a credit check on your bank? Did you look at its financial statements for the past 10 years and carefully evaluate


the audit letter? Admit it: you have no idea what exactly your bank is doing with your money. For all you know, it's investing in grape jelly futures and doesn't have a chance of paying you back. But since your account is fully insured by the Federal Deposit Insurance Corporation, you don't worry about what the bank is doing with your money because you're guaranteed to get it back.

Deposit insurance is something almost everyone believes is right, just, and good. No one in the mainstream is seriously proposing that we run the system without the backstop of deposit insurance. Bank runs—the mass withdrawal of deposits by suddenly fearful depositors, which can shutter even the most responsible banks—are bad, and we can't trust the little depositors to do the kind of careful risk management that would prevent runs. (Most libertarians will tell you that such insurance doesn't need government support and that if the FDIC closed, private deposit insurance would be offered. They're probably right, but feel free to ignore them; everyone else does.)

The problem is that the government doesn't just see to it that the small-fry depositors and debt holders get their money back. It guarantees that pretty much all the debt holders will get their money back—even large financial entities that have the ability to do due diligence. Interestingly, almost no one in government will admit this. All the stern and serious people in the Treasury and Federal Reserve claim that if a bank starts to go under, the big-shot financiers who loaned the bank money will just have to go down with the ship. But when a crisis actually happens, they bail them out along with the rest of the debt holders. Every. Single. Time. This creates some very bad incentives for banks and other financial actors.
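The loss-sharing arithmetic behind these incentives can be made concrete with a toy calculation. The sketch below is my own illustration with hypothetical figures (the $100 balance sheet and the 10% asset-value shock are assumptions, not data from the article): losses hit shareholders first, and only the shortfall beyond the equity cushion lands on the debt holders.

```python
# Toy illustration (hypothetical figures): who absorbs a drop in the
# value of a bank's assets? Losses hit equity holders first; creditors
# are harmed only once the equity cushion is exhausted.

def loss_allocation(assets: float, equity_ratio: float, shock: float):
    """Split a loss of `shock` (a fraction of assets) between owners and creditors."""
    equity = assets * equity_ratio          # the owners' cushion
    loss = assets * shock                   # dollar size of the hit
    equity_loss = min(loss, equity)         # shareholders absorb first
    creditor_loss = loss - equity_loss      # creditors eat any shortfall
    return equity_loss, creditor_loss

assets = 100.0   # a stylized $100 of bank assets
shock = 0.10     # a 10% decline in asset values

# Pre-crisis-style 4% equity: most of the hit spills onto debt holders.
print(loss_allocation(assets, 0.04, shock))   # (4.0, 6.0)

# A 25% cushion: the same shock never reaches the debt holders.
print(loss_allocation(assets, 0.25, shock))   # (10.0, 0.0)
```

The number to watch is the second one in each pair: it is the loss that propagates to the rest of the financial system, and it drops to zero once the equity ratio exceeds the size of the shock.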
Not only do debt holders have no reason to monitor their banks, but the fact that their debt comes with either the explicit guarantee of deposit insurance or the implicit guarantee of a government bailout means that banks can borrow money very cheaply. Quite rationally, they want to load up on as much debt as they can because that lets them make more loans and thus earn more interest. And that is exactly what they do. Think for a moment about how weird this is. Alphabet, the parent company of Google, is a great company with solid earnings and fantastic growth prospects, yet it would probably pay more to borrow money than a broken-down bank in Atlantic City.

A COSTLY LECTURE

This behavior can’t be rectified with a stern lecture from officials and some government-mandated bank “stress tests.” But those are basically all that Dodd–Frank does. It lays out a long list of bank do’s and don’ts that stretches across some 2,300 pages of legislation and countless volumes of semi-official interpretations and guidances, but they provide little more protection against banker foolishness than the physical paper they’re written on. The legislation also requires banks to suffer all manner of examinations, including the dreaded “stress test.” Complying with all this is terribly expensive, but we don’t really know just how costly it is. Several of the largest banks report that

complex regulations require them to hire thousands of compliance officers at a cost of over $100 million per year. The smaller community banks, which often have the hardest time complying with the rules, have seen their market shares decline by over 12% since 2008. This decline isn’t all due to Dodd–Frank, but it is a contributor. WHY EQUITY REQUIREMENTS ARE BETTER

Instead of such a costly and questionable regulatory effort, an equity requirement would create a financial system in which these debt holders—or depositors, or whatever you want to call them—are at little risk of not being paid back. In an ideal world, no government intervention would be necessary: the small depositors would be protected by private deposit insurance, and the uninsured debt holders—the big guys—would act like grownups and be careful about where they put their money. But that's not the world we live in. In the real world, all the debt holders know government will see to it that they get their money back, so they care little about the financial risks they take with their money. This means the banks can borrow cheaply, and so they load up on debt.

This leaves us with two choices: we can deal with this risk-taking by micromanaging banks, imposing all sorts of regulations to make sure they almost never fail, or we can require banks to have high equity ratios so that when they do fail the debt holders will be unharmed. The "micromanage banks" approach is what we're doing now. It's why Dodd–Frank is so complicated and costly. The "make banks have high equity ratios" approach just implements principles from Finance 101: if a bank has a high equity ratio, the value of its assets can decline a lot before the debt holders are threatened. On the other hand, if the equity ratio remains at the low levels of the past century, relatively small problems will continue to fall on the shoulders of the debt holders. Capital would thus protect debt holders from the sad consequences of bad management and a crazy economy.

Hurting the banks? / Earlier in this article I claimed that forcing banks to hold more capital wouldn't really hurt them economically. This seems false; if banks hold more capital, they surely would have less money to lend out. And that, in turn, would hurt individuals and small businesses because they would be less able to borrow money, expand their consumption or inventories, purchase equipment or other long-lived goods, use higher-skilled labor, and so forth. Indeed, whenever someone proposes requiring banks to raise their equity ratios, bank officials recite this parade of horribles.

But these scary stories shouldn't dissuade us. Capital—primarily bank shareholders' money—is a source of funds, not a use of funds. Banks can and do treat the money they raise from the equity holders the same way they treat the money they get from the debt holders: they use some of it to pay the bills and lend out the rest. Depending on equity instead of debt didn't stop Apple from growing, and it won't stop a good bank from lending.

Bankers would likely object to this theory, claiming that if they have to raise equity ratios, it will be more expensive for them to raise money. But in an ideal world—a world where the financial markets operate without government interventions—changing the equity ratio wouldn't have any effect at all on the cost of capital. This insight comes from the famous work of Franco Modigliani and Merton Miller, who argued more than a half-century ago that a firm's value is determined by its earning power and the risk of its assets, and is independent of how it finances its investments or distributes its dividends.

Of course, banks don't operate in that world. The fundamental problem—and I can't stress this enough—is that government guarantees bank debt. That means banks are able to borrow from the debt holders at a rate much lower than they'd have to pay if they got the money from shareholders. And so, yes, if we force banks to raise more money from shareholders, this would make it more expensive for banks to raise money. But that's a good thing, for two reasons.

First, the government's guarantee is not free. It costs all of us to provide this backstop that reduces banks' and depositors' need to do due diligence. That cost probably is very high, though it's not easy to calculate. If banks raise funds by issuing less debt and more equity, the guarantee will be irrelevant and those costs will go away. Second, if banks have higher equity ratios, they don't need as much scrutiny and supervision. Remember, all the expensive bits of Dodd–Frank are there to protect the debt holders, not the owners of banks. If the debt holders are protected because the banks are holding lots of capital, banks don't need as many compliance officers and the government doesn't have to do as many of those expensive examinations.
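The Modigliani–Miller logic can be sketched numerically. In the frictionless world they describe, creditors and shareholders each price risk fairly, so shifting the funding mix between debt and equity leaves the blended cost of funds unchanged. The rates below are hypothetical and the code is my own illustration, not something from the article's sources.

```python
# Toy sketch of Modigliani-Miller: absent guarantees and other frictions,
# a firm's blended cost of funds (WACC) does not depend on leverage,
# because the required return on equity rises as the debt share grows.

def wacc(asset_return: float, debt_rate: float, debt_ratio: float) -> float:
    """Blended cost of funds when investors price leverage risk fairly."""
    equity_ratio = 1.0 - debt_ratio
    # MM Proposition II: r_E = r_A + (D/E) * (r_A - r_D)
    equity_rate = asset_return + (debt_ratio / equity_ratio) * (asset_return - debt_rate)
    return equity_ratio * equity_rate + debt_ratio * debt_rate

# Whether the bank funds itself with 96% debt, 70% debt, or none at all,
# the blended cost equals the 8% return its assets must earn.
for debt_share in (0.96, 0.70, 0.0):
    print(round(wacc(asset_return=0.08, debt_rate=0.03, debt_ratio=debt_share), 10))  # 0.08 each time
```

Guaranteed bank debt breaks this invariance: the guarantee holds `debt_rate` artificially low and insensitive to risk, so piling on debt really does look cheaper to the bank, with the difference paid by taxpayers rather than creditors.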
When you think about the cost of the financial system, you can’t just think about banks’ cost of capital. All of those regulatory costs—especially the cost of the implicit loan guarantees— matter. So yes, if banks have to maintain higher equity ratios, their cost of capital will go up. But the overall cost of running the system will go down—maybe by several billion dollars.

There is broad agreement among banking and finance scholars that forcing banks to increase equity ratios would make banks safer. But some people worry that this would lead banks to reduce their lending. In other words, they think the advocates of more capital don't fully appreciate how much this rule would increase a bank's cost of funds. These skeptics have some respectable theory and evidence to support their views, but they've also made some useful suggestions about how to deal with the problem. Specifically, they would prefer that the capital requirements be raised slowly so as to give banks a chance to adjust. They've also made some intriguing suggestions about encouraging banks to use some forms of convertible debt that are not guaranteed.

The proposed Financial Choice Act would also give banks a choice: they could raise lots of capital and thus not be bound by some of the current Dodd–Frank rules, or they could continue with the status quo: depend mostly on debt and then endure the tight scrutiny of the regulators. If it turns out that maintaining a higher equity ratio really does destroy a bank's ability to do business, then banks that chose that option could revert to the status quo and dealing with the regulators.

Congressional bungling / Of course, Congress could ultimately adopt a version of the Financial Choice Act that is loaded up with fine print and details that undermine the virtues of higher equity. For instance, lawmakers could set the required equity ratio too low. Right now they are talking about a ratio of 10%. There is some academic research suggesting that might be enough, but the evidence is ambiguous. Stanford finance professor Anat Admati and Max Planck Institute economist Martin Hellwig argue for equity ratios of 20–30%. Former University of Chicago finance professor and now Hoover Institution senior fellow and Cato adjunct scholar John Cochrane likes to say that capital requirements should be "enough so that it doesn't make any difference"—that is, so high that there's no worry whether they're high enough. Admati, Hellwig, and Cochrane are taking the Oscar Wilde stance: success through excess. Even if you don't want to go that far, we should err on the side of requiring a bit too much capital rather than too little. The one thing we can't have is an unregulated banking system that still doesn't have enough capital to prevent systemic risk.

People also worry that if the capital requirements are changed, some other financial institutions will arise to do some of what the banks have been doing. If these institutions don't have to obey the rules and they become big enough to matter—if they create "systemic risk"—then we still have a problem. We saw something


like this in the financial crisis with the so-called "shadow banks." This is a real concern, but it is also one we can solve. The term "shadow banks" is misleading; these institutions weren't really "in the shadows," because anybody who was paying attention to the world of finance knew they were there. If we see this kind of regulatory arbitrage—new arrangements developing to take advantage of loopholes in the law—we should be able to close the loophole.

If lawmakers make this policy choice, they also need to get the technical details right, most importantly the proper definition of bank capital. Accountants and financial types are particularly good at creating different ways to define what is and is not capital. With all regulation, complexity creates opportunities for evasion. Columbia University finance professor Charles Calomiris makes a persuasive case that the standards need to be based on market values of equity, not book value. (See "Handicapping Financial Reform," p. 32.) Bankers may not like that, but forcing them to respond to market signals seems like an obviously good idea.

Of course, it's hard to predict whether the proposed Financial Choice Act will become law and, if it does, in what form. But it is a serious, intelligent idea that is being debated and discussed by serious, intelligent people. And that—in the current political climate—is a surprisingly good thing.
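The book-versus-market distinction is easy to make concrete. In this toy example (hypothetical figures, my own illustration), a bank reports a comfortable 10% book equity ratio, but its shares trade at half of book value, implying the market thinks the assets are worth less and that the real cushion is much thinner.

```python
# Toy illustration (hypothetical figures): book vs. market capital.
# Book equity is an accounting residual; market equity is what investors
# will actually pay for the owners' claim today.

def equity_ratio(equity: float, assets: float) -> float:
    return equity / assets

book_assets, book_equity = 100.0, 10.0
print(equity_ratio(book_equity, book_assets))       # 0.1 (10% on paper)

# Shares trade at half of book value; the implied asset value falls by
# the same amount, so the market-value cushion is roughly half as thick.
market_equity = 5.0
implied_assets = book_assets - (book_equity - market_equity)
print(equity_ratio(market_equity, implied_assets))  # ~0.053
```

A capital standard keyed to the first number can look satisfied long after the second number is flashing red, which is the substance of the market-value argument.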



READINGS

■■ “A Simple Proposal to Recapitalize the U.S. Banking System,” by Kevin Dowd. Heritage Foundation, February 28, 2017.
■■ “An Analysis of ‘Substantially Heightened’ Capital Requirements on Large Financial Institutions,” by Anil K Kashyap, Jeremy C. Stein, and Samuel Hanson. Working paper, May 2010.
■■ “An Empirical Economic Assessment of the Costs and Benefits of Bank Capital in the US,” by Simon Firestone, Amy Lorenc, and Ben Ranish. Board of Governors of the Federal Reserve System, Divisions of Research & Statistics and Monetary Affairs, Finance and Economics Discussion Series 2017-034, March 31, 2017.
■■ “Bank Capital Regulation: Theory, Empirics, and Policy,” by Shekhar Aiyar, Charles Calomiris, and Tomasz Wieladek. IMF Economic Review 63(4): 955–983 (2015).
■■ “Benefits and Costs of Bank Capital,” by Jihad Dagher, Giovanni Dell’Ariccia, Luc Laeven, et al. International Monetary Fund Staff Discussion Note SDN/16/04, March 2016.
■■ “Equity Financed Banking in a Run-Free Financial System,” by John H. Cochrane. Presentation given to the Federal Reserve Bank of Minneapolis, May 2016.
■■ The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It, by Anat Admati and Martin Hellwig. Princeton University Press, 2013.
■■ “The Cost of Capital, Corporation Finance, and the Theory of Investment,” by Franco Modigliani and Merton Miller. American Economic Review 48(3): 261–297 (1958).
■■ “The Minnesota Plan to End Too Big To Fail,” published by the Federal Reserve Bank of Minneapolis, November 16, 2016.


44 / Regulation / SPRING 2018 I N T E L L E C T UA L P R O P E R T Y

The Patent System at a Crossroads
A looming Supreme Court decision could either continue or reverse the erosion of intellectual property rights.



For approximately four decades starting in the late 1930s, the U.S. patent system was in a state of distress. Courts widely invalidated patents, antitrust risk cast a cloud over licensing arrangements, and antitrust enforcers issued compulsory licensing orders covering some of the country’s largest patent portfolios. While our current patent system has not yet reached the nadir of that period, it stands at an inflection point at which it is strongly tending in that direction.

Over the past decade, virtually every branch of government has taken steps to degrade patent security. Starting with the Supreme Court’s 2006 decision in eBay v. MercExchange (which limits the availability of injunctive relief in patent litigation), approximately three-quarters of its patent-related decisions have disfavored patent holders. Lower courts have broadly interpreted eBay to effectively institute an implied working requirement that generally denies injunctive relief to non-practicing patent holders. Supreme Court decisions in 2012, 2013, and 2014 have cast doubt on the validity of thousands of patents in the software, biotechnology, and medical diagnostic industries. Other Court decisions in 2008, 2015, and 2017 have limited transactional latitude in setting the terms of intellectual property licensing relationships. Enactment of the Leahy–Smith America Invents Act of 2011, followed by its implementation at the Patent and Trademark Office (PTO), has substantially expanded opportunities to challenge a patent’s validity through the inter partes review proceeding. Finally, the antitrust agencies have taken actions and some federal courts have issued decisions that limit or effectively eliminate the ability of owners of “standard essential” patents—a key intellectual property asset in information technology markets—to pursue injunctions against infringers.

The sum-total effect of this policy shift has been an erosion of the property-like nature of the patent right and a creeping reversion toward the weak patent regime that prevailed during the postwar period. This under-discussed development takes center stage in Oil States Energy Services v. Greene’s Energy Group, a potential landmark case that the Supreme Court will decide before the end of its current term. The case involves a constitutional challenge to the administrative inter partes review proceeding on the ground that it deprives patent holders of their right to a judicial proceeding in Article III courts. Resolving that question turns on another fundamental question: is a patent a private property right or a “mere” statutory obligation? The choice between these two forks in the long and winding road of patent law (the former leading to the federal courts, the latter to PTO tribunals), and some reasonable side-paths in between, will have profound implications for the property-rights institutions that govern technology markets.

JONATHAN M. BARNETT is a professor at the University of Southern California, Gould School of Law. This article is based in part on his forthcoming article, “Has the Academy Led Patent Law Astray?” (Berkeley Technology Law Journal, 2018).

ADMINISTRATIVE VS. PROPERTY-RIGHTS VISIONS OF THE PATENT SYSTEM

The Oil States case raises a fascinating and infrequently encountered mix of patent and constitutional doctrines that has given rise to vigorous debate among legal scholars and other commentators. More broadly, however, Oil States presents a unique opportunity to reflect candidly and explicitly upon a fundamental distinction—and social choice—between two opposing visions of patent and innovation law and policy. That choice is whether the allocation of resources to innovation should be principally directed by market forces or by a regulatory and judicial apparatus.

The Court’s patent-skeptical decisions—and supporting actions by Congress, the PTO, and the antitrust agencies—have attenuated patents’ property-like character. In doing so, the decisions have eroded an institutional predicate for market allocation, thereby implicitly favoring the administrative vision of an innovation economy. The emerging consequence is a legal regime in which patents are widely treated as statutory entitlements perpetually subject to repeated refinement or outright invalidation by courts and regulators. That “law-heavy” approach makes for an awkward fit with an innovation economy that relies on bottom-up market signals to allocate resources to innovation, rather than top-down, government-driven directives, whether through judicial, regulatory, or legislative action. Naturally, firms and other market players cannot securely negotiate and enter into intellectual property–intensive transactions in an environment in which dissatisfied parties can easily petition a court, tribunal, or agency to revisit a fundamental assumption of the deal after the fact.

PROPERTY-RIGHTS EROSION IN INNOVATION MARKETS

Changes in property rules have implications for the structure and terms of market transactions. This basic principle can be seen in action as the weakening of the patent system incrementally shifts bargaining leverage from net technology producers to net technology users. Given little credible prospect of a “shutdown” injunction (the famous 1986 closure of Kodak’s instant camera division after having been found to infringe Polaroid’s patents now being a faint memory), it has become popular in certain business and policy circles to endorse so-called “efficient infringement” strategies. For a midstream or downstream firm that relies on upstream technology inputs, this amounts to a unilateral policy of “use now, pay later,” with the payment terms to be negotiated in the shadow of the emergent quasi-compulsory licensing regime in the courts and PTO tribunals. If the alleged infringer has greater litigation resources at its disposal than the patent owner (a scenario that is far from unusual), and an injunction is increasingly nothing more than a theoretical threat in a broad range of circumstances, then the licensing terms will be discounted accordingly.

Contrary to assertions that simply assume that adjudicative processes can ascertain what is deemed to be a “reasonable” licensing rate and therefore make the patentee whole, this substitution of judicial and regulatory arbiters for the impersonal arbiter constituted by market forces is almost certainly a one-way ticket toward resource misallocation. As rigidly administered pricing through adjudicative processes displaces adaptive market-based pricing through negotiated contracts, judicial and other governmental decisionmakers will increasingly lack any real-world reference point by which to calculate patent damages in order to mimic hypothetically negotiated royalties. (Data point to consider: the congressionally determined rate for the compulsory “mechanical” license for musical compositions under the Copyright Act was fixed at 2¢ from 1909 to 1977.) For patent holders that have no reasonable prospect of securing injunctive relief (an increasingly large portion of the total patentee population), this is a compulsory licensing system in all but name.

EMPIRICAL SCRUTINY OF THE PATENT “PARADE OF HORRIBLES”

Notwithstanding the adverse side-effects inherent to an administratively oriented patent regime, it may be the case that a robust patent regime imposes other social costs that are sufficiently great so as to support taking meaningful steps away from the property-rights baseline that is a tried-and-true foundation for well-functioning tangible goods markets. Conventional wisdom implies that there is something different about markets for intangible goods or, at least, about deploying property rights in those markets.

Judges, legislators, and regulators who have taken actions attenuating patent rights have widely relied on a governing narrative that attributes a parade of horribles to the reinvigorated protection of patents rooted in the 1980 enactment of the Bayh–Dole Act (which authorized recipients of federal research and development funding to seek patents on inventions developed using such funding) and the establishment in 1982 of the Court of Appeals for the Federal Circuit. Following that narrative, the PTO has issued too many and overly broad patents, the Federal Circuit has adopted doctrines that are overly protective of patents, and as a result the patent system has imposed an excessive tax on manufacturers, consumers, and follow-on innovators. If all that is true, it follows that patent strength and volume should be cut back accordingly.

Remarkably, this narrative has never been soundly demonstrated through empirical inquiry. Some elements have never even been rigorously tested until recently. A growing body of research has now undertaken that task, and the results cast substantial doubt on key assertions of the policy consensus. It is by now clear that empirical reality is more complex than has typically been assumed and, in some cases, is flatly at odds with the “problems” identified by the standard narrative.
“Patent troll” problem / Conventional wisdom describes non-practicing entities (NPEs) as firms that acquire patents but do not then use them to produce goods; rather, they license the patents to others and pursue legal action against those who decline the offer. Within this framework, NPEs are mere litigation shops that bring opportunistic suits to extract exorbitant settlements from operating businesses, purportedly imposing estimated costs on the order of tens of billions of dollars annually. (See “The Private and Social Costs of Patent Trolls,” Winter 2011–2012.) Yet recent research by Jay Kesan and David Schwartz has cast significant doubt on those estimates, which in any case are gross values that do not reflect the innovation gains that may arise directly or indirectly from the availability of patent enforcement intermediaries such as NPEs. Relatedly, Christopher Cotropia, Kesan, and Schwartz show that entities that are typically placed under the broad brush of the “non-practicing” label pursue a variety of business models, some of which include significant R&D, licensing, and sometimes even sales activities, or they deliver monetization pathways to individual inventors and small firms that may be unable to do so independently.

“Patent thicket” problem / Conventional wisdom says that patent-intensive markets get stuck in a “thicket” of conflicting legal claims, again raising prices and delaying innovation. Yet repeated surveys of biomedical researchers by John Walsh, Wesley Cohen, and others find that, since the onset of patenting in biotechnology and related fields starting in the early 1980s, these concerns have yet to materialize. My research has shown that, over a century running through the present, patent-intensive markets are consistently adept at anticipating thickets and then forming patent pools and cross-licensing structures that preempt or mitigate them. (The only exception to this pattern arose during the postwar period when antitrust law overzealously imposed a de facto prohibition on patent pools.) Remarkably, I found that these transactional capacities are especially well-developed in the multi-component information technology markets in which conventional wisdom anticipates that patent roadblocks would be most severe.

“Bad patents” problem / Conventional wisdom says that the PTO grants patents at unusually high rates (a widely repeated but now rebutted figure is 97%), implying that applicants face nominal hurdles in obtaining patent protection and casting doubt on the average quality of issued patents, especially as compared to other major patent offices. Yet recent research by Ron Katznelson and, separately, Michael Carley, Deepak Hegde, and Alan Marco shows that these assertions relied on simplified methodologies that overlook the unique complexities of the PTO examination process (due in part to applicants’ ability to file new, related applications known as “continuations”). Revised estimates using more refined methodologies and expanded datasets show substantially lower grant rates that significantly diminish concerns over a flood of “bad” patents. Strikingly, Carley, Hegde, and Marco find that, based on multiple measures, grant rates at the PTO declined between 1996 and 2005, contrary to the widely accepted narrative that the PTO has become excessively lenient toward patent applicants.

“Royalty stacking” problem / Conventional wisdom says that thousands of patents in the smartphone and other consumer electronics markets have burdened manufacturers with a heavy “royalty stack,” purportedly imposing onerous royalty rates that threaten to increase consumer prices or delay innovation. Related claims state that holders of critical standard-essential patents in these markets burden manufacturers and assemblers with exorbitant licensing demands—often estimated to reach into double-digit figures per device—that similarly threaten to inflate end-user prices and slow down innovation. My research shows that those assertions overlook evidence that major device manufacturers have typically negotiated cross-licensing offsets that substantially reduce any such royalty burden. In fact, recent empirical studies show that the estimated total percentage royalty in the smartphone market is in the low to mid single digits on a per-device basis. Relatedly, Alexander Galetovic, Stephen Haber, and Ross Levine show that, adjusted for quality, the prices of consumer electronics reliant on standard-essential patents have consistently declined since at least the late 1990s.

The accumulated and growing body of contrarian evidence poses a stiff challenge to conventional assumptions that have undergirded government actions and statements targeting property-like attributes of the patent system. Contrary to popular belief, there is no firm factual basis to confidently assert that patent litigants are typically bringing opportunistic suits, that the patent system is regularly issuing low-quality patents, that smartphone and other consumer electronics markets are threatened with onerous royalty burdens, or that technology markets are stuck in a morass of patent claims that will frustrate innovation. If these “problems” are far less significant than had been thought to be the case, then much of “patent reform” starts looking like a solution in search of a problem.

But that is not the only failing of the prevailing consensus. The conventional narrative not only overstates the vices of a strong property-like patent system, but largely neglects its virtues. Those virtues extend substantially beyond the standard incentive rationale behind intellectual property rights. That standard rationale focuses on the role of patents in motivating invention while overlooking the role of patents in facilitating and structuring the follow-on commercialization process leading to market release.
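The continuation-counting point in the grant-rate debate discussed above can be made concrete with a toy calculation. All counts below are hypothetical, and the adjustment shown is a deliberate simplification of the methodologies at issue, not the exact method of any cited study:

```python
# Toy illustration (hypothetical counts) of why ignoring continuation
# filings can inflate apparent PTO grant rates. A "continuation" restarts
# examination of the same invention under a new application number.

def naive_grant_rate(grants: int, abandonments: int, continuations: int) -> float:
    """Simplified version of the older approach: abandonments that were
    refiled as continuations are netted out of the disposal count,
    pushing the apparent rate toward 100%."""
    return grants / (grants + abandonments - continuations)

def adjusted_grant_rate(grants: int, abandonments: int) -> float:
    """Count every disposed application once: granted or abandoned."""
    return grants / (grants + abandonments)

# 100 disposed applications: 60 granted, 40 abandoned, of which 35
# were refiled as continuations.
assert round(naive_grant_rate(60, 40, 35), 2) == 0.92   # looks near-universal
assert adjusted_grant_rate(60, 40) == 0.6               # the underlying rate
```

The more abandonments are refiled as continuations, the more the naive figure overstates how easy it is to get a patent, which is the core of the critique of the 97% claim.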

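Likewise, the gap between headline royalty demands and net royalties after cross-licensing offsets, noted in the royalty-stacking discussion above, can be sketched numerically. The per-licensor rates and the offset share below are hypothetical, chosen only to show how a double-digit nominal stack can net out to low single digits:

```python
# Hypothetical sketch of a nominal "royalty stack" vs. the net royalty
# after cross-licensing offsets. All rates are illustrative, not
# measured values from any study.

nominal_demands = [0.05, 0.04, 0.03, 0.02]   # headline per-licensor rates

gross_stack = sum(nominal_demands)            # the double-digit headline figure
# Suppose cross-licensing offsets forgive three-quarters of the nominal total.
offset_share = 0.75
net_stack = gross_stack * (1 - offset_share)

assert abs(gross_stack - 0.14) < 1e-9         # 14% of device price, nominally
assert abs(net_stack - 0.035) < 1e-9          # low single digits after offsets
```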

/ Regulation / 47

Patents play two key functions in that process. First, a strong patent system supplies a robust revenue mechanism for idea-rich but capital-poor innovators who would otherwise have difficulty protecting their innovations against second-movers. Business history shows that the large incumbent is often the second-mover, deploying its formidable financing, production, and distribution capacities to outmatch the entrepreneur-innovator who came up with the bright idea in the first place. Second, a strong patent system provides a reliable legal infrastructure for making markets in intellectual assets. Secure patents enable a scientist-founded biotech startup to attract capital from investors and negotiate partnerships with large pharmaceutical firms that can undertake the “heavy lifting” required to deliver new therapies to the medical marketplace. Secure patents enable a chip-design startup to negotiate relationships with independent chip manufacturers and bypass the capital requirements that would otherwise frustrate entry into a market once dominated by large, integrated firms. In these cases, patents are hardly a tax that stunts innovation and hurts consumers; rather, they are a tool that enables entrants to challenge incumbents.

The patent system stands at a historical crossroads between the weak system of the postwar economy and the strong system that emerged in the early 1980s. Both the head of the U.S. Justice Department’s Antitrust Division, Assistant Attorney General Makan Delrahim, and the acting chairman of the Federal Trade Commission, Commissioner Maureen Ohlhausen, have separately called for revisiting recent patent-skeptical policies. These statements echo empirical work that has raised significant grounds to rethink conventional understandings of the real-world effects of the patent system.

At this important juncture, the issues raised by Oil States illustrate the fundamental choice between a bottom-up, property-based or top-down, administratively oriented approach toward patent and innovation policy. That choice is embodied by Oil States but, whatever the Court’s decision in this particular case, it will continue to drive debates over the direction of patent and innovation policy. In weighing that choice in different contexts, it will be worthwhile for policymakers and commentators to bear in mind the error committed by several decades of pre-Chicago antitrust jurisprudence, which repeatedly protected the interests of particular competitors at the expense of competition in general. Only the latter objective is consistent with consumer welfare from anything other than an extremely short-term perspective.

Eroding patent security delivers an immediate gain by reducing the input costs of integrated manufacturers, platform firms, and other “implementers” located at midstream and downstream points on the technology supply chain. Unsurprisingly, these stakeholders have mostly (and successfully) advocated for curtailing patent strength. Depending on competitive pressures, heeding these constituencies’ calls for “patent reform” may also deliver short-term price reductions for consumers. (If reducing patent strength only reduces input costs, then it engineers a difficult-to-justify wealth transfer from upstream innovators to downstream implementers.) But this runs the risk of making a myopic social choice that forfeits future “macro” growth for present “micro” cost-savings. Replacing the wired telephone with a smartphone is the social goal of patent law—not taking a few cents off the existing wired telephone.

Perhaps of greatest concern, substantially diluting the property-like attributes of patents endangers the viability of upstream R&D-intensive firms that often deliver the most dramatic innovations but require a secure intellectual property portfolio in order to monetize those innovations through commercialization relationships. As patent security falters, innovation is prone to turn inward as innovator-entrepreneurs retreat to the shelter of platform-based firms and other large integrated enterprises that can wield scale and scope to internalize returns from new technologies. That hardly seems like the recipe for an entrepreneurial innovation economy in the 21st century.

READINGS

■■ “A New Dataset on Mobile Phone License Royalties,” by Alexander Galetovic, Stephen Haber, and Lew Zaretski. Hoover Institution Working Group on Intellectual Property, Innovation, and Prosperity, Stanford University, Working Paper No. 16011, Aug. 1, 2017.
■■ “An Empirical Examination of Patent Holdup,” by Alexander Galetovic, Stephen Haber, and Ross Levine. Journal of Competition Law and Economics 11(3): 549–578 (2015).
■■ “Analyzing the Role of Non-Practicing Entities in the Patent System,” by David L. Schwartz and Jay P. Kesan. Cornell Law Review 99(2): 425–456 (2014).
■■ “Bad Science in Search of ‘Bad’ Patents,” by Ron D. Katznelson. Federal Circuit Bar Journal 17(1): 1–8 (2007).
■■ “Don’t Fix What Isn’t Broken: The Extraordinary Record of Innovation and Success in the Cellular Industry under Existing Licensing Practices,” by Keith Mallinson. George Mason Law Review 23(4): 967–1006 (2016).
■■ “Effects of Research Tool Patents and Licensing on Biomedical Innovation,” by John P. Walsh, Ashish Arora, and Wesley M. Cohen. In Patents in the Knowledge-Based Economy, edited by Wesley M. Cohen and Stephen A. Merrill, National Academies Press, 2003.
■■ “Patent Assertion Entities under the Microscope: An Empirical Investigation of Patent Holders as Litigants,” by Christopher A. Cotropia, Jay P. Kesan, and David L. Schwartz. Minnesota Law Review 99(2): 649–703 (2014).
■■ “Patent Rights in a Climate of Intellectual Property Skepticism,” by Maureen K. Ohlhausen. Harvard Journal of Law & Technology 30(1): 1–51 (2016).
■■ “Take It to the Limit: Respecting Innovation Incentives in the Application of Antitrust Law,” by Makan Delrahim, Antitrust Division, U.S. Department of Justice. Remarks as prepared for delivery at USC Gould School of Law, November 10, 2017.
■■ “The Anti-Commons Revisited,” by Jonathan M. Barnett. Harvard Journal of Law & Technology 29(1): 127–203 (2015).
■■ “What Aggregate Royalty Do Manufacturers of Mobile Phones Pay to License Standard-Essential Patents?” by J. Gregory Sidak. The Criterion Journal of Innovation 1: 701–719 (2016).
■■ “What Is the Probability of Receiving a U.S. Patent?” by Michael Carley, Deepak Hegde, and Alan Marco. Yale Journal of Law & Technology 17(1): 203–223 (2014).


Miles to Go Before We Sleep
Oil States is a fight over the use of administrative tribunals, not intellectual property rights.



Patent wonks are ever-vigilant for a high- (or low-) water mark. For the past five years, the phrase “the pendulum is swinging back” has been thrown around often by these folks. It has been the title of talks, appeared in articles, and ended endless e-mails; one policy panelist even brought an actual pendulum onstage. It’s reliable policyspeak—if you can’t place some new decision you disagree with in context or you lack something meaningful to contribute, just vaguely allude to some push–pull pendulum dialectic and suggest that whichever policy direction you disfavor has reached an inflection point, or a nadir, or a high-water mark—pick your metaphor.

Jonathan Barnett employs a similar metaphor in his article, “The Patent System at a Crossroads.” Though we don’t yet know the Supreme Court’s decision in Oil States Energy Services v. Greene’s Energy Group, he believes the decision may be a turn—or swing—away from the current course of patent reform—a reversal of his narrative’s sea change of coordinated judicial, congressional, administrative, executive, and market-based decisions. Thus, he ties together 20 years’ worth of changes in patent law with one neat bow and claims that Oil States puts us at a crossroads related to them all. He sees patent policy as a zero-sum battle between forces that want “strong patents” and those that demand “weak patents,” but things are not that simple. He and others seek to cast Oil States as some watershed moment in an inexorable march against the fundamentally good, strong patent. Doing so overstates patent law’s importance. Oil States isn’t about patents—it is the opening salvo in what promises to be a lengthy battle at the reconstituted Supreme Court between those who would seek to curtail the administrative state and those who would like to preserve its status quo.

JONATHAN STROUD is chief intellectual property counsel at Unified Patents Inc., which advocates reducing non-practicing entity assertions in specific technology areas.

Four voices on the Court have made no bones about their defense of the old precedent and old system, most clearly in their defense of the many administrative law “deferences”—the Chevron, Auer, Skidmore, and other related doctrines. On the other side of the debate, new Associate Justice Neil Gorsuch has wasted no time planting flags, adopting a hard line against doctrines meant to preserve the independence of the “fourth branch” of government, the administrative state. He was critical of some key rulings in his confirmation hearings and in judicial opinions, and his writing and comments from the bench to date show those to be no flukes.

One focus of that criticism is the use of administrative schemes, tribunals, or other types of decisionmaking as an alternative to Article III courts. Examples include the “Vaccine Court” administered by special masters within the U.S. Court of Federal Claims, the September 11th Victim Compensation Fund, Workers’ Compensation, the Tax Court, the “Hatch–Waxman” Act and “Biosimilars” Act schemes, the bankruptcy courts, and other federal tribunals, schemes, and decisionmakers empowered by Congress. They are meant to be much more efficient than the adversarial system of the Article III courts in disposing of certain matters (much to the chagrin of counsel).

Enter, curiously, Oil States. There are a few very old Supreme Court cases, like McCormick, that taken out of context suggest that the federal grant of a patent must be properly adjudicated by (and only by) an Article III court. Would that it were so. But it’s complicated, as it so often is, and the quotes Professor Barnett uses don’t represent a sustained set of rulings, scholarship, or thinking the way he suggests. It’s a hope tied post hoc to an unwieldy wagon, as he reveals with his vague, dreamy talk of “crossroads” and “erosion,” as he attempts to paint Oil States as emblematic of opposition to patent reform.
(The cert grant did expose a noticeable void in the scholarship—one arguing that patents are an absolute property right—a gap the academy is now rushing to fill.)


In doing so, he has overstated the question granted. This is a case about using administrative tribunals to adjudicate rights arising at common law. It doesn’t directly bear on whether the right itself is theoretically private property; a temporary, tightly regulated federal license; or something in between. It’s about the nature of the tribunal.

By way of explanation, the Supreme Court has a doctrine whereby “public rights” can be adjudicated by tribunals, whereas “private rights” that have sprung from the common law cannot be removed involuntarily from Article III courts (parties can, of course, resolve their differences). It is the doctrine best embodied in the concurrence in Murray’s Lessee v. Hoboken Land & Improvement Co. and the concurrence in Northern Pipeline: a private claim arising in tort, equity, or admiralty cannot be withdrawn from the courts. It most recently arose in the form of a 2011 counterclaim in a bankruptcy court proceeding, and it makes sense: you do not want to, and cannot, deprive an Article III court of a common-law claim for damages. That was the 5–4 ruling in Stern v. Marshall, authored by Chief Justice John Roberts (with Antonin Scalia concurring and Stephen Breyer authoring the dissent). It was and remains a close fight between two competing factions of the Court over whose test governs the analysis.

So the appeal to this newly constituted Court should now, in hindsight, seem obvious. The terminology, unfortunately, is confusing enough that scholars and amici curiae soon zeroed in on patent grants as property—whether arguing they are “private property” or statutory grants given the “attributes of personal property,” as the 1952 statute says. That debate missed the point, though it did play to Gorsuch’s sensibilities. In their view, 200+ years of steady, quiet administrative patent grants, post-grant revisions, reissues, reexaminations, and other administrative actions by bureaucrats administering a complex system of hundreds of thousands of limited monopolies could be upended in part by a previously undiscovered fundamental right of our polity, buried deep within the Constitution, only now springing to life and importance.

THE IMPORTANCE OF PATENTS

I will respond to Professor Barnett’s attempt to sow doubt in multiple areas of well-established, empirically supported observation. But let’s start with something his article lacks: context.

In 1984 the U.S. Patent and Trademark Office (PTO) issued just over 50,000 U.S. utility patents. By 2014 that number was 300,000, and by 2017 it had climbed above 320,000. In the past five years the number of utility patent applications has steadily risen by tens of thousands of applications a year—from 565,566 in 2012 to 650,411 in 2017. So at least inventors still find enough value in the system to apply for and earn patents at an astonishing, historic rate, with no end to the rise in sight. (Accordingly, the PTO’s budget has risen from just over $1 billion in 2001 to close to $3.5 billion this year.) This interest is at odds with Barnett’s claim of a substantially “eroded” “property-like nature of the patent right.” If patents were of such little value, why would so many be paying so much for them?
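The application figures cited above imply steady growth; a quick calculation using only the two endpoint figures in the text:

```python
# Growth in U.S. utility patent applications, computed from the two
# endpoint figures cited in the text (2012 and 2017).
utility_apps = {2012: 565_566, 2017: 650_411}

years = 2017 - 2012
avg_annual_increase = (utility_apps[2017] - utility_apps[2012]) / years
cagr = (utility_apps[2017] / utility_apps[2012]) ** (1 / years) - 1

assert avg_annual_increase == 16_969.0   # about 17,000 more applications a year
assert 0.028 < cagr < 0.029              # roughly 2.8% compound annual growth
```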


He compares today to an earlier “nadir” in patenting from 1930 to 1970, when he says district courts were invalidating over 70% of patents. Yet from 1930 to 1970, American industry grew mightily, making us the richest nation in the world, with the most innovative companies across industries, from pharmaceuticals and medical devices to tractors and automobiles. Or consider the past five years—during what he calls a substantial “weakening” of the patent system—when the stock market has steadily risen, we have emerged from a recession, growth is positive, and unemployment has fallen to almost 4%. I am still waiting for the systemic death knell we keep being warned about to toll.

I am not suggesting, as Barnett does, that correlation equals causation. But the success of the U.S. economy during what he suggests are the weakest “nadirs” of the patent system raises at least two follow-up questions: just what do he and others mean when they advocate for strong patents generally; and if universally strong patents are a good thing, why, and to whom?

QUESTION 1: WHAT DO COMMENTERS MEAN BY “STRONG PATENTS”?

By strong patents, Barnett seems to mean making it easier to use any patent to obtain an injunction, win money damages, extract licensing fees, and withstand legal scrutiny, regardless of merit. He means making every patent more easily enforceable. He can’t mean more patents and thus more patent protection—that’s already rising. He also can’t mean pendency of patent applications or the cost of patent prosecution—both are falling. And he can’t mean the incidence of patent litigation—that, too, has seen historic highs in the past five years. It’s likely he means patent valuation, which varies with what each patent owner can demand from others. Here, he’s on to something, in that the 2011 America Invents Act (AIA) has—without introducing new substantive legal grounds of patentability—made it cheaper and easier to apply preexisting legal requirements. Before the AIA, challenging even the most improvidently granted patent cost at least $300,000 in legal fees and took years. Thus, even the most worthless asset still had an inherent floor value untethered from merit—one based solely on legal leverage. Many paid rather than fight; many still do. At bottom, the AIA has done exactly what Congress intended, and what Barnett and others don’t like is just that: less litigation, costing less money, resolved more quickly, with leverage reduced to around $100,000—a fair approximation of the legal fees associated with the new process. When he calls for stronger patents, he means giving all patents a higher valuation, one based on how easy it is to enforce them, regardless of merit.

QUESTION 2: ARE EASILY ENFORCEABLE PATENTS GOOD FOR THE U.S. ECONOMY AS A WHOLE?

Is there empirical evidence that, outside of the U.S. Food and Drug Administration framework and the Bayh–Dole Act established by Congress, easily enforceable patents are better for the American economy as a whole? Without question, a pharmaceutical company’s patents, publicly listed in the FDA’s Orange Book, are essential to its millions or billions of dollars in revenue, based on their interaction with statutory FDA approval processes and the marketability of drugs. And there is, as Barnett notes, the Bayh–Dole Act of 1980, which created the ex parte reexamination scheme (with which he seems to take no issue) and made at least universities much more profitable and motivated to spin out biotechnology start-ups. Bayh–Dole, while controversial, has been hailed as one of the more effective pieces of legislation in this area at achieving commercialization of inventions. True enough. But outside of those sectors, there is very little evidence that patents generate wealth for the economy as a whole. Rather, they shift it. This seems obvious. They are, after all, a legalized, temporary monopoly—a serious restriction on free-market trade—and they are not designed to (and include no mechanism to) independently generate wealth for the country absent other statutory help. Do they generate wealth for individual companies at the expense of others? Sure. But their primary use is to concentrate wealth in the hands of the patentee for a limited period as a reward for publishing useful innovation. This can generate licensing fees for companies that may have innovated in the past but no longer turn a profit on products—i.e., it allows older companies or those without manufacturing capacity to hold on longer, to retain revenue, and in some notable cases to manufacture nothing at all. That is perfectly fine and legal as long as those patents cover what parties say they do, and as long as they were inventive when granted and remain so. Patents reallocate wealth as a reward for inventive public disclosure. Thus, they should only be granted when they produce a societal benefit that outweighs those costs, as scholars like Stanford law professor Lisa Larrimore Ouellette note.
Ostensibly, American industry and innovators use patent disclosure to advance the progress of the useful arts. Monopoly harms competition and thus the free market, but where justified that harm is intended and part of the deal, assuming a frictionless system. But what about transaction costs?

OUT-OF-PROPORTION TRANSACTION COSTS

Enter the lawyers. Patent law is unquestionably one of the highest-priced fee-for-service professions. As the American Intellectual Property Law Association (AIPLA) shows in its annual anonymized survey of its wide-ranging membership, litigation costs in cases with $10–$25 million at risk averaged $2 million in 2017, down from their 2013 peak of $3.325 million. Professor Barnett knows this, asserting in a 2009 article that “median patent discovery and litigation costs [were then] $2.5 million and $4 million respectively.” It now costs on average $5,000–$8,000 to apply for a patent, with no guarantee of success, and many companies are prosecuting tens of thousands of applications each year. It is a highly lucrative career, and it has turned the patent bar associations into powerful, well-heeled lobbies with tens of thousands of patent lawyers invested in the continued existence of those high costs. One can see why amici support for reversing the “pendulum” of 2011’s modest procedural reforms that reduced litigation costs overall might be plentiful and vociferous. Patent lawyers as an industry have a stake in this one, and the continued success of those reforms could hit counsel in the wallet. Not that there isn’t still plenty of money to be made. Here again, context matters: in any other area of the law, $2 million in litigation expenses would be considered extreme; in patent law, the reduction to that figure is supposedly a cause for concern. But that is a debate for another day. What matters is the context of a system where legal fees routinely top $1 million—and where hundreds of thousands of new causes of action—new patents—issue every year. The system siphons off millions of dollars a year in legal fees from American businesses. And it does so, in many cases, at the behest of wrongful claimants seeking nothing more than to extract fees. It needs to be carefully policed. That’s where the real debate lies. As Federal Circuit Judge Kimberly Moore once asked, “Can the patent system flourish if the scope of the patentee’s property right is wrongly assessed one-third of the time?”

A convenient new dialectic / Professor Barnett and others, playing to the imagined conservative and neo-originalist sensibilities of Justice Gorsuch, are suddenly painting a picture of a battle between libertarian strong-patent-rights advocates and liberal administrative wonks. But the patent system has long been, for better or worse, a complex regulatory framework in which patent monopolies exist only by the grace of Congress and the executive. Until recently, strong-patent-rights advocates full-throatedly supported that framework. Consider reexamination, another post-grant procedure available since the passage of the Bayh–Dole Act in the early 1980s; it was initially proposed by patent owners as a means to fix errors in too-narrow patents. Or reissue proceedings, now nearly 200 years old, in which entirely new claims can reissue out of a previously issued patent. And there are certificates of correction, derivations, interferences, inter partes reexaminations, supplemental examination, and more, none of which Oil States’ counsel had any issue with; in briefing and during oral argument, counsel conceded the constitutionality of these other post-grant procedures. Then note, as Justice Anthony Kennedy did, that Congress can change the term of patents to whatever lawmakers think is appropriate. Patent law is inextricably bound up in the administrative—i.e., bureaucratic—process of applying for and granting patents. It’s invariably a lot of paper. Once granted, patents are subjected in court to administrative deference: there is a presumption of validity in district court for all patents, arising as they do from an expert agency expected to do its job. Trying to cast one administrative procedure aside as “administratively” minded and another as “property” minded is dishonest; all patents spring from a bureaucratic framework and represent a very temporary and very tightly regulated constraint on free trade. To suggest otherwise is to oversimplify. It can be both.

Is property-rights erosion really happening? /

Professor Barnett claims that “it has now become popular in certain business and policy circles to endorse so-called ‘efficient infringement’ strategies.” He provides no support for this allegation; for my part, the only place I have seen that term used is as a relatively new talking point on blogs—Google it and see for yourself—and at events, as a means to rhetorically discredit further patent reform. The “use now, pay later” argument he and others make is that “efficient infringement” is premised on—and founded in—the Supreme Court’s 2006 eBay decision, which, at least as interpreted by lower courts, made it much harder for patent owners to obtain permanent injunctions. That, in turn, changed most patent litigation leverage into a fight over potential damages or costs. Fair enough. But assuming anyone would so wantonly infringe as a business strategy ignores Halo and the Supreme Court’s recent treble-damages decisions, which give district court judges wide latitude to award up to three times assessed damages for willful infringement and to impose fees. If as a patent owner you notify others of infringement and they willfully continue, then as long as you trust the validity of your patents, you’ve got quite a case on your hands. I don’t know a company or counsel that would sign off on that risk; there’s no way to budget for that kind of pain. That’s certainly not an invitation to efficiently infringe anything. Even if it were, Barnett’s argument here is just a criticism of the Supreme Court in eBay circa 2006, nothing more.

Are “patent trolls” really a problem? /

Professor Barnett and others have recently begun suggesting that non-practicing entities (NPEs)—or, pejoratively, “patent trolls”—don’t even exist. They have taken to speaking of the “patent troll narrative”—or even the inartfully redundant “false patent troll narrative”—to suggest it is a story, i.e., one made up, unsupported, and inaccurate. Yet Congress, the Federal Trade Commission, the Department of Justice, the PTO, and the courts have written about, catalogued, and studied the problem at length. (That’s not to mention the hundreds of companies that have been targeted.) For instance, the FTC conducted an extensive, well-researched study of NPEs using its subpoena powers. It carefully documented the problem, releasing the years-long study in 2016. Barnett fails to explain or acknowledge the wealth of evidence—published, public, and obvious—demonstrating the practice. I will. While shell companies make it easy to hide ownership, NPEs are generally pretty brazen. Some are so successful that they are publicly traded Fortune 500 companies—companies like Marathon Patent Group, Acacia Research, and others. Indeed, they are known as “publicly traded intellectual property companies” (PIPCOs). Others stay private but are nonetheless prolific, litigating through alter egos—IP Edge, Dominion Harbor, Blackbird Technologies, IP Valuation, and the many limited liability companies controlled by Leigh Rothschild, Nick Labbitt, Brian Yates, or any of the companies once associated with Erich Spangenberg of the moribund IP Nav. And then there are the Hawk Technologies of the world, which have brazenly sued 200+ entities, including grocery stores, transit authorities, municipalities, and even multiple branches of the nonprofit Goodwill for their use of closed-circuit surveillance video.

These firms often play name games that seem comical—until you’re facing down service. IP Edge controls or was the genesis of litigious alter egos Anuwave, Autoloxer, Banertek, Bartonfalls, Carnition, Drogo IP, eDekka, Finnavations, HelioStar, Kevique Technology, Kobace, Long Corner Consumer Electronics, Loramax, MagnaCross, Mod Stack, Mozly Tech, NovelPoint Security, Oberalis, Opal Run, Olivistar, Orostream, Peppermint Hills, Reef Mountain, Ruby Sands, Serenitiva, Somaltus, Vaultet, Venus Locations, and Wetro LAN, among others. (The Rothschild entities, for their part, are transparent enough to generally include the Rothschild name, suggesting their alter egos are created more out of legal or fiscal strategy than as a means of obfuscating ownership.)

If parties were truly interested in treating patents as property, they would be advocating vigorously for mandatory recordation of assignment and transfer and for further regulation of the sale and transfer of those rights, just as land, automobiles, and other types of property are heavily regulated in transfer or sale. Indeed, a voluntary database already exists; it just doesn’t punish those actors that choose to keep transfers, and thus ownership, secret, increasing legal costs accordingly. Secrecy avoids press attention and obscures what NPEs are doing. You don’t hear anyone pushing mandatory recordation. (Though recently the lack of it came back to bite Intellectual Ventures when it was revealed—after the firm filed an International Trade Commission complaint on five patents, shortly thereafter dismissed—that the firm wasn’t the true owner of the asserted assets.) They might also advocate for seeking out and punishing the worst abusers of the system—those that so clearly and demonstrably exist.
They could nominate someone at the DOJ or FTC to issue a public report detailing the holdings and practices of those few that are so unquestionably abusive we can all agree we’d be better off without them. But those changes wouldn’t ease assertion.

Are “bad patents” really a problem? /

Concerning “bad patents,” Professor Barnett omits discussing the many changes in invalidity law over the past 20 years. Many of the patents litigated today were granted in the late 1990s or early 2000s. Since then, many facets of patent validity have changed dramatically under Supreme Court precedent, including the legal tests for invalidity (obviousness, subject matter eligibility, anticipation) and even claim construction and the standards of validity in general. As Congress correctly noted during the passage of the AIA, many of the millions of patents that have already issued would not have issued if examined today. Without passing judgment on the propriety of these intervening rulings, it is objectively clear that some patents have issued under old standards that would not issue under new ones; they are thus unpatentable. But even putting that aside, with 320,000+ patents issued and more than 650,000 applications filed each year, and with grant rates that we can all agree are at least above 50% despite the quibbles Barnett raises, plenty of patents will issue that contain defects, errors, or otherwise invalid claims. Even ostensibly “good” patents can have “bad” claims, i.e., invalid ones—which was the whole point behind creating the patent-owner-driven ex parte reexamination process. And no one, not even Oil States’ counsel, is advocating holding ex parte reexamination or the 200-year-old reissue process unconstitutional. That, in turn, means that Barnett and others aren’t upset about the fact of administrative reexamination itself—i.e., post-grant cancellation. Rather, they’re concerned that this form of it is too effective at lowering the transaction costs once associated with patent litigation. But that’s explicitly why Congress created it.

“Royalty stacking” is beside the point /

Professor Barnett seeks to minimize the problem of royalty stacking by citing studies and evidence of lower single-digit royalty percentages. But multiple per-device royalties of “just” 3% or 5% on billions of dollars are a significant thumb on the scales. He also makes the correlation/causation leap again, suggesting that because smartphone prices are dropping generally, stacking must not be a problem. It’s an argument currently spilling out in epistolary form before the Justice Department Antitrust Division. The more reasoned approach to royalty stacking is to acknowledge that there are two main academic camps arguing over the empirical evidence of royalty stacking, that those two camps tend to line up on the same side of most other issues, and that they are rather beside the point. Jorge Contreras of the University of Utah has done just that in an upcoming paper, preaching a Gordian-knot approach. Noting that any individual instance of stacking or hold-up would still be wrong and worth regulating regardless of empirical evidence (or lack thereof) of a widespread problem, he suggests moderation and continued vigilance by government agencies, but certainly no activist pressure based on theories of widespread stacking problems. The middle view here might not be as dramatic or make either side happy, but that may be a sign that it’s the more reasonable approach.

In biotech, “patent thickets” have never been much of a problem / Professor Barnett, scare-quoting the well-documented “patent thicket” problem, uses as his counterfactual the only field where volume patenting isn’t widespread (yet): biopharmaceuticals. But that phenomenon is widely understood and easily explained by the need for biopharmaceutical patents to be listed in the FDA’s Orange Book. In short, pharmaceutical patents are tightly constrained by the need for approval of the underlying drugs on which they read through rigorous scientific testing and a complicated statutory schema—the Drug Price Competition and Patent Term Restoration (Hatch–Waxman) Act and the related Biologics Price Competition and Innovation (Biosimilars) Act. Yet even that field occasionally suffers from the problem, as with certain patent portfolios surrounding drugs that now top 100 patents and counting.

In the high-technology area, the problem is documented and widespread. To illustrate: recently a company called Provenance Asset Group (PAG) purchased some 5,600 patent families from Nokia. PAG spent millions in legal fees analyzing the first 2,600 and proudly announced that it had found 56 that it thought strong enough to assert. The number isn’t notable; what’s notable is the millions of dollars in legal fees and hundreds of hours of lawyer time it took to get to that answer. Think of those millions in fees—incurred before any court costs from picking through the thousands of legal claims—as a barrier to entry, and you see why a patent thicket is such a problem. Even valueless or irrelevant assets, when grouped into a subject matter area, a standard, a portfolio, a pool, or a tranche for purchase or assertion, present a sizable barrier to challenge and thus an inherent value. I’m reminded of a footnote in a David Foster Wallace book where he likens thousands of pages of bureaucratic public documents containing valuable, if obfuscated, information to “the giant solid-gold Buddhas that flanked certain temples in ancient Khmer. These priceless statues, never guarded or secured, were safe from theft not despite but because of their value—they were too huge and heavy to move.” With 6,000+ patents issuing from the PTO weekly and emerging portfolios and pools topping thousands of assets, the patent thicket problem explains itself.

CONCLUSIONS

In short, Oil States is about administrative law; it doesn’t represent a change in patent policy writ large. If there is some broad, systemic shift in efforts at reasonable reform, then Barnett’s article, other commentary, and the intense lobbying pressure from industries that have been displaced by those reforms are what will have caused it.

READINGS

■ “A Market Test for Bayh–Dole Patents,” by Ian Ayres and Lisa Larrimore Ouellette. Cornell Law Review 102(2): 271–331 (2017).
■ “Are District Court Judges Equipped to Resolve Patent Cases?” by Kimberly A. Moore. Harvard Journal of Law & Technology 15(1): 2–39 (2001).
■ “Debugging Software Patents after Alice,” by Jonathan Stroud and Derek M. Kim. South Carolina Law Review 69: 1–33 (2017).
■ “Has Patent, Will Sue: An Alert to Corporate America,” by David Segal. New York Times, July 14, 2013.
■ “How Often Do Non-Practicing Entities Win Patent Suits?” by John R. Allison, Mark A. Lemley, and David L. Schwartz. Berkeley Technology Law Journal 32: 237–307 (2017).
■ “Meet America’s Most Prolific Patent Troll,” by Kevin Drum. Mother Jones, Oct. 27, 2016.
■ “Much Ado about Hold-Up,” by Jorge Contreras. Working paper, Feb. 13, 2018.
■ “Patent Holdup, the ITC, and the Public Interest,” by Colleen V. Chien and Mark A. Lemley. Cornell Law Review 98: 1–45 (2012).
■ “‘Patent Trolls’ and Patent Remedies,” by John M. Golden. Texas Law Review 85: 2111–2161 (2007).
■ “Public Enforcement of Patent Law,” by Megan M. La Belle. Boston University Law Review 96: 1865–1928 (2016).
■ “The PTAB at Five: Reduced Leverage for NPEs,” by Jonathan Stroud. IAM Magazine, November/December 2017.
■ “The Surprising Resilience of the Patent System,” by Mark A. Lemley. Texas Law Review 95: 1–57 (2016).
■ “Worthless Patents,” by Kimberly A. Moore. Berkeley Technology Law Journal 20(4): 1521–1552 (2005).




The Journal through Time

REVIEW BY DAVID R. HENDERSON

DAVID R. HENDERSON is a research fellow with the Hoover Institution and emeritus professor of economics at the Graduate School of Business and Public Policy at the Naval Postgraduate School in Monterey, CA. He was the senior economist for energy with President Ronald Reagan’s Council of Economic Advisers.

The most widely read and probably the most influential editorial page in the United States is that of the Wall Street Journal. How did that come about? Who have been the major editors over the many decades that the page has been important? What policies have the editors favored, and have their favored policies been good on net for the United States and the world, or bad? Finally, how sound has the Journal editors’ reasoning about economic issues been?

For people who are interested in answers to those questions, I have two suggestions: read Free People, Free Markets by retired Wall Street Journal editor George Melloan, and read this review.

Free People, Free Markets: How the Wall Street Journal Opinion Pages Shaped America
By George Melloan
368 pp.; Encounter Books, 2017

Melloan’s breezy history of the Journal and its various controversies over the years is entertaining and informative. Starting in the early 1970s, he followed closely and often wrote about major developments and events, including supply-side economics and tax cuts, the fall of the Soviet Union, the various wars that the U.S. government got into, and last decade’s financial crisis. His thoughts are sometimes insightful and it’s heartening to see how he and the Journal editors have held firm, for good reasons, on free trade and immigration. Also, he tells how he disagreed with his one-time boss, the late Journal editorial page editor Bob Bartley, about the war on drugs. At times, however, I was surprised by Melloan’s apparent ignorance of basic economics; that ignorance does, though, explain why the Journal’s editorial page has had somewhat of a tin ear on issues such as the causes of oil price increases and the reasons for low interest rates.

Supply-side economics / Although Melloan tells the whole history of the editorial page (now pages), what I found most interesting were the parts of the book that he devoted to the early 1970s. I started reading the Journal in 1972 at the suggestion of Benjamin Klein, one of my UCLA economics professors, and haven’t stopped. As it happens, one of the biggest changes in the page took place in 1972: 34-year-old Bartley was chosen as editorial page editor. That was not an unalloyed positive.

We can thank Bartley for making supply-side economics understandable, popular, and influential. Supply-side economics, as he and other Journal writers describe it, is the idea that high marginal tax rates discourage work, saving, and investment. It still shocks me how little emphasis academic economists placed on that insight before Bartley came along. Remember that the top marginal tax rate on individual income in the 1970s was a whopping 70%, so the idea that marginal tax rates matter should not have been so controversial.

The Journal’s persistent call for lower marginal tax rates helped strengthen President Ronald Reagan’s hand. From 1981 to 1987, Reagan and Congress cut the tax rate paid by the highest-income people from 70% to 28%. For that, those of us who believe in giving people incentives to produce and those of us who believe that people should keep more of their income should thank the Journal.

But there was a downside to this advocacy. First, many of the Journal’s unsigned editorials (under the heading “Review and Outlook”) and guest op-eds during the Bartley era suggested that the economic growth sparked by tax cuts would result in higher federal tax revenues than if tax rates weren’t cut. Reasonable back-of-the-envelope calculations showed that this was highly unlikely. As economist Lawrence Lindsey demonstrated with a careful examination of the data, more taxes were paid by the highest-income people, whose marginal tax rates were cut in the early 1980s from 70% to 50%. But it was not true for taxpayers overall.

Second, because the Journal’s editors did not worry much about the revenue effects of large cuts in tax rates, they didn’t put much emphasis on proposals for reining in federal government spending. Imagine, for example, that the editors had advocated in 1972 that federal spending rise by 0.5 percentage points less per year than it actually did. In 1972, federal government spending was $244.3 billion. In 2016, it was $3,852.6 billion. That’s a compounded annual growth rate of 6.5%. If our imaginary editors had gotten their way and federal spending had instead risen by “only” 6% annually, it would have been $3,172.4 billion in 2016. The result, with taxes the same as they are, would have been a federal budget surplus of $95.6 billion rather than the actual budget deficit of $584.7 billion.

Immigration, trade, and foreign policy / Two issues on which the Journal’s editors have been steadfastly in favor of free markets are international trade and immigration. The Journal was a very prominent supporter of the North American Free Trade Agreement and, whatever level of restrictions on immigration we have, the editors have always wanted less. Indeed,



Bartley once proposed the following constitutional amendment as an immigration policy: “There shall be open borders.” A quick look at the country’s immigration policy shows that we started moving away from liberalized immigration even before President Trump. By contrast, over the four decades since Bartley became editor, the United States has moved, on net, closer to free trade.

Melloan is at his best when he analyzes the microeconomic details of an economy. Reporting on a late 1980s trip through the Midwest, for example, he tells of big steel companies “moaning over their losses and demanding that Congress protect them from imports.” But he then discovered “that smaller steelmakers using electric furnaces were competitive with foreign producers and were making money.” When exploring the changes in the railroad industry, he found entrepreneurs “picking up abandoned trackage and running short lines to serve local needs.”

Some of my colleagues who read the Journal’s frequent neoconservative foreign policy op-eds and attacks on Ron Paul’s noninterventionist views refer to it as the “War Street Journal.” But one pleasant surprise, to me at least, was Melloan’s documentation of the Journal’s earlier opposition to—or at least criticism of—war. He quotes a prescient 1912 editorial by William Peter Hamilton that was critical of what he worried would be a major war in Europe:

    War is a waste. One country cannot dissipate its savings in gunpowder smoke without hurting all the rest of us. In modern conditions of easy communications and international exchange, the misfortune of one is the misfortune of all.

Even much later, the Journal remained critical of war. Melloan points out that it “opposed JFK’s plan to send American advisers to aid the South [Vietnamese].” In October 1961, he notes, the editors wrote: “Perhaps we should all realize that there are certain things that the U.S., for all its military power, cannot do. One is to reshape the nature of people’s radical values.” Even today, Melloan—expressing his own views but probably also those of his former colleagues—opposes American politicians “setting up China as a bogeyman for their fear tactics designed to win votes.” It would be “dangerous and possibly damaging to world trade,” he writes, “to put China back on the enemies list.”

Unfortunately, that same sense doesn’t seem to carry over to the Middle East. The editors were cheerleaders for both recent wars against Iraq. In late January 1991, during the Gulf War, Melloan advocated “a military occupation of Iraq by the United States, Britain and France, with sufficient power to intimidate Syria and, if necessary, Iran.” And looking back on their support of President George W. Bush’s invasion of Iraq, Melloan writes, “I see very little to regret.”

He argues that destroying Saddam Hussein and his government “gave the United States a position in an Arab country from which to rally other Arab states against yet another U.S. enemy with ambitions for weapons of mass destruction, Iran.” Yet the obvious counterweight to Iran in the Middle East was Hussein. Melloan writes earlier in the book about the horror of the Iran–Iraq War during the first eight years of the 1980s, but he doesn’t seem to see the connection.

On one war, though, Melloan is on the side of the angels: the drug war. He notes that as early as 1972 he agreed with Milton Friedman that drug prohibition should be ended: “I took a Libertarian view.” Bartley, on the other hand, “believed that it is important to the health of a democracy to have a set of agreed-upon standards that covered human behavior.” A ban on drugs, claimed Bartley, was such a standard. The obvious counter, which unfortunately Melloan doesn’t give, is that threatening people with prison for using, producing, or selling drugs has nothing to do with “agreed-upon standards.” Those who go to prison did not, I am certain, agree with those standards. To his credit, though, Melloan points out fellow editor Mary O’Grady’s argument that U.S. intervention to suppress coca growing in Bolivia helped elect Evo Morales, a Fidel Castro admirer, as president. Melloan also points out the huge cost of the drug war to Mexico.

He notes that the young Bartley was chosen as editorial page editor over the more senior and more economically literate editor Lindley H. Clark Jr. Of course, we can’t know what would have happened if Clark had become the editor. One likely difference, though, is in the choice of junior personnel. Clark likely would have picked deputies who were more economically literate than many of Bartley’s picks.


Flubs / An issue on which Journal editors have exaggerated for decades is the link between monetary policy and exchange rates on the one hand and oil prices on the other. It’s clear that, with everything else held constant, printing more money will cause the value of the U.S. dollar to fall and the price of oil in dollars to rise. But Melloan continues the Journal tradition of writing as if the chief driver of higher oil prices is loose monetary policy and the resulting fall in the value of the dollar. “The price of oil soared” after 2001, he writes, because of the dollar’s loss of purchasing power in international markets. That’s partly true. But there’s an easy way to see just how much, and it appears that editors who write in this vein have never done it: compare the increase in the dollar price of oil with the increase in the dollar value of foreign currency over the same period. From 2001 to 2008, the price of crude oil rose from $23.12 to $94.10—an increase of 307%. Over those same years, the price of the euro, the world’s other main currency, rose from 90¢ to $1.47, an increase of just 63%. Clearly, other factors were more important to oil’s rise than exchange rates. In oil markets as in other markets, the likely main causes of price increases, as energy economists know, are increases in demand or decreases in supply.

Another important example of a lack of numeracy concerns economic growth rates. Melloan writes that the 1970s were something of a lost decade for the United States, in part because of slow growth. Like many observers, he probably is not aware that average annual growth in real GDP in the 1970s was a healthy 3.2%—exactly what it was in the 1980s.

Melloan also attributes the low interest rates of the last decade to the Federal Reserve’s monetary policy. But the real cause of those low rates—as former Federal Reserve chairman Ben Bernanke recognized and as Jeffrey Hummel and I described in “Greenspan’s Monetary Policy in Retrospect” (Cato Policy Report, November 2008)—was a surge of saving from Asian countries and elsewhere, which made more money available for lending and investment.

Despite these weaknesses, Free People, Free Markets is a valuable resource for those who want a better understanding of the world as the Journal editors saw it and an account of the Journal’s role in changing that world.
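The oil-versus-euro comparison in the review is simple arithmetic and can be reproduced in a few lines. A minimal sketch, using the prices quoted in the review; the `pct_increase` helper and the final euro-denominated figure are my own additions, not the reviewer’s:

```python
def pct_increase(old, new):
    """Percent increase from old to new."""
    return (new / old - 1) * 100

# Crude oil, 2001 -> 2008, dollars per barrel (figures as quoted in the review)
oil_pct = pct_increase(23.12, 94.10)    # roughly 307%

# Euro, 2001 -> 2008, dollars per euro
euro_pct = pct_increase(0.90, 1.47)     # roughly 63%

# One way to isolate the non-exchange-rate part of the move:
# reprice oil in euros and it still rises by roughly 149%,
# so most of the increase cannot be a weak-dollar effect.
oil_in_euros_pct = pct_increase(23.12 / 0.90, 94.10 / 1.47)
```
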

Considering the UBI ✒ REVIEW BY GREG KAZA


The idea of a universal basic income (UBI)—a government subsidy of at least a subsistence income for all citizens regardless of need—has long found support on the edges of the political left. But in recent years it has begun to receive mainstream interest. Hillary Clinton, the Democratic Party’s 2016 presidential nominee, noted in her campaign memoir What Happened (Simon & Schuster, 2017) that she seriously considered embracing the UBI during her campaign. Some financial and business leaders are now advancing the idea. Bond manager Bill Gross, for instance, has argued, “If income goes to technological robots whatever the form, instead of human beings, our culture will change and, if so, policies must adapt to those changes.” Entrepreneur Elon Musk also cites the prospect of major future job losses from automation as a reason for adopting a UBI, while Facebook CEO Mark Zuckerberg promoted the idea to Harvard University’s 2017 graduating class, saying, “Now it’s time for our generation to define a new social contract.”

Interest in the UBI is not limited to the center-left. Republican Richard Nixon proposed a quasi-UBI Family Assistance Plan in 1969, though it also had a non-UBI work requirement. The idea went dormant after Democrat George McGovern proposed a $1,000 per-person UBI as part of his disastrous 1972 presidential campaign, but Republican Alaska Gov. Jay Hammond played a key role in that state’s 1976 creation of its Permanent Fund, an oil-rights-subsidized UBI. More recently, the American Enterprise Institute’s Charles Murray has proposed replacing U.S. welfare programs with a UBI.

GREG KAZA is executive director of the Arkansas Policy Foundation.

In this new book, intended to garner support for the UBI from beyond the political left, authors Philippe Van Parijs and Yannick Vanderborght invoke an even more impressive would-be supporter: Friedrich Hayek, who called for a minimum income while rejecting other welfare state components. “Like the business leaders who have come out in favor of basic income,” Van Parijs and Vanderborght write, “many [classical liberals] are attracted to basic income because of its simple, non-bureaucratic, trap-free, market-friendly operation, which helps make generous transfers more efficient and sustainable.” They further write:

Anyone doubting the power of utopian thinking would be well advised to listen to one of the main intellectual fathers of the “neoliberalism” that has been declared triumphant these days by its friends and even more by its foes.... The lesson Hayek learned from the socialists, we must now learn from him.... The free-society utopia we need today ... must be a utopia of real freedom for all that frees us from the dictatorship of the market and thereby helps save our planet.

Van Parijs and Vanderborght concede that “sympathy and support” for a UBI is “most generously and most consistently” found among political Greens, and their book draws on Green ideology at times. For instance, in Chapter 1 they write, “The conjunction of growing inequality ... automation, and a more acute awareness of the ecological limits to growth [has] made [the UBI] the object of unprecedented interest throughout the world.” They cast support for the UBI in terms of “hope in the future of our societies” and “the future of the world.” At times such rhetoric detracts from the work’s scholarly aim, especially sentences like: “In addition to visionaries, activists are needed: ass-kickers, indignados, people who are outraged by the status quo or by new reforms or plans that target the poor more narrowly, watch them more closely, and further reduce the real freedom of those with least of it.”

Explaining the UBI / The authors are more precise in defining the UBI: it is “paid in cash rather than in kind”; it is an “individual entitlement, as opposed to ... [a] household situation”; it is “universal, as opposed to ... an income or means test”; and it is “obligation free, as opposed to [a tie] to an obligation to work or prove willingness to work.” In short, the UBI is “individual, universal, and obligation-free.”

The book provides a good overview of the topic, discussing traditional alternatives to the UBI such as public assistance and social insurance. The authors describe the UBI as “a radically distinct” third model, offer a moral case for it, and tackle such issues as funding, affordability, and the idea’s political prospects. They also make some surprising claims along the way, including their assertion that “the rich are entitled to [a UBI subsidy] just as much as the poor.”

The UBI’s intellectual history is traced to the late 18th century. Thomas Paine’s 1797 pamphlet Agrarian Justice proposed to “create a national fund, out of which there shall be paid to every person, when arrived at the age of twenty-one years, the sum of fifteen pounds sterling, as a compensation in part, for the loss of his or her inheritance, by the introduction of the system of landed property.” In sharp contrast to John Locke, Paine asserted of the UBI, “It is not charity but a right, not bounty but justice, that I am pleading for.” French social thinker Charles Fourier (1772–1837) was a proponent of a government-guaranteed basic income subsidy “targeting the poor: obligation-free but not universal.” More proponents emerged in the 20th century. Bertrand Russell’s “vagabond’s wage” would be sufficient for “existence but not for luxury.” British social activist Major C.H. Douglas (1879–1952) argued that “social credit mechanisms” could pay all households a monthly “national dividend.” Infamous Louisiana populist Huey Long advocated a “Share Our Wealth” program that would have operated like a UBI. More recently, economist Robert Theobald (1929–1999) advocated a “guaranteed income” because automation was making workers “redundant.”

The welfare state may diminish the UBI’s political appeal, making it appear to be just another government handout. The authors argue welfare state components like social insurance and means-tested public assistance are “a safety net that fails to catch a great many people it should catch, and in which others get trapped.” In contrast, the UBI “provides a floor on which they all can safely stand.”

Today, the UBI competes with other welfare ideas: a one-time basic endowment payment, a negative income tax for low- and non-earners, an earned income tax credit for only low earners, wage subsidies, guaranteed employment, and work reduction. The authors criticize these competing ideas:

■■ “The basic endowment is about equalizing opportunities at the start of adult life, while basic income is about providing economic security throughout life.”
■■ Because of “the need to switch back and forth between different administrative statuses of claimant or worker, a negative-income-tax scheme presents the same intrinsic defect as standard means-tested schemes.”
■■ The EITC “has the obvious disadvantage of doing nothing for the jobless.”
■■ “If busyness is all that matters, wage subsidies are definitely superior to an unconditional basic income. For those committed to freedom for all, however, the opposite is clearly the case.”

Justifying the UBI / Pro-UBI arguments include that “people can take jobs or create their own jobs with less fear,” that “earnings … increase net incomes,” and that women would benefit because they “currently participate to a lesser extent in the labor market and their average hourly wage is below that of men.” The UBI should not be “misunderstood as aiming to equalize outcomes or achievements. Rather, it aims to make less unequal, and distribute more fairly, real freedom, possibilities, and opportunities.”

Drawing on utilitarianism, they argue the UBI is “conducive to greater happiness.” Unfortunately, they follow this with a bout of unsatisfying rhetoric, claiming the UBI “does not operate at the margin of society but affects power relations at its very core. Its point is not to soothe misery but to liberate us all.”

The authors present a moral case for the UBI on the basis of distributive justice, which they define as “the just distribution of entitlements to resources among the members of a society.” They answer objections, including concerns that a UBI would encourage idleness by removing the work incentive of basic survival, but struggle to convincingly rebut objections raised by philosophers John Rawls and Ronald Dworkin. Rawls used the stereotype of the slacker Malibu surfer who would rather spend the day on his surfboard than laboring for money, arguing that such folks “must find a way to support themselves and would not be entitled to public funds.” Dworkin similarly dismissed such idle people as “scroungers.” If Rawls and Dworkin had such concerns, Van Parijs and Vanderborght are unlikely to gain support from people whose political thinking is more inclined to the views of Robert Nozick, who would have criticized the UBI as an illegitimate state acquisition of property. The authors also fail to explain how their abstract theory can achieve a critical mass of support among citizens—many of whom would have the same “idleness” concern—or the public officials needed to enact a UBI.

Basic Income: A Radical Proposal for a Free Society and a Sane Economy
By Philippe Van Parijs and Yannick Vanderborght
400 pp.; Harvard University Press, 2017

The authors concede the negative income tax (NIT) has political advantages over the UBI. Milton Friedman wanted an NIT, but wanted it set “low enough” so that taxpayers would be willing to pay the bill and give individuals harmed by the welfare state “a substantial and consistent incentive to earn their way out of the program.” Elsewhere, they acknowledge “skepticism about the potential political support” for a UBI. They cite NIT experiments in New Jersey and Pennsylvania (1968–1972); Iowa and North Carolina (1970–1972); Gary, IN (1971–1974); Seattle and Denver (1970–80); and Manitoba (1970s), observing, “There are now over fifty countries with sovereign wealth funds similar to the Alaska Permanent Fund. Yet, despite various proposals, Alaska’s dividend scheme remains unique so far.” But they don’t consider the possibility that governments put their own self-interest ahead of individual citizens’. The landslide defeat (76.9%–23.1%) of a proposed Swiss UBI in a 2016 referendum suggests politics is not on the side of the UBI, at least for now.

As for the size of UBI payments, the authors suggest “picking an amount on the order of one fourth of [a country’s] current GDP per capita,” though “there is nothing sacrosanct about 25 percent.” Their proposal would equal $1,163 per month in the United States (as opposed to $130 in India and $16 in Congo).

Proponents would fund the UBI through taxation, though no major political party in the industrialized world has embraced this radical idea—at least not transparently. However, some UBI programs have been attempted with nonprofit or private funding. Pierre Omidyar, eBay co-founder, is donating nearly $500,000 for a Kenyan UBI experiment. The authors discuss initiatives such as a UBI operated by Mein Grundeinkommen, a German nonprofit, and a German United Evangelical Mission monthly UBI project in Otjivero, Namibia.

Conclusion / The UBI is gaining interest in the American political discussion, but on a very superficial level. In my opinion, Van Parijs and Vanderborght’s book does not make a convincing argument that government UBI programs are needed. It also does not devote enough attention to privately funded experiments. These are the more important developments in the UBI and they should be closely followed.
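The authors’ sizing rule is easy to sanity-check. A quick sketch, assuming a U.S. GDP per capita of roughly $55,800 (an approximate mid-2010s figure I have supplied, since the book quotes only the monthly results); the India input is back-calculated from the book’s $130 figure, so both inputs are illustrative rather than the authors’ own:

```python
def monthly_ubi(gdp_per_capita, share=0.25):
    """Annual UBI set at `share` of GDP per capita, paid out monthly."""
    return share * gdp_per_capita / 12

us = monthly_ubi(55_800)    # about $1,163 per month, matching the book's U.S. figure
india = monthly_ubi(6_240)  # about $130 per month
```
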

Becoming More Sympathetic with ‘Black Lives Matter’ ✒ REVIEW BY DWIGHT R. LEE


In this book, James Forman Jr., a Yale law professor, distills his experiences as a public defender in Washington, D.C. and a co-founder of a D.C. public charter school, as well as his knowledge of politics, the drug war, and the criminal justice system, to tell a sad story. It is a story with few heroes, many victims, and nuances that don’t fit well into competing political narratives.

Forman begins by telling the story of a black juvenile he represented who was convicted on relatively minor gun and drug charges. The youth had no previous arrests, yet he was sentenced to six months in a detention facility that “everybody knew … was a dungeon,” instead of receiving probation and the chance to remain in school. Forman couldn’t help noticing that everyone in the courtroom was black, including the judge, the prosecutor, and the court reporter, as well as the arresting officer. For that matter, so were the police chief, the mayor, and the majority of the city council that wrote the gun and drug laws his client was convicted of violating.

DWIGHT R. LEE is a senior fellow in the William J. O’Neil Center for Global Markets and Freedom, Cox School of Business, Southern Methodist University. He is a co-author of Common Sense Economics: What Everyone Should Know about Wealth and Prosperity, 3rd edition (St. Martin’s Press, 2016), with James Gwartney, Richard Stroup, Tawni Ferrarini, and Joseph Calhoun.

Forman recognizes the progress blacks have made since the 1950s, but “progress wasn’t the whole story.” In 1954, about one-third of the nation’s prisoners were black; by 1994 the number was approaching 50%. The increase occurred as the political influence of blacks increased, especially after the passage of the Voting Rights Act of 1965. Forman is not surprised by this because a 2014 survey of Americans found that 64% of blacks thought the courts don’t deal “harshly enough with criminals.” (The comparable portion for whites was 73%.) He also recognizes that the consequences he regrets are the result of policies made by people trying to save communities that “seemed to be crumbling before their eyes.” There was a dynamic in play that “drove elected officials toward a tough-on-crime stance in some predictable ways.” Yet nobody has a sense of responsibility for the consequences “because nobody is responsible.” So “even reluctant or conflicted crime warriors … become part of the machinery of mass incarceration [that] … continues to churn even to this day, when its human toll has become increasingly apparent.”

Forman does an impressive job explaining why the tough-on-crime political responses led to tragic human costs in black communities, costs that continue today. He makes a case for his views that I found deeply convincing and often very touching. I do have disagreements with a couple of his points, but they are peripheral to his arguments. I applaud him for writing a book that has increased my understanding of a serious problem and made me more sympathetic with the problems so many in the black community must deal with, including many of the black males sentenced to prison.

Forman’s story unfolds primarily in D.C., with relevant highlights from other major cities with large black populations.

Appeal of prohibition / As in other large cities, blacks in D.C. are no more likely to use illegal drugs than whites living in middle-class neighborhoods. But blacks’ incarceration rates for marijuana possession are far higher than whites’, with the gap not explained by more blacks being engaged in street-level distribution. Because of this gap, David Clarke, one of two white members of D.C.’s first city council, saw an opportunity in 1975 to promote civil rights and racial justice by greatly reducing the fine and eliminating prison sentences for possession of less than 2 oz. of marijuana. There was resistance, but passage looked likely except for the city’s long and troubled history with heroin. From the early 1960s to 1969, the percentage of new inmates in D.C.’s corrections system who were addicted to heroin increased from less than 3% to 45%, and they were overwhelmingly young black men.

One response was to provide “methadone maintenance” for addicts. This was opposed, however, by those who objected to “masses of black citizens strung out—and completely dependent—on government narcotics.” Many black activists in D.C. “believed that whites wanted blacks to be addicted to narcotics, because it made them passive; … [and] methadone maintenance was a thinly veiled attempt to keep black people oppressed” (Forman’s emphasis).

Despite the opposition of local black pastors and others who “insisted that [marijuana] was a gateway to harder drugs,” Clarke’s proposal passed on the first vote. It required a second majority, however, to become law. It was at this point that the clergymen “turned up the heat.” They rallied, 150 strong, at the District Building, targeting Council Chairman Sterling Tucker, who needed their support to stay in office, and Mayor Walter Washington, who could veto the bill even if it passed the council. It worked. Tucker tabled Clarke’s bill right before the second vote was scheduled, effectively killing it.

Locking Up Our Own: Crime and Punishment in Black America
By James Forman Jr.
320 pp.; Farrar, Straus, and Giroux, 2017

Paralleling the D.C. debate over penalties for marijuana possession was a debate over gun control laws. Black-on-black violence was imposing a horrifying cost on the black community in D.C. and other American cities in the mid-1970s. Nationwide, the homicide rate was seven to 11 times greater for blacks than for whites. Some 85% of the victims killed by guns in D.C. were black.

Empathy for the victims’ families was accompanied by outrage against the murderers and criticism of judges who handed down light sentences for violating gun laws. Harsh prison sentences were demanded for possessing or selling a gun, or committing a crime in possession of one, without the possibility of plea-bargaining. D.C. advocates of these longer sentences knew they would be served mostly by blacks. But they felt this would prevent the low-income, small minority of criminals from terrorizing the majority of law-abiding black citizens.

Yet many blacks had an opposing view. Doug Moore, an influential member of the D.C. council who opposed reducing penalties on marijuana possession, also opposed gun restrictions. His opposition was based on black history. Moore argued that blacks need guns not just for individual protection, but also as “a tool of collective self-defense against violent whites.” Coleman Young, Detroit’s first black mayor, agreed. He proclaimed, “I’ll be damned if I’ll let them collect guns in the city of Detroit while we are surrounded by hostile suburbs and the whole rest of the state … where you have vigilantes practicing in the wilderness with automatic weapons.”

Forman writes that to “modern ears [those] claims may sound outlandish. But they shouldn’t.” They were rooted in the history of Jim Crow violence and were strengthened by such outrages as the assassination of Martin Luther King and other civil rights leaders. But the hope that gun control in D.C. would reduce black-on-black killing resulted in a 12–1 city council vote for strengthening restrictions on guns in 1976. Tougher penalties for violating those restrictions were postponed until 1979 because of a temporary moratorium on the city council’s ability to change the city’s criminal law.

In the debate over both marijuana and gun control, D.C. politicians favored prohibition backed up with harsh penalties “even when the punitive measures adopted in D.C. and elsewhere did not achieve the desired results.” In both cases, “the majority of those punished have been poorly educated black men.” And the result was a failure “to prevent marijuana use [or] protect the community from gun violence.”

Black police are still police / Obviously, not all killing in black communities was blacks shooting blacks. While fewer in number, blacks were also being killed by the police, who until the 1960s were almost entirely white. Many of those killings were clearly unjustified (e.g., the victim was jaywalking) and created the same resentment against police that such killings do today.

In response, blacks made three arguments for hiring black police officers. First, they would be more trusted in black communities and more likely to protect blacks. Second, they would be more respectful and less likely to use unnecessary force. Third, training and trusting blacks to use the police power would send a vital message to both blacks and whites.

Forman takes us through the history of the gradual increase in the number of black officers, featuring Burtell Jefferson, who became D.C.’s first black police chief in 1978, and Atlanta’s hiring of eight black police officers in 1948, with Martin


Luther King Sr. playing a key role. Black police were hired, but they were subjected to much the same discrimination as other blacks at the time. Forman tells us about separate and poorly equipped police stations for black officers, of their being required to enter through the back door of integrated stations, being assigned to foot patrols rather than allowed to use police cars, being forbidden to arrest white suspects, and being denied promotions based on “suitability interviews” even when they scored well on written exams.

Over time, such blatantly discriminatory treatment of black policemen slowly eroded. But so did the hope that black policemen would act differently than white policemen. “A surprising number of black officers simply didn’t like other black people—at least not the poor blacks they tended to police,” Forman writes. Even those who considered themselves “concerned about protecting black neighborhoods … freely admitted to being markedly more aggressive about responding to such low-level infractions as drunkenness and loitering.” Black officers often expressed embarrassment at the behavior of black offenders. In part, according to Forman, this conduct “reflected class divisions within the black community.”

He writes of his experience as a co-founder of a D.C. public charter school attended by many students who “had lost parents, friends, and siblings to violence, addiction, and prison.” The students (all black) “were routinely subjected to verbal abuse, stopped and searched for drugs or weapons, or even punched, choked, or shoved” by police officers (mostly black) without any rationale that Forman could see. He saw these abuses as “part of a larger pattern” reflecting that the police “had been trained to act like warriors.”

Predictable responses / As part of that description, he powerfully writes of what happened in “the late 1980s, when a terrifying new drug—crack cocaine—invaded America’s ghettos.” According to Forman, crack

spawned violent drug markets the likes of which American cities had never seen. In their fight for territory, heavily armed gangs turned urban neighborhoods into killing fields. … The menace crack presented in turn provoked a set of responses that helped produce the harsh and bloated criminal justice system we have today … enshrining the notion that police must be warriors, aggressive and armored, working ghetto corners as an army might patrol enemy territory.

Blacks were disproportionately homicide victims when the crack carnage began, but the new drug made things much worse. According to the U.S. Justice Department, by the mid-1990s there was a 1:35 lifetime probability of a black American male being murdered, compared to a 1:251 probability for a white American male. The demands for something to be done “seemed to come from everywhere,” but nowhere more vociferously than from “once-vibrant [black] communities [that] had been devastated.”

The responses of black politicians and leaders were predictable. Despite overcrowded jails, D.C. mayor Marion Barry vowed, “We will find space to put those gun thugs and drug thugs who get convicted of carrying guns and selling drugs.” Atlanta mayor Maynard Jackson warned, “[If] a drug or gun sale resulted in a death, the seller deserves to roast or fry.” While serving as U.S. attorney for D.C., Eric Holder said his answer for reducing gun violence was “Stop cars, search cars, seize guns,” and he initiated Operation Ceasefire to do exactly that. Knowing better than to inconvenience the politically influential, he exempted from the operation D.C.’s second district, where most of the city’s movers and shakers lived.

Unfortunate consequences and root causes / Black leaders knew that the worst consequences of the war on drugs in D.C. and many other large cities were imposed primarily on the poorest people in black communities. Obviously, this was true of the killings and crime, but Forman is also deeply troubled by the lives that were destroyed, or almost destroyed, by the harshness of D.C.’s criminal justice system toward blacks who violated laws that people in middle- and upper-class neighborhoods commonly violated with impunity. This too was recognized by black leaders. Holder was acknowledging the obvious when he said of Operation Ceasefire that the “people who will be stopped [and arrested] will be young black males, overwhelmingly.”

Forman ends his book with mild optimism, noting recently declining crime and murder rates, more lenient drug laws, and increased concern over the disproportionate attention the criminal justice system gives to poor black males. He comments throughout the book on the importance of measures to reduce the root causes of crime, such as poverty, joblessness, and poor education. Unfortunately, he devotes little attention to how best to address those root causes beyond occasional expressions of support for such policies as more federal spending on welfare programs and a “Marshall Plan” for cities to promote urban revitalization. There is no suggestion that these programs might be worsening the root causes, if not outright creating some of them. He recognizes the lack of jobs in ghettos and points out that “a young black man without a high school diploma is more likely to be in prison or jail than to be employed in the paid labor force.” Yet he ignores the negative effect of occupational licensing and minimum wage laws on job opportunities for young people from deprived backgrounds. He also does not mention the advantages of school choice in improving the educational opportunities for young people—a surprising omission for someone who co-founded a charter school in D.C.

I am fully prepared to forgive Forman for those omissions, however, because they are likely the result of his experiences and understandings being different from mine. It is because of those differences that I admire Locking Up Our Own and learned so much from reading it. I recommend it with enthusiasm.


The Pope and the Markets ✒ REVIEW BY GEORGE LEEF



Suppose that you hear someone declare that capitalism produces “an economy of exclusion and inequality” and that “such an economy kills.” You might well decide that there’s no point in arguing with someone who so embodies Mises’ “anti-capitalist mindset.” The world is full of such people and they seldom show any openness to counterarguments.

The speaker in question, however, is Pope Francis, whose pronouncements on a wide range of issues command attention simply because of his position as leader of the world’s one billion Roman Catholics. His writings, especially the encyclical Laudato si’, put him far into the “progressive” camp, and those who seek to further expand the state’s control over economic life find his views to be useful.

Still, the pontiff has called for dialogue on the issues of poverty, consumerism, environmentalism, business, the family, and so on. Taking him at his word, the Independent Institute has produced a book that responds to his call and his beliefs. Edited by Wake Forest University economist Robert Whaples, Pope Francis and the Caring Society offers seven fine essays by scholars who share the pope’s Christian convictions but disagree with his ideas on how best to advance them.

GEORGE LEEF is director of research for the James G. Martin Center for Academic Renewal.

Bergoglio’s story / If those of us who favor a minimalist state want the pope to understand us, it’s important that we understand him. His ideas may be mistaken, but he holds them for a reason, and his background has a great deal to do with it. Pope Francis was born Jorge Bergoglio in Argentina in 1936, and his views were shaped by that nation’s unhappy experience under Peronist rule. In his foreword to the book, the late Michael Novak observes:

As the twentieth century began, Argentina was ranked among the top fifteen industrial nations, and more of its wealth was springing from modern inventions rather than farmland. Then a destructive form of political economy, just then spreading like a disease from Europe—a populist fascism with tight government control over the economy—dramatically slowed Argentina’s economic and political progress. Instability in the rule of law undermined economic creativity. Inflation blew to impossible heights. Rather than grasping the connection between economic suffering and the dirigiste policies of the governing regime, most Church leaders looked at places like Argentina and condemned what was left of laissez faire. Collectivism was in the ascendancy in the first half of the twentieth century and the Catholic Church was seduced by it.

Pope Francis and the Caring Society
Edited by Robert M. Whaples
234 pp.; Independent Institute, 2017

Three years before Perón seized power in Argentina, Pope Pius XI wrote in his 1931 encyclical Quadragesimo anno, “The right ordering of economic life cannot be left to a free competition of forces. From this source, as from a poisoned spring, have originated and spread all the errors of individualist economic teaching.” That assault on economic liberalism was the foundation for the Church’s social teaching for decades


until Pope John Paul II had some good words for market competition and the pursuit of profit in the 1980s. The future Pope Francis thus grew up believing that capitalism was the big problem. It causes a huge gap between the few rich and the many poor and it leads people astray with the “consumerist” impulse. In 1998, he compiled a book following the visit of John Paul II to Cuba in which he declared, “No one can accept the precepts of neoliberalism and consider themselves Christian.” Upon becoming pope in 2013, Francis did not retreat in the least from his longstanding opposition to economic liberalism. As Pepperdine University economist Andrew Yuengert writes in his chapter, “Francis’ account of markets is entirely negative: a healthy social order must put markets in their place, reducing their outsized influence on consumer choices, government policy, and labor markets.” Francis has, Yuengert notes, injected a previously unknown cynicism into the debate over the free market by questioning the motives of market proponents. He casts arguments for competition as “mere cover for exploitation.” The people who argue for laissez faire cannot be trusted because they have been warped by greed. One must wonder how serious the pope really is about engaging in dialogue with those who disagree with him when he impugns their motives. In his essay, Acton Institute scholar Samuel Gregg writes that Pope Francis reflects the “us versus them” politics of Peronism and adds that Argentina’s miserable attempts at economic liberalization in the 1990s no doubt further soured him on pro-market thinking. While the pope repeatedly attacks laissez faire concepts, he apparently finds nothing to reproach in progressivism. 
Gregg writes, “In the numerous addresses, press conferences, and interviews Francis has given since becoming pope, it is difficult to find any criticism of left-populist policies that comes close to matching his impassioned denouncements of market economies.” Gregg is disturbed by the pontiff’s deep hostility toward people who argue that what
the poor need is less government help, not more. Business interests and market advocates, in Francis’s eyes, “are dishonest and offer only sham arguments and slanted analysis.” How, Gregg asks, can he expect to have true dialogue with those whose integrity he has repeatedly and publicly attacked? Moreover, by demonizing business, the pope increases the likelihood that revolutions such as we have seen in Bolivia and Ecuador will completely shut out market competition and saddle the people with thoroughly dictatorial regimes.

Poverty and capitalism / Business and economics professor Gabriel Martinez of Ave Maria University quotes from Francis' encyclical Evangelii gaudium to show that he knows the political terminology and can use it as well as any leftist politician:

Some still defend theories of “trickle down” which suppose that all economic growth, favored by market freedom, manages to provoke by its own power greater equity and social inclusion in the world. This opinion, which has never been confirmed by the facts, expresses an artless and naïve trust in those who hold economic power and in the sacralized mechanisms of the ruling economic system.

Martinez contends, however, that this passage should not be read as showing the pope's implacable hostility to economic freedom, but rather that he opposes the use of market theory "to justify indifference"—the view that "eventually the poor will be alright if we leave them alone; the market will take care of them." Martinez argues that Francis is concerned that if markets are not kept under control, the result will be control by "the winners," which is to say that the masses will be kept under the heels of economic oligarchs. (In that, he sounds much like Franklin Roosevelt.) What the pope evidently doesn't understand, and unfortunately Martinez fails to stress, is that the solution to the problem of domination by business oligarchy is not a powerful state, but a state where the laws and the people guard against the abuse of governmental power for private ends.

In their chapter "Pope Francis, Capitalism, and Private Charitable Giving," Independent Institute senior fellow Lawrence McQuillen and Victims of Communism Memorial Foundation research associate Hayeon Carol Park push back strongly against the pontiff's belief that a strong state is necessary for the poor to advance. Infinitely better than a redistributive state, they argue, is a state that gets out of the way of wealth creation and private redistribution. Further, voluntary charity is morally worthy conduct, which cannot be said of coercive welfare systems. They write:

Voluntary giving is not the charitable "giving" the pope often speaks of. The pope instead emphasizes governmental redistribution and a large role for international organizations in facilitating transfers. Unfortunately, the approach he advocates generally results in more human suffering, not less, thus undercutting his call for help for the poor.

McQuillen and Park go further in criticizing the pope's stance in favor of redistribution by noting that he never identifies actual programs that succeed in bringing about social justice. Francis is guilty, they maintain, "of the vice of vagueness, which is no substitute for knowledge and leaves the pope espousing nothing but what he sees as good intentions." They point the pontiff to scholars such as Peter Bauer and Dambisa Moyo who have made strong cases against reliance on government and international aid programs.

Property and the environment / Pope Francis has also had much to say about the environment. Emeritus economics professor A.M.C. Waterman of the University of Manitoba examines the pontiff's "green" positions. Francis blames pollution on a "throwaway culture" and advocates reliance on renewable energy sources and recycling. He also rails against "disproportionate" growth of cities. To ease pressure on the environment, the pope wants people to adopt a new lifestyle; once we are free from "the obsession of consumption," we will supposedly enjoy cleaner and happier lives.

Waterman examines various claims the pope has made where he undermines his arguments with hyperbolic rhetoric. For example, Francis complains that many of the world's poor suffer from the lack of clean water (which is true), but blames "the deified market" for it. Waterman responds, "Economists know that it is not 'despite its scarcity' but precisely because of that scarcity that water should be 'subject to the laws of the market.'" He asks the pope to consider that capitalism is capable of providing the infrastructure for water that the poor in Africa, Asia, and South America so desperately need.

Philip Booth of the Institute of Economic Affairs argues that Pope Francis misses the importance of property rights in alleviating poverty and solving conservation problems. He wants the pontiff to learn more about the benefits of laissez faire, writing:

Economies broadly based on the principles of economic freedom and private property are more likely to prosper. And as countries become more prosperous, they tend not only to adopt technologies that are less resource intensive per unit of gross domestic product, but also to value environmental goods more.

Booth also suggests that Francis familiarize himself with the work of the late Nobel laureate Elinor Ostrom, who argued that communities are well able to create systems from the "bottom up" to deal with social and environmental problems.


Conclusion / The book’s final chapter, by Allan Carlson of the International Organization for the Family, examines the pope’s declarations about the economics of the family. Again we find that Francis can’t resist attacking capitalism, stating that the social degeneration of the family begins “when human beings tyrannize nature, selfishly and even brutally ravaging it.” Carlson writes, “Francis despairs over consumerist cultures that pressure young people ‘not to start a family’ by simultaneously denying them stable economic ‘possibilities for the future’ while presenting them with too many options.” Carlson takes the pope to task for paying no attention to the need for property

rights if families are to enjoy security and well-being. That’s right, but I think Carlson ought to have devoted a few paragraphs to the harm that government welfare programs have done to the family. Will this book have any effect on the renewed leftward drift of the Roman Catholic Church? Efforts at getting Church leaders off the belief that statism is necessary for a good, fair society have been ongoing for decades with little apparent success. Still, because Pope Francis has said he wants dialogue, we perhaps have a unique teachable moment. Congratulations to the Independent Institute for an admirable effort to take advantage of this moment.

Defending the Free Market from Laissez-Faire? ✒ REVIEW BY PHIL R. MURRAY


In his new book, Bryant University economist Joseph Shaanan explains that free-market advocates laud "a market or decentralized economic system where market forces determine prices and quantities for products and services. All this is done without coercion and without barriers to entry." But, he adds, disagreement exists over the nature of coercion and competition in free markets. He then describes "market fundamentalism" as a collection of "unsubstantiated beliefs associated with laissez faire such as the idea that markets (or the invisible hand of the market) can handle all economic issues without government's help." In this, he reveals his thesis: "The purpose of the book is to debunk extreme and unfounded assertions attempting to equate the free market ideal and its beneficial properties with actual markets and the economy." His gripe is with what he calls "contemporary laissez faire," which to him is different from a free market.

PHIL R. MURRAY is a professor of economics at Webber International University.

Market power / Shaanan begins each chapter by presenting a myth that is often professed by laissez-faire enthusiasts. Take "Myth 1: America Has Free Markets." That is a myth because of market failures and "giant corporations," he explains. And on this point he is right; market power is one type of market failure. It is undeniable that many firms in the real world have the ability to restrict output and set price above the marginal cost of production. This is undesirable in the sense that the level of output on the market will be below the efficient level where consumers' marginal value of another unit equals the marginal cost of production. To Shaanan, that is not a free market. Likewise, the presence of big business renders a market unfree. He decries "powerful bureaucratic centers engaged in economic planning"—meaning corporations that "are
hardly the epitome of a free market."

Shaanan emphasizes the ability of big business to undermine consumer sovereignty. Marketing departments influence consumer behavior through conventional advertising and a new ploy of "muddying or blurring the difference between branding and other aspects of life." Corporations also weaken a consumer's judgment from the supply side. "Quite often," for example, "food choices are skillfully manipulated through careful applications of precisely calculated doses of salt and sugar, at times, intended to addict; somewhat at odds with the spirit of consumer sovereignty." He documents myriad ways in which corporate practice proves that "we do not have free markets of the Adam Smith variety."

This reviewer is not an Adam Smith scholar; however, I suspect that the founder of economics railed against merchants in general and the East India Company in particular not so much because they were big businesses but because they benefited from "preference" or "restraint" granted by government.
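The textbook logic behind the market-power point (a firm with pricing power sets price above marginal cost, so output falls short of the level where a consumer's marginal value equals marginal cost) can be worked through with a toy linear-demand calculation. The demand curve, cost figure, and function names below are illustrative assumptions, not figures from Shaanan's book:

```python
# Toy linear market (hypothetical numbers): inverse demand P = a - b*Q,
# constant marginal cost mc. Defaults: P = 10 - Q, mc = 2.

def competitive_quantity(a=10.0, b=1.0, mc=2.0):
    # Competition drives price down to marginal cost: a - b*Q = mc
    return (a - mc) / b

def monopoly_quantity(a=10.0, b=1.0, mc=2.0):
    # A single seller equates marginal revenue (a - 2*b*Q) with mc
    return (a - mc) / (2 * b)

def deadweight_loss(a=10.0, b=1.0, mc=2.0):
    qc = competitive_quantity(a, b, mc)
    qm = monopoly_quantity(a, b, mc)
    pm = a - b * qm  # monopoly price
    # Triangle between the demand curve and marginal cost over the withheld units
    return 0.5 * (qc - qm) * (pm - mc)

print(competitive_quantity())  # 8.0 units, where marginal value = marginal cost
print(monopoly_quantity())     # 4.0 units, sold at price 6.0 > mc
print(deadweight_loss())       # 8.0 of surplus forgone on the withheld units
```

With these numbers, competition yields 8 units at a price of 2, while the monopolist restricts output to 4 units at a price of 6; the forgone surplus on the withheld units is the inefficiency the review describes.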

Darwinian jungle / It is conventional to think of laissez faire and the free market as complementary. If laissez faire is a government policy whereby government officials refrain from interfering with the plans of market participants, then—the standard thinking goes—markets will be free. However, Shaanan argues, such thinking can harm truly free markets. Consider his "Myth 10: Free Market and Laissez Faire Are the Same." He writes, "Laissez faire economists usually oppose government antitrust intervention even when used to stop anticompetitive practices and strengthen markets and competition." It is reasonable for Shaanan to characterize free-market economists as critical of antitrust policy. He is aware of Joseph Schumpeter's point that monopolists who innovate should not be discouraged, and Richard Lipsey and Kelvin Lancaster's theory of the second best, though those contributions are relegated to an endnote. In the body of the text, he claims that
free-market economists “justify their opposition to antitrust laws with the argument that such government actions violate the requirement that no coercion be involved in a free market.” That statement underrates the aforementioned critiques of antitrust policy and makes one wonder what’s wrong with opposing coercion. Shaanan writes, “It is not coercion that bothers them but government, or more likely, democratically elected government.” The real agenda of free-market economists, he alleges, is to achieve corporate hegemony.

Misrepresenting market supporters / Shaanan generally does offer an accurate portrayal of free-market principles, but there are troubling occasions when he caricatures and neglects to cite what free-market economists actually write. Consider "Myth 4: Deregulation Always Improves the Economy." I claim that's a caricature because the economic way of thinking recommends deregulation when the benefits exceed the costs, which is not always. In his exposition of this supposed myth, Shaanan cites New York Times columnist Thomas Friedman, who is an accomplished writer but not a free-market economist.

America's Free Market Myths: Debunking Market Fundamentalism
By Joseph Shaanan
303 pp.; Palgrave MacMillan, 2017

Shaanan's notions and observations of regulatory affairs are debatable. Ponder these ideas:

Those who enjoy freedom from want often object to extending that freedom through government to many others. They describe such attempts to improve the lives of the many, whether it is a minimum wage law, an antiusury law and bans on financial predation, as restricting their freedom.

The author acknowledges that freedom means different things to different people. "To some," he explains, "it represents freedom of opportunity and choice; to others it represents freedom from want and having the bare necessities such as food, shelter and health care." The former meaning appeals to free-market economists; the latter to Shaanan. He does a service by presenting different viewpoints. But it is unhelpful for him to misrepresent the motivation of critics of regulation by saying that they already have their stuff and don't want other people to have theirs. One reason economists criticize the minimum wage law is that it denies the least productive workers the freedom to work at a wage below the minimum. Having the freedom to work for a low wage does not make one rich, but opting for a low wage is better than being unemployed. The argument against legal maximum interest rates is that they will reduce lending, which harms borrowers. Likewise, it is possible to evaluate "bans on financial predation" by focusing on, say, the difficulty of defining predation, without concern for anyone's freedom to prey. (By the way, the block quote above footnotes Karl Polanyi's The Great Transformation as a source, not the work of a free-market economist.)

Shaanan recognizes problems with regulation, such as "regulatory capture," whereby industry executives co-opt regulators who erect barriers to enter the industry and rubber-stamp price increases. "It is not clear," he adds, "that regulators have the necessary information to set reasonable prices." He also admits that deregulation was successful "in some industries." Nevertheless, he devotes much more space to what he considers problems with deregulation. "In the airline industry," he observes, "prices initially declined and new airlines were established; however, several airlines disappeared, at times, because of anticompetitive practices." Yet, thanks in part to

those lower prices, monthly U.S. air carrier passenger travel is up from about 48 million in January 2000 to 60 million last September. Shaanan attributes airline bankruptcies to "anticompetitive practices," but competitive behavior (price and nonprice) will also reduce the number of firms in an industry. I make these points to dispute the author's implication that deregulation of the airline industry did not lead to "the expected competitive outcomes," which is not to suggest that there is no room for improvement in the airline industry.

Housing bubble / There is much to debate over the role of deregulation in the financial crisis of 2008. Shaanan writes extensively about it in "Myth 12: The Government Caused the Crash of 2007–08." The gist of his characterization of the free-market perspective is this:

The guilty parties within government, although not exclusively, are Fannie Mae and Freddie Mac—the quasi-government mortgage agencies that practically gave away money to people who, they knew or should have known, would not be able to pay their mortgages. The Community Reinvestment Act, created to promote home ownership in defiance of fundamental market principles, played a key role in bringing about this state of affairs.

The author cites several authors to deflect blame from Fannie and Freddie for the economic turmoil. "Perhaps most importantly," he argues, "[Fannie and Freddie] were not involved in subprime lending—a major factor in the crash—until late in the game (2005), at which point they were followers rather than leaders." Yet economist Russell Roberts of the Hoover Institution has argued that Fannie was in the game early. He quotes Fannie's CEO, who said, "Fannie Mae was at the forefront of the mortgage industry expansion into low–down payment lending and created the first 3-percent-down mortgage." ("Gambling with Other People's Money," Mercatus Center, April 28, 2010.) Moreover, getting in the game late does not preclude making a difference. "Between 2004 and 2006," Roberts reports, "[Fannie and Freddie] still purchased almost a million home loans each year made to borrowers with incomes below the median."

Shaanan deemphasizes any damage done by the Community Reinvestment Act (CRA) because "it was private mortgage companies and other financial firms not subject to CRA rules that sold large quantities of subprime mortgages throughout the nation." I don't doubt that, but as Charles Calomiris and Stephen Haber have noted, "Community Reinvestment Act loans by banks, as well as the mandates imposed on Fannie and Freddie that effectively forced them to purchase those loans, set in motion a process by which America arrived at debased lending standards for everyone." ("Strange Bedfellows at the Bank," National Review, Feb. 4, 2014.) Economists who criticize government policies aim not to exonerate all private-sector actors and lay all the blame on government officials; they indict the two as co-conspirators.

Rent-seeking / Shaanan's condemnation of cronyism is commendable. He declares:

A major weakness of the economic system is that it encourages rent seeking behavior. This means that some of America's most talented people devote their energies to requisitioning existing wealth, rather than creating new wealth.

He expounds on this in "Myth 2: A Great Wall Separates Politics and the Economy." The favors that businesses seek from government officials include subsidies, tax breaks, and bailouts. Here are a few examples:

For many years, oil, gas, ethanol producers and sugar growers have received large government subsidies. Oil and gas companies and mining companies have also received resources at below market prices. Television stations do not have to pay for use of the spectrum. The 2003 Medicare law banned government from bargaining with pharmaceutical

companies over prices paid for medicine purchased.

Of course, free-market economists are sharp critics of rent seeking. This journal, for instance, often seems like a quarterly exposé of such shenanigans. Yet Shaanan implies that free-market economists shill for corporations. He alleges, “While Milton Friedman and his followers link laissez faire with the defense of individual freedom, in actuality, it is large corporations’ right to profit that is being defended.” The charge is uncharitable if not unfair. Friedman supported free enterprise, not business, and pointed out that losses were essential to the process.


Readers of America’s Free Market Myths who prefer increased government intervention in the economy will find the book comforting confirmation that corporations exert undue influence, the middle class is stagnating, and greed caused the Great Recession. They might also learn that government officials are not the faithful servants of progressive conceptions of the public interest. Readers who prefer limited government and a larger role for markets, on the other hand, will be challenged to defend the free market against Shaanan’s accusations. Readers of all sorts will learn that cronyism is a real problem and economists of all sorts would do well to battle it.

Against Tribal Instincts ✒ BY PIERRE LEMIEUX


This year marks the 30th anniversary of the publication of F.A. Hayek's last book, The Fatal Conceit: The Errors of Socialism. Hayek, of course, was one of the major classical-liberal theorists of the 20th century. Many would call him a libertarian; others would deny him the honor—or curse—of that label. He was a

polymath who contributed to many fields of inquiry, including economics, political theory, psychology, and philosophy. In 1974 he shared the Nobel Prize in Economic Sciences with Gunnar Myrdal, a socialist economist who may be remembered for nothing other than that. In the entry on Hayek in the New Palgrave Dictionary of Economics, Bruce Caldwell notes, "If Hayek was in the right place at the right time, it was usually with the wrong ideas, at least from the perspective of most of his contemporaries." Hayek was a deep, original, and controversial thinker, as The Fatal Conceit illustrates.

PIERRE LEMIEUX is an economist affiliated with the Department of Management Sciences at the Université du Québec en Outaouais. His new book, What's Wrong with Protectionism?, is forthcoming from the Mercatus Center.

Economics of knowledge / The main error of socialism, he argues, lies in its rationalist goal of social engineering. Socialists do not see the limitations of reason. They do not understand that between natural and artificial phenomena, "between instinct and reason," there is a third category that contains social institutions such as language, morals, and law. Such evolved institutions, which include private property and markets, allow us "to adapt to problems and circumstances far exceeding our rational capacities," even if we cannot rationally explain their complex benefits. Attempting to remodel society on a rational basis could spell the end of civilization and "destroy much of present humankind and impoverish much of the rest."

The ideas in The Fatal Conceit build on Hayek's economics of knowledge, developed in the 1930s and 1940s. Consider, say, tin, as he proposed in his 1945 article, "The Use of Knowledge in Society" (American Economic Review 35[4]: 519–530). If the metal becomes scarcer, its price will increase, transmitting to consumers the signal to economize on tin-made goods
and to producers the signal to produce more tin. It does not matter whether the cause is that the supply of tin has decreased or its demand has increased, and nobody needs to understand that. Through trade, the price signal will be transmitted to consumers and producers far away.

Market-determined prices incorporate the producers' local knowledge, the traders' information, and the users' preferences. All information is thus taken into account by all. In this way, more knowledge is used in society than any individual separately possesses and any planner could ever marshal. Through markets, more is produced of what consumers want than any other economic system could achieve.

More generally, evolved social institutions use information efficiently. They represent the results of the interaction of millions of individuals over time, and they incorporate all the underlying knowledge, even unconscious knowledge (as most of our knowledge is, from Hayek's viewpoint). In this way, "cultural evolution, and the civilization it created, brought differentiation, individualization, increasing wealth, and great expansion of mankind." Hayek reminds us that the evolutionary approach was pioneered in social studies by Adam Smith and Adam Ferguson in the 18th century, before Darwin used it in biology.

The Fatal Conceit: The Errors of Socialism
By F.A. Hayek
194 pp.; University of Chicago Press, 1988

Extended order vs. the tribe / This efficient use of information allows an "extended order" of cooperation among individuals, "beyond the limits of human awareness." "Every individual becomes a link in many chains of transmission through which he receives signals enabling him to adapt his plans to circumstances he does not know," Hayek writes. Society is a "complex system"—he used that expression before it became so omnipresent in science—produced by the independent actions of its individual elements. In comparison, a government bureaucracy is a very simple and ignorant system. "So far as we know," he hypothesizes, "the extended order is probably the most complex structure in the universe."

Trade—especially international trade—provides the paradigmatic case of an extended order. Trade fuels specialization and the division of labor. Trade caused "a substantial disruption of the early tribes," which contributed to new knowledge, the advance of civilization, and economic prosperity.

The extended order is based on abstract rules, like those of private property and the rule of law, which do not impose common goals but allow each individual to pursue his own personal ends. At the polar opposite, the tribe is made of individuals who know each other and are obliged to conform to concrete morals based on primitive instincts and the requirements of collective goals. As Hayek says, the process of the extended order of civilization has been to replace "common concrete ends" with "general, end-independent abstract rules of conduct." The moral rules of an extended order contradict the stifling traditions of the tribe, but they also restrain primitive urges, which is why so many people resist them.

Hayek presents 18th-century French philosopher Jean-Jacques Rousseau as the modern representative of tribal morals. By forcing individuals to obey concrete orders from the state, socialism undermines the extended order in favor of a Rousseauvian conception of the good savage in a tight and ecological society. Contrary to what Rousseau thought, notes Hayek, "the primitive is not solitary, and his instinct is collectivist." Let me add that the vision of society or "the country" as a team evokes the tribal order that Hayek finds behind the socialist agenda.

Hayek sees the process of social evolution as analogous to, but different from, biological evolution. He criticizes sociobiology for focusing on genetic transmission instead of imitation and learning in the formation of the rules we follow. He writes:

The gradual replacement of innate responses by learnt rules increasingly distinguished man from other animals, although the propensity to instinctive mass action remains one of several beastly characteristics that man has retained. … The decisive change from animal to man was due to such culturally determined restraints on innate responses.

In Chapter 7 of The Fatal Conceit, titled "Our Poisoned Language," Hayek brings to our attention the fact that our usual language is often biased toward primitive ways of looking at the world, toward small-group morals. Consider, for instance, the routine personification of society and the glorification of everything "social." In reality, he reminds us, collective utility "exists as little as collective mind."

Ode to diversity / Contrary to the homogeneous tribe, the extended order is based on diversity—real diversity. A few pages of The Fatal Conceit sing an ode to diversity. For example:

Civilization is so complex—and trade so productive—because the subjective worlds of the individuals living in the civilized world differ so much. Apparently paradoxically, diversity of individual purposes leads to a greater power to satisfy needs generally than does homogeneity, unanimity and control—and also paradoxically, this is so because diversity enables men to master and dispose of more information.

Hayek emphatically rejected the “conservative” label. The postscript of his Constitution of Liberty (1960) was titled “Why I Am Not a Conservative.” In The Fatal Conceit, he suggests that sexual mores should naturally change when the purpose of previous social taboos no longer exists:


I believe that that new factual knowledge has in some measure deprived traditional rules of sexual morality of some of their foundation, and that it seems likely that in this area substantial changes are bound to occur. … I am entirely in favor of experimentation—indeed for very much more freedom than conservative governments tend to allow.

He adds that “the development of variety is an important part of cultural evolution, and a great part of an individual’s value to others is due to his differences from them.” Hayek has always been as ahead of his time as he has been controversial. He is also far from being a progressive. An important implication of accepting the spontaneous social order is that “social justice” is meaningless. There can be no social justice or injustice in an order that develops spontaneously. Evolution cannot be just or unjust. Only an individual can be just or unjust. Too conservative?

/ In some ways, The

Fatal Conceit appears to be Hayek’s most conservative book. Speaking about philosopher W.W. Bartley III, who helped an ailing Hayek finish the book (and provided a complete edition of Hayek’s work), Caldwell mentions that “questions have been raised about how much of the book should be attributed to Bartley and how much to Hayek.” Yet, The Fatal Conceitseems in line with Hayek’s three-volume Law, Legislation and Liberty (1973–1979). But the earlier Constitution of Liberty arguably represents a younger Hayek who was more classical-liberal. Already visible in Law, Legislation and Liberty, a sort of absolutist traditionalism colors The Fatal Conceit. Hayek goes so far as to say that “man became intelligent because there was tradition … for him to learn” (his emphasis). Everything is fine as long as tradition generates a liberal extended order. But it does not always. What if tradition veers toward the tribal model? As Hayek seems to admit, this happened in Sparta, and

ultimately in Egypt, Athens, the Roman Empire, and China. If, as he also admits, "the expansion of capitalism—and European civilization—owes its origins and raison d'être to political anarchy," should statist traditions be obeyed? Should we just wait and see, and embrace whatever comes out of the system? At a certain point, traditions become stifling. Traditions should be revered, but only up to a point.

A related concern is raised by Chapter 8, titled "The Extended Order and Population Growth." Societies that adopted the morality of the extended order have prevailed over primitive societies, Hayek argues, in large part because their traditional rules have allowed for the creation

of more wealth and thus a more numerous population. Against environmentalists and Malthusian types, he claims that population growth fuels diversity, the division of labor, and productivity.

But are the results of group selection by cultural evolution necessarily good? Hayek says he does not make that claim, but a simpler one: that a return to tribal morals would "doom a large part of mankind to poverty and death." So far so good, and we might forget libertarian concerns for primitive tribes displaced by the advance of civilization—provided the displacement respects certain humanitarian constraints. But the author of The Fatal Conceit goes a bit further: "There is in fact no reason to expect that the selection by evolution of habitual practices should produce happiness." Is this morality sufficient? Aren't we back to a simplified utilitarianism where individuals can be sacrificed to the existing order?

Appendix D of the book also suggests tensions between the individual and the spontaneous order of evolved institutions. Hayek claims that "not even all existing lives have a moral claim to preservation." He understands the Eskimo practice of abandoning senile members to die before the tribe's seasonal migration because it may have allowed them to save their offspring. He admits the "individual's right voluntarily to withdraw from civilization," but questions any "entitlements" to those who do that. "Rights derive from systems of relations of which the claimant has become a part through helping to maintain them," he explains. The entitlement point is close to the libertarian argument that individuals should not be forced to subsidize others, but one can easily imagine tyrannical drifts and exclusions—whereby, for example, rednecks and other "deplorables" would be excluded from society.

/ Regulation / 67

Conclusion / In The Fatal Conceit, Hayek defends the extended order of the market as well as the traditional institutions that produced and maintained it (in some parts of the world, at some time in history). But he does not really consider what to do when a conflict appears between evolved traditions and individual liberty. Has he gone too far in his reverence for tradition? Have we lost meaningful individual consent in the process? Whatever the answer, the broad points made in The Fatal Conceit remain valid and provide a welcome antidote to the deification of lawmakers. The fact that people only venerate ideal lawmakers, not those they actually observe, should help deflate the state’s aura. Unfortunately, most people believe that the big problem is that the Red team is in power instead of the Blue, or vice versa. Hayek shows that traditional rules embedded in the extended order of a free-market society provide a better way of coordinating individual actions than commands from lawmakers and other political authorities. Traditions, of course, must remain open to criticism, but this does not mean they should be actively challenged by a social-engineering state.




Rarely do I enjoy a book by an author with whom I fundamentally disagree on the book's topic. But Brian Steensland, a sociology professor at Indiana University–Purdue University Indianapolis, has written such a book. The Failed Welfare Revolution was first published in 2008, but it has now been reissued as a paperback, probably because of renewed interest in a guaranteed annual income (GAI) program. In the book, Steensland traces the development, in the U.S. context, of the idea of a GAI or a negative income tax (NIT) from the 1940s, to President Richard Nixon's serious attempt to implement a version during his first term in office, to later discussions of the idea within Jimmy Carter's administration.

Steensland would probably dislike the title of this review. He sees the defeat in the U.S. Senate of Nixon's proposed Family Assistance Plan, which the House of Representatives passed in 1970 by a vote of 243–155, not as dodging a bullet but, on the contrary, as a huge lost opportunity. But one doesn't have to agree with his perspective to learn from his book.

Steensland gives a detailed account of the various proposals for some version of a GAI, starting with Lyndon Johnson's administration, through the Nixon, Gerald Ford (briefly, given his two years in office), and Carter administrations. Although Steensland is a sociologist, he exhibits a basic understanding of the effects of economic incentives on work and family dissolution. And refreshingly, his book is relatively free of cheap shots at those with whom he disagrees. He tries hard to understand their views rather than merely dismissing them as unworthy. That makes it relatively easy to judge the arguments of the various players and come to conclusions different from Steensland's. In particular, I found the arguments against Nixon's proposal by some of Nixon's own staff much more convincing than Steensland did.

DAVID R. HENDERSON is a research fellow with the Hoover Institution and emeritus professor of economics at the Graduate School of Business and Public Policy at the Naval Postgraduate School in Monterey, CA. He was the senior economist for health policy with President Ronald Reagan's Council of Economic Advisers.

Early efforts / Most people who know the history of the negative income tax, which is a form of GAI, associate it with the University of Chicago's Milton Friedman, who proposed such a scheme in his famous 1962 book Capitalism and Freedom. The idea was to have the U.S. Treasury make a payment to people whose incomes were, for whatever reason, below some low level—thus the term "negative income tax." Then, if their income from sources other than the U.S. Treasury increased, their payment from the Treasury would fall by 50¢ for every additional dollar of income. Steensland gives Friedman due credit but also points out that in 1946 George Stigler, later to be Friedman's colleague at Chicago, had proposed such a plan as an alternative to the minimum wage, which, as Stigler noted, destroyed jobs for the unskilled.

Nothing much happened on the NIT front for more than a decade after Stigler's 1946 proposal. But Michael Harrington's 1962 book about poverty in America, The Other America, helped bring attention to it. In that book, notes Steensland, Harrington "argued that it was worse to be poor in an affluent society than in one in which most of the populace was poor." Steensland doesn't comment on that claim, but I will. If envy of those around you eats at you, then yes, Harrington probably is right. But if you are poor and can see that many people around you are relatively
wealthy, that can spur you to take initiative; if poverty is defined by real income, I would rather be poor in the United States than in, say, Colombia. In the early 1960s, some liberal/left economists and others started pushing for an NIT or GAI. Among them were economists Robert Theobald and James Tobin. Theobald proposed a flat income grant that would go to every U.S. (presumably adult) citizen. How left-wing was Theobald? Steensland notes that he described left-wing economist John Kenneth Galbraith’s The Affluent Society as “extraordinarily conservative.” Around the same time, many people’s view of the GAI changed, partly because of an influential 1964 article in the Yale Law Journal by legal scholar Charles Reich. Friedman and others had argued that government should give money to the poor because that is a way for taxpayers to help them. But Friedman was always clear that the poor didn’t have a right to an income provided by government. Reich, however, “argued that welfare benefits were a statutory right once a recipient had established eligibility.” During Johnson’s presidency, his newly formed Office of Economic Opportunity (OEO) considered various versions of a GAI or NIT. In 1967, the OEO started a huge, expensive social science experiment to estimate the effect of an NIT on work. The group chosen for the experiment consisted of low-income families with an “employable man” between ages 18 and 58. The reason for the experiment, notes Steensland, is that the OEO and others were concerned about how much an NIT would discourage work effort, especially by men. The experiment ran for many years—the results finally came out in 1978—and yet the people proposing various versions of an NIT did not want to wait for the conclusions. Partly because of the race riots of the mid-1960s, many advocates—including representatives from big businesses—pushed for some version of a GAI. 
However, the high budgetary cost of the Vietnam War in the last few years of Johnson’s time in office helped put a halt to such a program.


Steensland notes that a proposed NIT would have added $5 billion to the federal government's budget, while a program of family allowances (a payment for every child in every family) would have added $6–14 billion. Steensland doesn't point out just how high a number this was, so I will. Total federal spending in fiscal year 1967 was $158 billion; that means the cheaper program, the NIT, would have increased government spending by over 3%, and the most expensive version of the family allowance would have increased it by 9%.

Steensland claims that the escalating cost of the war caused domestic spending to contract sharply, but that's not true: domestic federal spending increased even as the war's cost rose. He mistakenly uses the fact that Johnson proposed a surtax on corporate and individual taxes as evidence of a shrinking budget for domestic spending: taxes are on the revenue side of the federal budget, not the spending side.

The Failed Welfare Revolution: America's Struggle over Guaranteed Income Policy
By Brian Steensland
304 pp.; Princeton University Press, 2018

Nixon's FAP / From the outset, Nixon wanted some kind of major overhaul of what he called "the welfare mess." He was worried about poverty but also about more racial unrest. He couldn't know then that the worst of the racial unrest was over.

Within days of being elected in November 1968, Nixon had appointed political scientist Richard Nathan, a researcher at the Brookings Institution, to run a task force to make recommendations about welfare. In its December 1968 report, the task force suggested incremental changes to the existing system. Once Nixon took office, a number of players within the administration, including holdovers from Johnson's administration, weighed in on the various issues. Steensland does a great job of laying out each major faction's concerns, viewpoint, and proposed policies.

At least four big issues divided the various advocates. The first was their view about why people were poor. Some believed that poverty was an inevitable result of being unable to find work in an increasingly technological workplace that required more skills. A 1964 report titled "The Triple Revolution" predicted that in the near future, 6 to 8 million jobs would disappear. When the report was published, 69 million Americans were employed in civilian jobs; by 1970 the number was up to 79 million. Oops. Others thought that the labor market would give work to anyone who wanted it, but that even some of those who got jobs would still be poor.

A second issue that divided them was their view of the deserving and undeserving poor. Some of the advocates didn't think in those terms: if people were poor, whatever the cause, that was justification enough for the federal government to help them. Others saw a strong distinction: the deserving poor were the sick and infirm; the undeserving poor were those able-bodied people who weren't working.

The third issue concerned rationality and paternalism. Some thought that the poor were capable of making good decisions for themselves; others thought that if the government gave them money, many would make bad decisions on how to spend it and on whether to work.

The fourth issue was dependence. Some advocates worried that an NIT or a GAI would create dependence, while others didn't worry about that.

You can take one side or the other on each of the above issues. But as one of Nixon's main advisers on welfare, Daniel Patrick Moynihan, became famous for saying, "Everyone is entitled to his own opinion, but not to his own facts." I repeat the quote because Moynihan, in pushing for one version of a plan, told Nixon a whopper. In an April 1969 letter to Nixon,
Moynihan wrote, “For two weeks’ growth in the Gross National Product you can all but eliminate family poverty in America.” Um, no. The typical annual growth rate in those days was just above 3%, so two weeks of growth was about 0.12% of GNP. Even the cheapest plan would have cost well over half a percent of GNP, which is more than four times Moynihan’s estimate. One of the most interesting parts of Steensland’s narrative is his discussion of the role of White House staffers Arthur Burns (later chairman of the Federal Reserve Board) and Martin Anderson, a young economist who was instrumental in getting rid of military conscription. They disliked the idea of a GAI and an NIT on the grounds that the programs would lead to the view that welfare is a right. Burns also argued that it would reduce low-skilled workers’ incentive to work. On this issue I found Steensland to be particularly fair. He dismisses the idea that their arguments were “rhetoric cloaking economic interests.” He writes that they “had no contact with business elites” and did not “have any reason to dissemble.” After listening to the various players, Nixon put together his Family Assistance Plan (FAP) his first summer in office and did a full-court press to get it into law. It was an NIT in which the loss of subsidy for every dollar of income beyond some level was 50¢, an implicit marginal tax rate of 50%. Families of four with earnings up to $3,920 would have been eligible for a government subsidy. The plan was set to begin in 1970 or 1971, when the median family income was just shy of $10,000. Although Nixon’s proposed law passed overwhelmingly in the House, it died in the Senate Finance Committee, which voted it down 10–6. One reason was that conservative Democrats joined some conservative Republicans in voting against it. 
Another reason was that the National Welfare Rights Organization, a group supported by people on welfare, saw—correctly—that most of the benefits would go to the working poor, not to those who, like them, were already on welfare. They wanted more for themselves, plain and simple. As a result,
three liberal senators voted against it. In 1972 Nixon, running against George McGovern, wanted to distinguish himself from McGovern on the issue of welfare. McGovern had proposed a “demogrant” of $1,000 per person. It was, in essence, a GAI. Not everyone would get a check: the phaseout of the $1,000 would be complete when a family of four had a total income of $12,000. McGovern’s proposal met with so much flak that he backed down. But Nixon, seeing a chance to distinguish himself from his opponent, attacked the “‘welfare ethic’ that could cause the American character to weaken.” Steensland notes the irony: “It was Nixon’s proposal that would expand the provision of benefits to thirteen million additional people, while McGovern’s new plan [in place of the demogrant] would maintain the existing number of people on the rolls.” Even though the FAP was dropped, it did leave a policy legacy. Nixon introduced Supplemental Security Income (SSI), a special welfare program for people who were elderly, blind, or disabled. And in 1975, President Ford and Congress started the Earned Income Tax Credit (EITC), a kind of negative income tax for people who are employed. The optics on both of these were very different from those on the FAP. SSI was viewed as giving help to some of the most deserving poor. The EITC was viewed as an incentive for people to work more, not less, even though past some income level the EITC would phase out, reducing the incentive to work. Later efforts / When Carter became president in 1977, he pursued welfare reform. His marching orders to Joseph Califano, his secretary of health, education, and welfare, were to give options for comprehensive reform that would cost no more than the current system. Califano, responding to that directive, counted spending on a number of other government programs as part of his baseline so that he could give himself more running room to propose increased benefits or to extend benefits to a larger group. 
With the tax revolt that began with the overwhelming passage of California's Proposition 13 in June 1978, though, moves toward anything like a GAI or an NIT were dead. But the EITC was expanded in 1978 with little opposition. With the election of Ronald Reagan, the GAI and NIT were both dead.

In 1978, notes Steensland, the results of the NIT experiment came out and showed that the NIT reduced work by employed men by 5–10%, which was less than many people had feared. But the reduction for married mothers was 20–25%. Steensland claims that welfare critic Charles Murray used the results of the NIT experiment "in exaggerated ways," but he doesn't say how. One striking result of the experiment was that marital breakups for participants receiving NIT benefits were 60% higher than for those in the control group. Interestingly, Steensland—who advocates a GAI or an NIT—expresses no concern about this. He devotes not a single word to the effects of marital breakup on children.

Because there is a renewed push today for some form of GAI—even some libertarians advocate it—there's a lesson to be learned from Steensland's book. Some libertarians claim that we could have a reasonable GAI by cutting all other means-tested benefits. This turns out to be false, as I showed in "A Philosophical Economist's Case Against a Government-Guaranteed Basic Income" (Independent Review 19[4], 2015). But even if it were true, the Nixon proposal was for a major expansion of welfare spending with no offsetting cuts in means-tested programs such as housing aid and food stamps. The lesson: libertarians who push for a GAI will end up being in an alliance whose main constituents want more government spending. And that's what they'll probably get.

Were Federal Prosecutors Really Chicken? ✒ REVIEW BY VERN MCKINLEY


There was plenty of outrage to go around after last decade's financial crisis. Advocates for limited government were livid at the use of public financial commitments to save institutions that should have been allowed to fail. Progressives agreed with that sentiment, but they were also angry about the lack of prosecutions
of financial executives. Jesse Eisinger, a Pulitzer Prize–winning journalist for ProPublica, is in the latter camp. In his new book, The Chickenshit Club, he takes on the Justice Department bureaucracy that decided against those prosecutions.

The book's provocative title comes from a story Eisinger tells in the introduction. James Comey, who would later make plenty of headlines as head of the Federal Bureau of Investigation, was speaking to a collection of federal prosecutors shortly after his appointment as the U.S. attorney for the Southern District of New York, the epicenter of Wall Street prosecutions. He asked his audience: "Who here has never had an acquittal or a hung jury? Please raise your hand." A number of prosecutors, proud of their perfect records, thrust their hands into the air. "Me and my friends have a name for you guys," Comey continued. "You are members of what we like to call the Chickenshit Club." The point of embarrassing the hand-raisers was that acquittals and hung juries indicate the prosecutor is unafraid to try cases that aren't clear-cut—or in the words of Eisinger as he summarizes Comey's philosophy, "to be bold, to reach and to aspire to great cases, no matter their difficulty."

VERN MCKINLEY is a visiting scholar at George Washington University Law School and author, with James Freeman, of Borrowed Time: Two Centuries of Booms, Busts and Bailouts at Citi, forthcoming from HarperCollins.


A strong hand / I am particularly interested in this topic of prosecutions because early in my career I worked for the Federal Deposit Insurance Corporation and Resolution Trust Corporation as they cleaned up broke banks and savings and loans during the 1980s and 1990s. As part of that cleanup, there were thousands of convictions of individuals for major financial institution crimes—mostly directors and officers of institutions, as well as attorneys, accountants, and other professionals. All indications were that these were bad actors who abused their positions solely to enrich themselves.

Consistent with that view, Eisinger runs through an encapsulated history of what he calls the "Boom, Bust, and Crackdown" cycle. After the 1929 stock market crash, Congress created the Securities and Exchange Commission, a permanent platform for pursuing those committing financial malfeasance. After the stock market run-up of the 1960s, government cracked down on professionals involved in corporate fraud. After the banking and savings-and-loan failures and the junk-bond blow-up of the 1980s and 1990s, the Department of Justice prosecuted many top executives at failed institutions, and Michael Milken went to jail. Finally, after the tech bubble popped at the start of this century, top officers of Enron, WorldCom, and others also did time. But something then changed, according to Eisinger:

By contrast, after the 2008 financial crisis, the government failed. In response to the worst calamity to hit capital markets and the global economy since the Great Depression, the government did not charge any top bankers.

The Chickenshit Club: Why the Justice Department Fails to Prosecute Executives
By Jesse Eisinger
400 pp.; Simon and Schuster, 2017

Case studies / This book is quite a bit different than I had expected when I ordered it. I thought that it would have some historical background on the evolution of prosecutions from the takedown of Enron and WorldCom during the early 2000s to the hands-off approach of the late 2000s. I believed that Eisinger would commit most of the book to case studies of the most highly publicized financial institution failures during 2008 and 2009 and the relevant key executives, and try to make an argument for why and under what charges the DOJ should have pursued them.

However, an all-in case study approach is not the path he chose. A quick review of just a few examples from the book's index bears this out. Angelo Mozilo of Countrywide, who settled with the SEC and avoided a civil trial and criminal prosecution, receives all of two pages of discussion. Dick Fuld, Ian Lowitt, and Erin Callan of Lehman Brothers, who collectively deployed some very questionable accounting practices, likewise receive two pages each. Eisinger does go through case studies of John Paulson, who put together the Abacus 2007 mortgage deal for Goldman Sachs, and Joseph Cassano, who headed up AIG's Financial Products Group. But my expectation was that the bulk of the book would be these types of case studies, rather than just a few isolated instances.

The builders / Instead, Eisinger makes it through nearly half the book before he reaches 2009, when "the Obama people came in to the Department of Justice." He spends that first part of the book tracing through the heyday of the development of enforcement and prosecutions, and then showing how the regime slowly unraveled during the 2000s. Stanley Sporkin spent two decades at the SEC from the early 1960s to the early 1980s, building the capacity of its enforcement division as its director. "By the end of his run at the SEC in 1981, Sporkin had become a hero
regulator feared by Corporate America,” Eisinger writes. “He would come to be known as the ‘Father of Enforcement.’” Eisinger then connects Sporkin to a kindred spirit: To prosecute crimes, [Sporkin] needed allies at the Department of Justice. The Southern District had an effective monopoly on white-collar enforcement at the time. Main Justice and other offices around the country played little role. So Sporkin looked north and discovered friends in the Southern District of New York. He found one prosecutor particularly excited about his brand of justice, a brilliant, young, and aggressive lawyer eager to attack corporate and securities scofflaws: Jed Rakoff.

Rakoff spent much of his early career as a prosecutor in the Southern District of New York and was ultimately appointed to that district's bench by President Bill Clinton. In an interesting twist of the weaving storyline, Rakoff would later become more widely known for criticizing a proposed settlement offered by the SEC in a case involving Citigroup in 2011, arguing, "The court has not been provided with any proven or admitted facts upon which to exercise even a modest degree of independent judgment."

The last gasps of a tough DOJ enforcement regime can be traced to January 2003 and a memo by George W. Bush administration deputy attorney general Larry Thompson in the wake of the Arthur Andersen collapse. The memo set out a tough critique of how firms were complicit in protecting bad actors: "Companies … purport to cooperate while impeding exposure of the full account of a company's wrongdoing." Eisinger summarized Thompson's philosophy: "Pervasive bad behavior, a lack of contrition, a phony compliance program—his Department of Justice would treat these transgressions sternly. Members of the white-collar bar howled in outrage."

A different direction / One of the strands of

the countervailing movement against prosecuting individuals was initiated by Mary Jo White during her tour of duty as U.S. attorney for the Southern District of New York. Deferred prosecution agreements (DPAs) allowed the DOJ to make relatively quick work of its investigations. These involve a nice quid pro quo for both sides: the DOJ extracts a large fine and the targeted institution avoids a long, expensive, and public fight. DPAs also have obvious downsides, as Eisinger explains:

Since these settlements lacked transparency, the public didn't receive basic information about why the agreement had been reached, what the scale of the wrongdoing was, and which cases prosecutors never took up. How could the public know how tough they were, really?

Also helping along the movement away from prosecutions was the development of the revolving door of lawyers moving from the DOJ “farm team” to the big defense law firms, and sometimes back to the DOJ again. No firm better characterized this trend than Covington & Burling and its alum, Eric Holder. After graduating from Columbia Law School, Holder began his career on the DOJ staff. After a stint as a judge, Clinton first appointed him as a U.S. attorney and then deputy attorney general. Holder then went to Covington & Burling during the George W. Bush administration, only to return to Justice as the attorney general during the Obama years. Although progressives may have had high hopes that the Obama administration would seek “Old Testament vengeance” against executives in the financial industry, Eisinger makes it clear that this was unlikely to happen. In its first year, the department was cautious and overwhelmed by political infighting. One former top official at Justice likened it to a soccer game with 6-year-olds, where everyone clusters around the ball, and it never advances.

A Financial Fraud Enforcement Task Force, announced with great fanfare in late 2009, made little progress. After being called on the carpet for his lack of progress in 2013, Holder resorted to Chicken Little warnings, words that would be ridiculed as the "Too Big to Jail" problem: "I am concerned that the size of some of these institutions becomes so large that it does become difficult for us to prosecute them when we are hit with indications that if you do prosecute—if you do bring a criminal charge—it will have a negative impact on the national economy, perhaps even
the world economy." Holder, who later recanted the comment, was the most high-profile of the spinners of the revolving door, but by no means the only one. He is now back at Covington.

If you are interested in the procedural changes and capture of Justice, you will find The Chickenshit Club a good read. But if, like me, you were hoping for a little less procedural history and a lot more case studies of whether the DOJ did indeed act in a cowardly fashion, then you will be a little disappointed.

Imagine What Future Generations Will Think of Us ✒ REVIEW BY DWIGHT R. LEE


Except for being more creative than most, Peter Leeson, the Duncan Black Professor of Economics and Law at George Mason University, is a perfectly normal economist. Because he is an economist, however, he is not a normal human being, as many readers will realize as they read this book. Leeson expects this realization and welcomes
it, as he explains early on when discussing his title, WTF: An Economic Tour of the Weird.

Consider some examples of things that are obvious to economists but bewildering to most people. Economists believe that a country can increase the incomes of its workers by importing products that they can produce with less effort and fewer resources than workers anywhere else in the world, a belief that is dismissed as bizarre by most normal people. When economists argue that there is an optimal amount of traffic fatalities, or environmental damage from pollution, or sexual harassment, or child abuse, most people are too appalled to appreciate the logic behind those arguments. Despite the powerful case economists make that the cost of goods increases when government subsidizes the goods' purchase, politicians continue to win elections by providing consumer subsidies. And it is easy to predict the reaction to economists when they propose reducing the threat to endangered species by making it legal for people to own some of them and profit by letting trophy hunters shoot them. There are many other examples of economists seeing as sensible things that most people see as weird (or worse).

Leeson is not primarily concerned, however, with defending the "weird" views of economists. His book instead explains why several historical practices that many readers will consider outrageous were actually quite sensible given the understandings and incentives that prevailed at the time. His explanations are based on the highly commendable way economists consider the practices of others. When we see or hear of others doing things that make no sense to us, it is easy to dismiss them as irrational if not downright stupid. Economists try to avoid this "stupidity" assumption because it often obscures reasonable, albeit sometimes difficult to understand, explanations.

The powerful understandings economists have gained over the years were made possible by assuming that people pursue their interests—both narrow and noble—in rational ways given the constraints and incentives they face. In order to develop these understandings, Leeson makes a serious effort to find out the legal details, social norms, and institutional incentives that constrained the choices people faced in bygone times. He then applies economic theory creatively to understand why people facing those constraints benefited from the seemingly weird and outrageous practices. His narrative finds him conducting a sort of museum tour with seven exhibits. He begins with some introductory comments on the importance of curiosity and being open to the possibility that if a practice survives for a few generations, it may have been socially sensible no matter how weird it seems today. I cover only four of his exhibits in this review, with overview discussions of his detailed but very accessible analysis.

DWIGHT R. LEE is a senior fellow in the William J. O'Neil Center for Global Markets and Freedom, Cox School of Business, Southern Methodist University. He is a co-author of Common Sense Economics: What Everyone Should Know about Wealth and Prosperity, 3rd edition (St. Martin's Press, 2016), with James Gwartney, Richard Stroup, Tawni Ferrarini, and Joseph Calhoun.

WTF: An Economic Tour of the Weird
By Peter T. Leeson
264 pp.; Stanford University Press, 2017

Hot water / The first exhibit, titled "Burn, Baby, Burn" (Chapter 2), examines the practice of "trial by ordeal" that was common in Europe in the 9th–13th centuries CE. The ordeals could involve heat or cold. In one example, the defendant would "plunge his hand into … boiling water" and pluck it out. If the defendant was innocent, his hand would be "safe and unharmed," people believed, but if guilty his hand would "show burn injuries on inspection three days later." Trial by ordeal could also find the accused carrying a "burning hot iron nine paces." The accused would be judged innocent if his hands were not damaged and guilty if they were.

Trial by ordeal was held in reserve for those being tried for the most serious crimes, and then only when the evidence was considered too ambiguous for earthly judgment. The rationale for trials by ordeal was to "let doubtful cases be settled by the judgement of God." The accused wasn't required to accept a trial by ordeal, but refusing the offer would suggest guilt. The astonishing thing is, as Leeson explains, "it seems that ordeals actually exonerated the majority of people who underwent them" (his emphasis). He finishes with a compelling argument that, given the prevailing beliefs and primitive technology of the day, trials by ordeal were sensible ways to reach decisions on guilt or innocence in tough cases. Furthermore, he argues that practices similar to "medieval-style ordeals" are used today to help determine criminal innocence or guilt, such as lie detector tests (which depend on the stress that test takers feel about lying when being monitored) and swearing an oath on the Bible.

Honoring and obeying until sold / Even trained economists may initially be appalled by Leeson defending the practice of letting men sell their wives to the highest bidder at public auctions. Yet this is what men could do in 18th–19th century England. He explains in Chapter 3 that the practice benefited both men and women, but particularly women.

Private sales were allowed, but they typically took place in public auctions because husbands expected open competition to yield the highest price. Not surprisingly, husbands generally claimed the "merchandise" possessed fine attributes, but some were "shockingly honest" in acknowledging flaws. For example, one husband's pre-auction description said his wife was a "tormentor, a domestic curse, a night invasion, and a daily devil."

Most sales were final, though one husband "allowed his wife's purchaser to try her out for three days, after which, if the buyer
was unhappy, he could bring her back for 50 percent of her purchase price.” Also, some purchasers bought wives with the intention of improving and flipping them. The bid for a wife was not always in cash. “Alcohol was a popular supplement to cash compensation.” Other cases included “lottery tickets, dinners, and a donkey” being used for payment.

It is easy to be outraged today at the thought of a woman having to endure the humiliation of her husband trading her for a donkey. But Leeson’s argument recognizes that few of us today have enough knowledge of the 18th century English legal system to judge how an Englishwoman might have felt about being publicly auctioned off. Even with knowledge of that legal system we might still, under the influence of today’s norms, express outrage at the restrictions it imposed on the choices of Englishwomen almost 300 years ago. But our norms are irrelevant to understanding what was sensible in England, or anywhere else, long ago. That fact counsels our being open to Leeson’s argument that the woman we assumed was humiliated was more likely overjoyed because of recent improvements in the English common law. Auctioning off a wife was one of those improvements because, by allowing her husband to get an ass in exchange, it made it possible for her to get rid of one.

Admittedly, the improvements in English common law that increased the options available to women were painfully slow. But Leeson’s argument moves us closer to understanding why the legal status of women improved faster in England, and countries influenced by the English common law, than anywhere else in the world.

Can lawyers put exterminators out of business? / In large parts of Europe in the 15th–17th centuries, insects, rodents, and other vermin could be taken to ecclesiastic courts for being too aggressive. As Leeson explains in Chapter 7, these trials were apparently conducted with the utmost seriousness, with “distinguished



judges ordering crickets to follow legal instructions, dignified jurists negotiating a settlement between farmers and beetles, and a decorous court granting a horde of rat defendants a continuance on the grounds that some cats prevented them from attending their trials.” Defendants were summoned to appear in court by reading the summonses where the defendants were generally found. If they failed to appear after three summonses, the court could convict them. Occasionally, however, some representatives of the charged species were brought to court for trial, and then released into their natural habitat to communicate the verdict to their fellow criminals after it was reached. Convicted pests were also notified that the penalty for refusing to turn themselves in was excommunication from the Holy Church.

How to explain bringing legal action against pests for behaving naturally? Part of Leeson’s attempt is found in Chapter 5, where he explains the social benefits from medieval clerics’ curses on those with whom they disagreed. He points out that a guilty verdict on destructive pests was an ecclesiastic curse widely considered to be as effective as those imposed on sinful humans. Furthermore, he argues that these judicial curses were perceived to reduce the damage from pests. This allowed the clergy to hold onto their legitimacy where it was being threatened by critics of the Church. The legitimacy allowed the clergy to continue generating general benefits with comforting promises to their flocks of a pleasant afterlife and less aggravation from pests during this life. With even more confidence, it allowed the clergy to continue capturing financial benefits with “thinly veiled threat[s] of continued plague if citizens continue to evade their tithes.”

More entertaining than a title search / In 12th–13th century England, good records on land ownership didn’t exist. For this and other reasons related to the feudal social structure, allocating land through markets was plagued with high transaction costs, resulting in regular disputes over land ownership. Courts attempting to resolve these disputes often turned to heavenly help by arranging combat between representatives of the two disputants—the assumption being that God would favor the rightful owner. As Leeson explains in Chapter 8, these representatives, referred to as champions, were chosen for their physical prowess. Indeed, some disputants tried to corner the market on brutes to limit their availability to opposing disputants. They believed in God, but they had also heard that God helps those who help themselves, even in underhanded ways.

From an economic perspective, however, the problem was not just allocating land to rightful owners, but to owners who would make the most valuable use of it. Leeson discusses an interesting parallel between combat competition for God’s favor in land disputes and rent-seeking today for politicians’ favor in the competition for government largess, even though he considers the former to be more productive than the latter. He points out that “trial by battle enabled judges faced with [poor property rights] to do what the Coase theorem could not: get the land into the hands of the person who valued it more.” He also points out that after King Henry II introduced reforms that “marked the birth of English common law and the beginning of feudalism’s end in England,” there was a marked decline in the cost of trading land in markets—and a disappearance of trial by battle.

Of course, progress always comes at a cost. Leeson does say that the battles between champions were usually not very dangerous for the combatants. They also attracted large crowds when land disputes were on the court’s docket. Commonly, before the battle started, the disputants decided to settle. When that happened, the battle often took place anyway to avoid disappointing the crowd.

Second best / A generic explanation for people’s initial negative reaction to these old practices is grounded in the economic “theory of the second best.” The theory holds that efforts to eliminate a market inefficiency can sometimes make the market less efficient. Therefore, it is theoretically possible that adding a specific market inefficiency—a second-best solution—will improve market efficiency generally. A plausible case can be made that claimed “second-bests” often are not more efficient on net. It depends on the existence of particular conditions, and creating a steady stream of market inefficiencies is an unlikely way to improve economic performance.

In essence, the practices Leeson highlights and explains are second-best solutions. He recognizes that his variation of the theory applies only under suitable conditions that have varied over time and place. But when those conditions are right, what I see as Leeson’s “theory of the weird second best” supports his conclusions. This theory can be stated as follows: When people in the past were subject to a belief or practice that we consider weird today, it is possible that they were made better off by the addition of another belief or practice that we consider even weirder. As Leeson states: “Seeming senselessness on top of seeming senselessness = pretty damn sensible.”

No reason to be smug / In this Twitter/flame-war age, feeling superior to others has become a global pastime. Feeling superior to those who lived long ago has the additional advantage that our predecessors cannot now defend themselves. But Leeson gives them a champion. They would applaud with enthusiasm when he questions whether some of our modern-day practices are any more sensible than some of the historical practices we now dismiss as weird (or worse).

We can only guess what future generations will think about some of our practices, which are often supported by large numbers of highly educated, overtly compassionate, and consistently smug voters. For example, governments try to reduce poverty with government programs that penalize those who reduce their own poverty through productive activity. Politicians proclaim the importance of education and employment to poverty reduction, but protect school systems and pedagogies that render disadvantaged populations poorly educated. Worse, the same politicians often defend—and tighten—minimum-wage laws and occupational licensing laws that leave these same people unemployed and vocationally untrained. Government tries to reduce monopoly power by restricting mergers while reducing competition by restricting imports, creating regulations making it more difficult for small firms to grow or survive, and favoring the largest firms with bailouts and tax breaks.

Any good public choice economist could list more examples of political practices that future generations will consider every bit as weird as the practices Leeson discusses. Maybe several hundred years from now a future Leeson will try to explain how our political weirdness made even a modicum of sense in the 20th and 21st centuries.

Above All, Try Something—Anything ✒ REVIEW BY PIERRE LEMIEUX


A reader can find interesting ideas in Straight Talk on Trade by Dani Rodrik, an economist who teaches at Harvard’s John F. Kennedy School of Government. But that reader will also find much that is puzzling.

PIERRE LEMIEUX is an economist affiliated with the Department of Management Sciences at the Université du Québec en Outaouais. His new book, What’s Wrong with Protectionism?, is forthcoming from the Mercatus Center at George Mason University.

Rodrik claims that he is looking for “a better balance” in trade debates. He defines his own “trilemma”: “It is impossible,” he writes, “to have hyperglobalization, democracy, and national sovereignty all at once; we can have at most two out of three.” In his mind, democracy and national sovereignty forestall hyperglobalization. Democracy and hyperglobalization prevent national sovereignty. And hyperglobalization and national sovereignty—well, those two are simply antithetical. So the trilemma is not perfect—but he did write “at most,” and you get the message.

Where he uses the prefix “hyper” is revealing. He also could have claimed that it’s impossible to have globalization, hyperdemocracy, and national sovereignty; or globalization, democracy, and hypernational-sovereignty. But globalization is his focus—and his villain.

For Rodrik, free trade requires political integration among trading partners. But this is not true. It is perfectly possible for two citizens of two sovereign nations to trade freely with one another provided only that they are not forbidden to do so by their respective governments. As Paul Krugman has argued persuasively, each national government can still regulate as it wishes, tweaking comparative advantages but not killing free trade. In fact, it suffices that your government allows you to trade freely (that is, unilateral free trade) for your side of free trade to work. No political integration or world governance is required for that, nor is it necessarily desirable.

It is true, though, that the more free trade there is, the more difficult it is for a national government to have broad control of the nation’s economy. That is a feature, not a bug. But as we will see, broad government control is what Rodrik wants.

Strange arguments / The author of Straight Talk on Trade does not see the difference between freedom and coercion. One of his memorable sentences is, “Globalization’s rules should not force Americans or Europeans to consume goods that are produced in ways that most citizens in those countries find unacceptable.” The sentence seems to mean that international trade rules should not allow some Americans (say) to consume goods from other countries with different labor and environmental regulations if many other Americans oppose that. In Rodrik’s mind, allowing individual Americans to do so is the same as forcing them to do so.

He often seems to fumble basic economic concepts. For instance, in wondering if developing countries will have anything to export as they become innovative and rich, he forgets that comparative advantage depends on the ratio of domestic costs, and that as long as these ratios are not the same, international exchange is beneficial for all sides. In speaking about the big winners from trade, he ignores the consumers. He often sees the mote of market failures, but not the beam of government failures. To be fair to him, he sometimes recognizes the latter—but as we’ll see, he overlooks them when their existence proves inconvenient.

Rodrik does discuss public choice analysis, which he refers to as “rational-choice political economy.” But he also claims that politics “aggregates a society’s risk preferences.” He thus ignores that the aggregation of preferences is a major problem that requires either incoherent or imposed preferences, as Kenneth Arrow demonstrated in 1951.
He also ignores that a voter acts rationally when he votes his ideology instead of his interests, because


his single vote will not change the results of the election and therefore will do nothing for his interests.

He is obsessed with manufacturing and public investment, as government planners were six decades ago. He thinks like these planners, calculating productivity and constructing future growth paths. Rodrik advocates a “green industrial policy” and “institutional engineering.” In passing, he gives a little hat tip to Bolivian president Evo Morales, who infamously argued that he had a human right to run for a third presidential term despite what the Bolivian constitution declared and what a referendum confirmed. The fact that development of poor countries did not take off until their governments released their grip does not persuade Rodrik.

Sometimes his logic is fragile or his rhetoric fuzzy. He criticizes current “free-trade agreements” (he himself puts the term into quotation marks) for containing too little free trade, but proposes to make them even less free-trade. He seems to blame businesses for “taking advantage of government subsidies abroad,” but accepts domestic subsidies—as if stealing from fellow citizens is morally superior to accepting gifts from foreign taxpayers.

His memory is sometimes selective. Discussing the role of the state in the Great Recession, he sees only a savior. He forgets that mortgage-backed securities—the financial instrument at the center of the crisis—had been introduced by Ginnie Mae, a federal agency created by Congress for this purpose in 1968. Rodrik sees malignant deregulation everywhere, even though interventionist regulation has been growing nearly non-stop since 1960 at least. (See “A Slow-Motion Collapse,” Winter 2014–2015.) Does he ignore that before last decade’s recession, the New York Fed had hundreds of regulating bureaucrats working onsite at large banks?

In a few astonishing pages of Chapter 5, he seems to argue in favor of 18th century mercantilism over free trade. He recognizes that mercantilism takes the side of the producer against the consumer and that it “offers a corporatist vision.” Yet, he seems to view mercantilism-corporatism as preferable to “the liberal approach”—here taking “liberal” in its classical liberal sense, contrary to what he usually does.

Straight Talk on Trade: Ideas for a Sane World Economy
By Dani Rodrik
336 pp.; Princeton University Press, 2017

Strange ideology / How can we explain all these quirks, inconsistencies, and errors from an intelligent economist? I see only one satisfactory explanation. Since the heydays of welfare economics, economists have known that public policy recommendations and evaluations—which are Rodrik’s bread and butter—require that ethical judgments be superimposed on economic analysis. Ideally, these normative values should be clearly identified and should not interfere with the economic analysis proper. This is not always easy to do, and Rodrik proves it.

So what are his normative values? They are very different from the ones that inspire free-trade economists, which are based on consumer sovereignty and economic freedom. He accuses economists of being influenced by their moral values when defending free trade, but he commits the same transgression in his criticisms of free trade.

What are Rodrik’s values? He usefully distinguishes between mere “majority rule” and “liberal democracy.” However, this “liberal” is not the classical liberal label. It seems to mean anything that conforms to his own preferences as a “progressive” who believes in “social justice,” “social inclusion,” “social purpose,” and “societal welfare” (whatever “societal” means as opposed to the standard term “social”). “Climate change” is another of his concerns. No wonder that he had his social

and inclusive feathers ruffled by Donald Trump’s democratic election.

In reality, Rodrik is not so different from Trump or Bernie Sanders. All three oppose “market fundamentalism” and favor “fair trade.” “What makes a populist like Donald Trump dangerous,” he writes, “is not his specific proposals on trade. It is the nativist, illiberal platform on which he seems to govern.” That is, the problem is Trump’s motivation, not his intended results. Returning to Rodrik’s concerns that some Americans will import foreign goods that challenge his values, he refers to such imports pejoratively as “social dumping.”

He idealizes democracy and “democratic deliberation,” apparently unaware of voters’ rational ignorance and the irrationality of their decisions. “Democratic politics is messy and does not always get it ‘right,’” he admits. “But when we have to trade off different values and interests, there is nothing else on which to rely.” Nothing else? Has he ever heard about private property and the market as a means to reconcile “different values and interests”?

“Markets,” he says, “require other social institutions.” Of course. And other institutions require other institutions. But the question is, do these institutions include Leviathan? He thinks so. Leviathan incarnates collectivist shibboleths like the “national interest,” “social goals,” “societal demands,” and so forth. For Rodrik, everything is political and must be decided by the collective—that is, a progressive majority that thinks like he does. To be fair, he does recognize that majoritarian democracy must be restrained, but what is to be restrained is not so much political power as its capacity to produce results that he doesn’t like.
He writes of capital controls (regulating how a state’s own citizens may use their money abroad) that they “may need to be blunt and comprehensive rather than surgical and targeted.” And he compares them to gun control: they must cover all citizens as opposed to only controlling “problematic behavior.” Who will doubt that the author of Straight Talk on Trade is a good, card-carrying progressive?


Member of the failed establishment / It seems that Rodrik only thinks of problems in terms of intervention by some authority. In his mind, the only alternative is between two sorts of authoritarianism: a good one run by leftists and a bad one run by the right. The libertarian notion that authoritarianism should be avoided never seems to cross his mind.

Although he portrays himself as a dissenter against “the establishment,” “the elites,” and “the reigning market fundamentalist ideology,” Rodrik is a good representative of the privileged few who have ruled America and most Western countries since the 1960s: half-capitalist and half-socialist, half-populist and half-elitist, half-democratic and half-authoritarian, half-free-trade and half-fair-trade, half-postmodern and half-moralizing, half-bourgeois and half-punk. Such folks have spent more than a half-century burdening people with a dense network of regulation and surveillance, continually bossing ordinary people around, and pragmatically building a half-police-state. How was that different from the “case-by-case, hard-headed pragmatism” that Rodrik advocates? Contrary to what he claims, it is not free-traders who have provoked the populist reaction, but the privileged class of which he is himself a member. It is because of people like him that populist and protectionist Trump was elected.

Political wonderland / To be a progressive whose heart bleeds both for inequality at home and poverty in the world must be stressful. Rodrik invents a political wonderland where both problems disappear through the magic of protectionism and dirigisme. But, of course, the correct side must rule:

A crucial difference between the right and the left is that the right thrives on deepening divisions in society—“us” versus “them”—while the left, when successful, overcomes these cleavages through reforms that bridge them.

He is right about the danger from the right, but he is totally blind to the symmetric danger from the left. Both sides are inclusive, it’s just that they don’t include the same people.

The Harvard professor defends the nation-state because, at the international level, “we do not agree” on values and tradeoffs. He does not seem to realize that “we” don’t agree at the national level either; witness the outcome of the last presidential election. He does not understand that “live and let live” is the only peaceful solution.


Rodrik also wants voters to be “globally aware and environmentally conscious,” and the state to be perfect, and he has a bridge to sell you in New Jersey.

What does all that mean for international trade? His argument against free trade is basically the following: The democracy I like is incompatible with hyperglobalization, so let’s have less free trade. He defends the old nation-state because that’s where the Leviathan he wants can dwell. The state should be free to impose on its subjects what the majority has decided. Imports and capital flows can interfere with this, so let’s limit those. Less freedom of trade would give all governments more “policy space”—which, Rodrik strangely claims, would fight poverty, inequality, and exclusion. The state must be free to intervene. Approvingly quoted by Rodrik, Franklin D. Roosevelt said, “Above all, try something.” Just don’t do that individually.

Besides all the problems I have mentioned, Straight Talk on Trade is a loose patchwork of stuff already published elsewhere. Many statements lend themselves to different interpretations, on a spectrum that goes from the soft establishment up to near-chavismo. Of course, there is always something to learn anywhere. With some books, though, the cost is higher than the benefit.


Mortgage Regulation
“Regulating Household Leverage,” by Anthony A. DeFusco, Stephanie Johnson, and John Mondragon. October 2017. SSRN #3046564.


The Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010 mandated that lenders evaluate a borrower’s “ability to repay” (ATR) when originating a mortgage. However, Congress created a class of mortgages called “qualified mortgages” (QM) that are automatically deemed to satisfy the ATR rule. This designation includes all mortgages eligible for Fannie Mae and Freddie Mac guarantees. Thus, in practice, the ATR rule only affects loans with principals above $453,100 in 2018—so-called “jumbo” loans. The ATR rule as implemented by the Consumer Financial Protection Bureau (CFPB) required non-QM recipients to have a debt-to-income ratio (DTI) no greater than 43%.

PETER VAN DOREN is editor of Regulation and a senior fellow at the Cato Institute.

How did lenders respond to the rule? This paper compares the interest rates on jumbo loans before and after the QM rule. Rates increased by 0.10 to 0.15 percentage points per year for DTI above 43%, or 2.5%–3% relative to rates before the rule. In addition, the quantity of high-DTI jumbos was reduced by 15% (2% of all jumbo loans). So lenders increased prices and rationed credit.

Had this rule been in place before the housing bust, would it have decreased the number of defaults? The authors estimated


the relationship between DTI and default probability in a sample of loans originated between 2005 and 2008. While higher DTIs are generally associated with increased default probabilities, there was no difference in probability of default for those jumbo loans in the regions just above and below the 43% cutoff. This suggests that the policy would not have improved mortgage performance had it been in effect during 2005–2008.
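The 43% threshold the rule turns on is a simple ratio of debt service to income. As a back-of-the-envelope sketch (the borrower figures below are hypothetical illustrations, not from the paper):

```python
def dti(monthly_debt_payments, gross_monthly_income):
    """Debt-to-income ratio, expressed as a percentage."""
    return 100.0 * monthly_debt_payments / gross_monthly_income

# Hypothetical jumbo borrower: $5,200/month in debt service on $11,500 gross income.
ratio = dti(5200, 11500)
print(round(ratio, 1))  # 45.2 -- above the 43% cutoff, so the non-QM ATR rule binds
```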

CAFE Standards
“Environmental Protectionism: The Case of CAFE,” by Arik Levinson. Working paper, Georgetown University. August 2017.

A recurring theme in economists’ evaluations of regulation is that incumbent firms use regulation to raise the costs of their competitors. This paper searches for that phenomenon in 2007’s tightening of the Corporate Average Fuel Economy (CAFE) vehicle fuel efficiency standard on automobiles. Historically the standard was uniform: a sales-weighted average of 27.5 miles per gallon for all cars. The revised standard, effective with the 2011 model year, varied by the “footprint” of the vehicle. The largest cars needed to get 28 mpg while the smallest cars needed 36 mpg in 2012. The author of this paper, Arik Levinson, notes that domestic cars are larger than imports, thus a CAFE standard that grants larger vehicles less stringent fuel economy requirements benefits U.S. manufacturers. “The switch to footprint-based standards in 2012 granted the average U.S.-assembled vehicle an extra 0.62 mpg, and cost the average imported vehicle 0.68 mpg, for an overall difference of 1.3 mpg,” he writes. Given the fine of $55 per vehicle per mpg, the effective tax on imports is $71.50 per vehicle.

Economics of Energy Booms
“Who Wins in an Energy Boom? Evidence from Wages, Rates, and Housing,” by Grant D. Jacobsen. May 2017. SSRN #2972681.

How has the increase in oil and gas production from hydraulic fracturing changed the economic fortunes of people living in the rural areas where that extraction takes place? In this paper Grant Jacobsen offers some estimates of these effects. He defines an energy boom area as a non-metropolitan area (NMA) in which annual gas and oil revenues were at least $500 million greater in 2011 than in 2006. Under this definition, 10% of NMAs were energy boom areas. Forty percent of NMAs had some energy production and 50 percent had none.

Jacobsen compares various outcomes in boom and non-boom areas. In boom areas, population increased by 5.7%, wage rates by 7%, house values by 12.5%, and rents by 5%. Wages went up across occupations—even those not related to oil and gas—because the labor supply proved less elastic than demand. And he found “no evidence that the boom increased the cost of rent when measured as a percentage of household income.”

Jacobsen concludes: “The results indicate that there are many monetary ‘winners’ from energy development in local communities and very few losers. An implication of the results is that bans on drilling have negative monetary consequences for a large share of local residents.”

Antitrust in Europe
“Is EU Merger Control Used for Protectionism? An Empirical Analysis,” by Ann Bradford, Robert J. Jackson Jr., and Jonathon Zytnick. July 2017. SSRN #3003955.

Another policy arena in which regulation is alleged to increase rivals’ costs is antitrust. Anecdotes suggest that the European Union uses its antitrust regulation to advantage European producers over U.S. firms seeking greater economies of scale through merger. For instance, in 2001 the EU blocked General Electric’s acquisition of Honeywell even though the U.S. Justice Department had approved the acquisition. The EU also stopped proposed mergers by Boeing, Time Warner, and UPS.

To see if the anecdotes do indeed reflect a larger pattern by the EU, the authors of this paper examine the universe of proposed mergers from 1990 through 2014 (5,000 cases). After controlling for the usual explanations of antitrust concerns, the authors found no effect on the incidence or intensity of merger challenges by the EU if the acquiring firm was non-EU. For that time period at least, the EU wasn’t using antitrust as a form of protectionism.

Nudges and Electricity Pricing
“Default Effects and Follow-On Behavior: Evidence From an Electricity Pricing Program,” by Meredith Fowlie, Catherine Wolfram, C. Anna Spurlock, Annika Todd, Patrick Baylis, and Peter Cappers. June 2017. NBER #23553.

An important distinction between behavioral and traditional neoclassical economic analysis is the former’s emphasis on “default effects,” the tendency of people to remain in their original state of affairs. The most famous real-world example of this is the tendency of individuals to save more in employer-sponsored 401k retirement-savings plans if they are enrolled automatically in the plans but have the option to opt out, relative to saving when employees are automatically not enrolled in a plan but have the option to opt in.

Traditionally, electricity prices faced by consumers have not varied over time even though the marginal cost of production is higher on a summer afternoon than during a spring or fall night. Even though the installation of “smart” electric meters now allows


consumer electricity prices to vary by time, 95% of U.S. residential customers pay time-invariant electricity prices. Regulation has published the results of how electricity consumers react to dynamic pricing from some pilot programs. (See “Moving Forward with Electricity Tariff Reform,” Fall 2017.) But how would consumers respond under different scenarios in which consumers have the option to opt in or opt out of different price-variant regimes? That is what this paper explores.

It examines a Sacramento, CA electricity pricing experiment over the years 2011–2013 in which 174,000 households were randomly assigned to five groups:

■ A control group that paid a traditional time-invariant price, in this case 9.38¢ per kilowatt hour for their first 700 kWh of consumption and 17.65¢ per kWh afterward.
■ A second group that could opt into time-of-use (TOU) pricing. That pricing was 27¢ per kWh on weekdays 4–7 p.m., and 8.46¢ per kWh for the first 700 kWh of off-peak consumption and 16.6¢ per kWh for off-peak consumption above 700 kWh.
■ A third group that was assigned to the same TOU pricing but participants could opt out.
■ A fourth group that could opt into critical peak pricing (CPP) of 75¢ per kWh 4–7 p.m. on 12 critical days between June 1 and September 30, with prices at other times the same as the TOU groups.
■ A fifth group that was assigned to a CPP/TOU scheme like the fourth group, but participants could opt out.

The authors’ findings reflect behavioral economists’ discovery that initial assignment matters. Only 20% of the consumers assigned to the two groups that required opt-in to the TOU or CPP/TOU plans actually opted in. Yet over 90% of those who were assigned to TOU or CPP/TOU stayed in those programs and did not opt out.

The effects of higher prices on consumption did vary by whether the customers were assigned or volunteered. Complacent consumers who were assigned to TOU or CPP/TOU but did not opt out decreased their consumption by about 10% given the higher prices, while those who actively opted in decreased their consumption by about 25%. However, the complacent customers assigned to TOU or CPP/TOU had an aggregate reduction in electricity consumption that was twice as large in TOU and three times larger in CPP/TOU as compared to consumers in the opt-in groups. Such savings made the programs cost-effective overall. In contrast, in the opt-in CPP/TOU program, costs equaled benefits, while the opt-in TOU program was not cost effective.
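The quoted rate schedules are concrete enough to compute illustrative bills. A rough Python sketch using the control and TOU rates above (the household’s 900 kWh monthly usage and 15% peak share are hypothetical, not figures from the study):

```python
def control_bill(kwh):
    """Time-invariant tiered rate: 9.38 cents/kWh for the first 700 kWh, 17.65 cents after."""
    return 0.0938 * min(kwh, 700) + 0.1765 * max(kwh - 700, 0)

def tou_bill(peak_kwh, offpeak_kwh):
    """TOU rate: 27 cents/kWh weekdays 4-7 p.m.; off-peak tiered at 8.46/16.6 cents around 700 kWh."""
    return (0.27 * peak_kwh
            + 0.0846 * min(offpeak_kwh, 700)
            + 0.166 * max(offpeak_kwh - 700, 0))

# Hypothetical household: 900 kWh in a month, 15% of it during the peak window.
usage = 900
peak = 0.15 * usage      # 135 kWh at the peak price
offpeak = usage - peak   # 765 kWh off-peak
print(round(control_bill(usage), 2))      # 100.96 on the flat tariff
print(round(tou_bill(peak, offpeak), 2))  # 106.46 on TOU
```

Even this toy comparison shows why defaults and behavior both matter: a household with peak-heavy usage pays more under TOU unless it shifts consumption off-peak, which is exactly the response the pricing is meant to induce.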



80 / Regulation / SPRING 2018

Brother, Can You Legally Spare a Dime?


The United States has a serious problem: its people are too kind to the homeless. Granted, this is not the impression one gets from reading the popular press. To hear them tell it, the rich are forever grinding the faces of the destitute beneath their boot, and only brawnier, more vigorous government can do anything to alleviate the suffering.

As it turns out, brawny and vigorous government sometimes has very different ideas. Just ask Greg Schiller, who got into trouble with the authorities in Elgin, IL last December because he let a dozen or so homeless people stay in his basement on bitterly cold nights. Schiller would provide them snacks and play a movie—something G-rated on account of his Christian faith. But an anonymous tipster reported him to authorities for operating a de facto boarding house, and Schiller had to stop hosting movie nights for the people he described as his friends. Elgin officialdom took the view that "Schiller's property did not comply with codes and regulations," the New York Times explained.

Perhaps that's just one crazy anecdote. We shouldn't let Schiller's experience with Elgin leave the impression that government hassles people for helping the homeless. But what about Juan Carlos Montesdeoca? The cosmetology student ran afoul of authorities in Tucson, AZ when he started giving the homeless free haircuts. He did it "out of the kindness of my heart. Out of the memory of my mom, because she lost her hair," he told a local TV station. But again, somebody complained. So the Arizona State Board of Cosmetology began investigating him for breaking Arizona state law, which stipulates, "A person shall not perform or attempt to perform cosmetology without a license or practice in any place other than in a licensed salon." Montesdeoca was not (yet) a licensed professional, and he was giving haircuts at the library. The horror.

Fortunately, the story outraged Arizona Gov. Doug Ducey, who called on the cosmetology board to cease its investigation. He subsequently issued an executive order directing the state's boards and commissions to review their licensing requirements.

There wasn't such a happy ending in El Cajon, CA. In January, authorities there arrested more than a dozen people for feeding the homeless. The same thing happened a couple of years before in San Antonio, where chef Joan Cheever received a ticket with a fine of up to $2,000 for feeding the homeless in a public park. The year before that, nonagenarian World War II vet Arnold Abbott was arrested for feeding the homeless in Fort Lauderdale, FL. At that time, more than 70 municipalities around the country explicitly outlawed feeding the homeless. Perhaps the officials in those localities figure it's better for the homeless to risk starvation than food poisoning. Or maybe they think the homeless can find sustenance elsewhere. After all, they can always beg for money.

Or not. The National Law Center on Homelessness and Poverty (NLCHP) has reported that from 2011 to 2014 the number of cities that prohibit panhandling rose 25%. Three-fifths of cities have some form of panhandling ban (although recent court decisions are causing some to rethink them). Some municipalities also bar the homeless from having a place to sit or lie down. The NLCHP reports that many localities outlaw loitering (65% prohibit loitering in certain public places; one-third prohibit loitering outright), sitting or lying down in public (53%), sleeping in a vehicle (43%), and camping in public (34%). Basically, in much of the country being both homeless and stationary is prohibited. Jogging, apparently, is fine—so long as you don't stop to rest or ask for money.

The impetus behind all these prohibitions is understandable. Municipal leaders want their localities to present a pleasing aspect to the world, and the presence of unkempt, smelly homeless people in public areas is not conducive to that wish. (If only the homeless could somehow get free haircuts!) That's why Honolulu passed a raft of anti-homeless legislation in 2014. "We absolutely had to," explained the head of the city's tourism authority. "The No. 1 reason that people were saying they would not come back to Hawaii was because of homelessness."

Just so. Having homeless people walking the streets and sleeping in the parks is not a good look for local governments. But then what are the homeless to do? Perhaps localities could find a way for the homeless to stay with private citizens—at least on bitterly cold nights, when it's … oops. Never mind.

A. BARTON HINKLE is the editor of the editorial pages of the Richmond Times-Dispatch.



Exceptional in consistently publishing articles that combine scholarly excellence with policy relevance.



Cato Journal is America's leading free-market public policy journal. Every issue is a valuable resource for scholars concerned with questions of public policy, yet it is written and edited to be accessible to the interested lay reader. Clive Crook of The Economist has called it "the most consistently interesting and provocative journal of its kind."

Cato Journal's stable of writers constitutes a veritable Who's Who in business, government, and academia. Contributors have included James M. Buchanan, Richard Epstein, Kristin J. Forbes, Milton Friedman, Robert Higgs, Vaclav Klaus, Justin Yifu Lin, Allan H. Meltzer, Charles Murray, William Niskanen, Douglass C. North, José Piñera, Anna J. Schwartz, and Lawrence H. White.

Three times a year, Cato Journal provides you with solid interdisciplinary analysis of a wide range of public policy issues. An individual subscription is only $22 per year. Subscribe today!


For more information call (800) 767-1241 or visit. Save more with multiple years:

          Individuals   Institutions
1 year    $22           $50
2 years   $38           $85
3 years   $55           $125



1000 Massachusetts Avenue N.W. Washington, D.C. 20001 (202) 842-0200 | Fax: (202) 842-3490 Address Correction Requested @RegulationMag


New from the Cato Institute


Published in commemoration of the bicentennial of his birth, Frederick Douglass: Self-Made Man takes a fresh look at the remarkable life of one of the foremost thinkers in American history, his ideas, and his enduring principles of equality and liberty. Weaving together history, politics, and philosophy, this new biography, published by the Cato Institute, illuminates Douglass's immense scholarship and personal experiences and shows how one man's pursuit of individual "boundless freedom" remains vital and inspirational.

PAPERBACK: $14.95 • EBOOK: $9.99
AUDIO AVAILABLE ON AUDIBLE.COM
AVAILABLE IN BOOKSTORES NATIONWIDE.