Columbia Economics Review: Fall 2014


Columbia Economics Review

one foot in the door
once you go black market...
into the (bretton) woods
the italian job
guns, gems, and steal
rue in rio, sorrow in são paulo
food for thought

Fall 2014 Vol. IV No. I

Beyond New York

our international issue



COLUMBIA ECONOMICS REVIEW PUBLICATION INFORMATION

Columbia Economics Review (CER) aims to promote discourse and research at the intersection of economics, business, politics, and society by publishing a rigorous selection of student essays, opinions, and research papers. CER also holds the Columbia Economics Forum, a speaker series established to promote dialogue and encourage deeper insights into economic issues.

2014-2015 EDITORIAL BOARD

EDITOR-IN-CHIEF
Daniel Listwa

CONTENT

EXECUTIVE EDITOR
Ashwath Chennapan

MANAGING EDITOR
Victoria Steger

EDITORIAL DIRECTOR
Hong Yi Tu Ye

SENIOR EDITORS
David Froomkin, Hong Yi Tu Ye, Julie Tauber, Omeed Maghzian, Sarika Ramakrishnan

PUBLISHING EDITORS
Aman Navani, Bryan Schonfeld, Bingcong Zhu, Carol Shou, Derek Li, Eitan Neugut, Francis Afriyie, Larry Xiao, Nancy Xu, Willis Robbins, Zachary Neugut

CONTRIBUTING ARTISTS
Daniela Brunner, Desislava Petkova, Emily Callison, Gemma Gene Camps, Mira Dayal (cover)

OPERATIONS

ECONOMICUS PRESIDENT
Maren Killackey

EXTERNAL AFFAIRS OFFICER
Mitu Bhattatiry

STAFF EDITORS
Boosik Choi, Christopher Sabaitis, Cindy Ma, Daniel Morgan, Raymond de Oliveira, Richard Chiang, Sanat Kapur, Yassamin Issapour, Zepeng Guan

MULTIMEDIA EDITORS
Jing Qi, Leon Wu

SOCIAL MEDIA OFFICERS
Aryeh Goldstein, Mariko Fujimara

ILLUSTRATORS
Adil Mughal, Emily Callison, Mira Dayal, Noa Herman

COPY STAFF
Diana Li, Max Rosenberg, Maximilian Martin, Richard Yee, Vincent Dongju, Yu-Tao Lin

A special thanks to James Ma and Rui Yu, Editor-in-Chief and Executive Editor 2013-2014, and to the entire 2013-2014 Editorial Board. Congratulations to the Columbia Economics Review Class of 2014!

Columbia Economics Review would like to thank its donors for their generous support of the publication.

We welcome your comments. To send a letter to the editor, please email: economics.columbia@gmail.com. We reserve the right to edit and condense all letters.

Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License

Columbia Economics | Program for Economic Research

Printed with generous support from the Columbia University Program for Economic Research



TABLE OF CONTENTS

International Trade & Finance

4  One Foot in the Door
   Interest Group Influence on U.S. Trade Policy and the US-Korea Free Trade Agreement

16 Once You Go Black Market...
   Official and Black Market Exchange Rates in Latin America

Historical Analysis

19 Into the (Bretton) Woods
   The Motivations and Collapse of the Bretton Woods System

Applied Microeconomics

23 The Italian Job
   Voting Preference and Information Aggregation in the 2013 Italian General Elections

31 Guns, Gems, and Steal
   Revisiting the Political Resource Curse: An Ideological Blessing?

38 Rue in Rio, Sorrow in São Paulo
   (Unsuccessfully) Predicting the Winner of the 2014 FIFA World Cup

Macroeconomics

47 Food for Thought
   The Big Mac Index as a Proxy for Purchasing Power Parity

52 Special Feature: An Interview with Prof. Perry Mehrling
   America, the ECB, and Charles P. Kindleberger

For a complete list of papers cited by our authors and a full version of all editorials, please visit our website at econmag.org

Opinions expressed herein do not necessarily reflect the views of Columbia University or Columbia Economics Review, its staff, sponsors, or affiliates.




ECONOMICUS
the new online magazine of Columbia Economics Review
find it @ econmag.org

Columbia Economics Review is proud to announce the arrival of its new online journal, Economicus. If you are interested in writing a piece to be featured on the EconMag site, email economics.columbia@gmail.com with the subject line "Write for Econ [your name]". We welcome submissions on all subjects relating to economics and are interested in a variety of forms, including multimedia. Please be sure to include your name and university affiliation in the body of the email.


A LETTER FROM THE EDITORS

Dear Readers,

If you were to sum up today's Zeitgeist—spirit of the time—in a single word, you could do worse than to choose 'Globalization.' As flight routes, fiber optic cables, and satellite transmissions increasingly crisscross the planet, economies around the world are stitched tighter and tighter together, forming an integrated web which demands to be looked at as a whole. For the economically inclined, this poses a challenge and a responsibility. It is no longer enough to take an interest merely in what's going on around the block, or even in the state next door. The next generation of economists must instead be prepared to enter into a wider arena—to speculate on the efficiency of everything from the black markets of Argentina (p. 16) to the voting booths of Italy (p. 24), and right back across the Atlantic to the soccer fields of Brazil (p. 39).

It is with this expansive and exploratory spirit that we proudly announce our new online magazine, Economicus (EconMag.org), an online platform for education, discourse, and debate that reaches across the spectrum of economics and related fields. It is our hope that students—both young and old—will utilize Economicus as a window to engage with issues both on and beyond their campuses, enabling them to step into the wider national and international economics community. It is in this same vein that we present you now with the International Edition of the Columbia Economics Review, a selection of fascinating and rigorous articles covering topics from around the world.

The purpose of our journal is to both educate and spark discussion, so before we leave you to delve into these papers, we would like to frame our selections with a fundamental question: how do you compare economies that are worlds apart?

GDP is the go-to answer. Not a day goes by that these three letters do not appear in the business section of a newspaper. GDP is a statistic that supposedly distills everything that matters in the economy into a single number. A statistic by which whole governments can fall or rise. A statistic that economists and policymakers spend their entire careers studying, worrying over, and trying to incrementally improve. When the IMF's latest projections of China's GDP came out in early October, the media's main focus was on the implications for American economic primacy and the advent of the Chinese century. Most people did not stop to think about the nature of the number itself. Like water and air, we have become accustomed to taking GDP for granted and forgetting its origins. Such is GDP's prominence that in 2001, the Bureau of Economic Analysis named it "one of the great inventions of the 20th century." But is this little statistic all it's cracked up to be?

The definition of GDP is one of the first things that economics students learn: formally, it is the value of all final goods and services produced within a country over a specified time period, calculated by aggregating consumption, investment, government spending, and net exports.
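In symbols, the expenditure approach just described is the standard national accounting identity:

```latex
Y_{\text{GDP}} = \underbrace{C}_{\text{consumption}} + \underbrace{I}_{\text{investment}} + \underbrace{G}_{\text{government spending}} + \underbrace{NX}_{\text{net exports}}
```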
Yet this was not always the case. Though the notion of a national income had been around since the 17th century, it was only in the wake of the Great Depression and the Second World War that serious efforts were put into defining and quantifying this idea. Simon Kuznets and Colin Clark were the first to calculate national income statistics for the US and the UK, respectively, in the 1920s and 30s. In particular, Kuznets' GNP estimates (which later gave way to GDP) were first published in a 1934 report to Congress, noting how the US economy had been halved between 1929 and 1932.

It is worth asking whether GDP is as appropriate to today's world as it was to the world of the 1940s. In GDP: A Brief But Affectionate History (a book that we highly recommend to anyone interested in exploring the topic further), Diane Coyle cogently points out several inadequacies in using GDP to measure today's modern economies. Crucially, GDP fails to account for variety in goods and services. As Coyle argues, variety is "one of the key indicators of economic development" and "to be poor is to have little choice available." In microeconomic terms, we postulate that a representative agent obtains a higher utility level from optimizing over a larger choice set, subject to a given budget constraint. In more relatable language, variety possesses a value that GDP fails to capture. GDP fails to measure the level of diversity in an economy because of its focus on total output: in the eyes of GDP, 100 types of candy with one unit of each type are the same as 100 units of one candy. Coyle therefore describes GDP as a "poor way to measure innovation and customization." One of the main sources of growth for a modern economy is innovation, which is reflected in product diversity, and without a way to measure that we fail to capture the essence of a modern economy.

Finally, Coyle notes that an "increasing share of advanced economies [is] made up of services and 'intangibles.'" GDP loses relevance when it comes to the service sector because it fails to separate quality from quantity. For instance, a musician would be considered more productive if she performed a piece at double speed and gave twice as many performances; a nurse, more productive if he or she attended to twice as many patients. But it is obviously more important to hear the music at its proper tempo, and for a nurse to spend quality time with fewer patients. In this same vein, GDP also fails to include the customer benefits that come with "intangible" goods and services that are offered free of charge, such as Google and Facebook. There is no question that these internet services are valuable to modern societies; yet, because they lack a price tag, they are not accounted for in GDP calculations.

In an increasingly intangible, innovative, and fast-moving world, the limitations of GDP are becoming ever more clear. What do you think? Does GDP deserve its status as "one of the great inventions of the 20th century," or has it outlived its usefulness? It is a live issue and germane to many of the articles in this issue. As such, we look forward to exploring these questions and others in the pages to follow and to keeping up the conversation with you on Economicus.

All the Best,

Hong Yi Tu Ye, CC'15 | Editorial Director
Victoria Steger, CC'15 | Managing Editor
Daniel Listwa, CC'15 | Editor-in-Chief


One Foot in the Door
Interest Group Influence on U.S. Trade Policy and the US-Korea Free Trade Agreement
Matt Chou, Columbia University

This paper investigates the intersection of international trade policy and our nation's political economy. To ensure an in-depth analysis of this subject, the paper first examines the theoretical framework of how interest groups are effective in influencing policy decisions in Congress. The author starts by examining major interest groups that may have a hand in shaping these decisions, while trying to quantify their effects on rulings in Congress. Next, he explores a juxtaposition of legislative ideology and voter preferences, and compares these factors to the previously defined views on interest group influence. Finally, the paper utilizes a real-world example to verify whether the voting outcome in Congress agrees with the hypothetical findings. To do this, he conducts an econometric analysis of the 2011 U.S.-Korea Free Trade Agreement, which helps describe the relevance of interest groups in influencing and distorting past and future U.S. foreign trade policy. -L.X.

One major area where politics and economics intersect is the realm of international trade policy. As a paper by I. M. Destler makes clear, trade policy formation stems from a process quite different from other political issues (e.g. foreign policy or national security), couched in economics and addressed primarily by Congress (as opposed to the executive branch) via constitutional mandate.1 With Congress playing such a crucial role in trade policy, examining the determinants of Congress' trade policy decision-making becomes especially relevant. This is no easy task, as "legislative uncertainty is in fact the norm for trade policy owing to its essentially redistributive nature that creates gainers and losers."2 Nevertheless, as Destler and other scholars (e.g. Facchini et al. 2011) suggest, one key factor influencing Congress in this domain is interest groups, who exercise power through myriad mechanisms, particularly campaign contributions.

This paper thus examines the degree of influence interest groups have over trade policy in four sections: first, outlining theoretical reasons for how interest groups might be effective; second, identifying empirically important interest groups and quantifying their effects; third, reviewing two competing factors which may seem to counterbalance interest group influence; and lastly, conducting an econometric analysis of the 2011 U.S.-Korea free trade agreement to observe whether its voting outcome in Congress fits with previous theoretical and empirical findings. This paper finds strong evidence for the continued relevance of interest groups in U.S. trade policy, suggesting the existence of policy distortions and the need for further research.

1 I. M. Destler, American Trade Policymaking: A Unique Process. The Domestic Sources of American Foreign Policy. Edited by James M. McCormick. Plymouth: Rowman & Littlefield Publishers, 2012, 301.
2 Kishore Gawande and Bernard Hoekman, "Lobbying and Agricultural Trade Policy in the United States." International Organization 60, no. 3 (2006): 552.
3 Benjamin O. Fordham and Timothy J. McKeown, "Selection and Influence: Interest Groups and Congressional Voting on Trade Policy." International Organization 57, no. 3 (2003): 523.


Theoretical Base for Interest Group Influence

Theory suggests that interest groups shape legislative voting through two pathways, which have been coined the "selection effect" and "influence effect" by the literature.3 The selection effect entails the recruitment, nomination, and election of political candidates. The influence effect, on the other hand, refers to interest groups affecting the votes of sitting legislators through tools such as lobbying, bargaining, and the provision of policy analysis and information.

The Selection Effect

The selection effect applies to both political parties and candidate ideology. Distinguishing between party and ideology is intentional—although the two are highly correlated, reasons for interest groups to intervene in the selection process may hinge on independent factors stemming from either party or ideology.4 With regards to supporting a preferred political party, interest groups may invest in a candidate only because that candidate's victory could affect which party has a majority in Congress. Influencing majority rule is especially salient due to structural advantages afforded the majority party, which include control of at-large and committee leadership positions. As a result, interest groups have an incentive to support candidates with weak or uncertain ideological preferences if the candidate belongs to a party that, as a whole, takes more favorable positions relative to other parties.

Alternatively, if parties have similar policy preferences, then the ideology of individual candidates becomes more important in terms of whom to support. Ideology-based selection has two forms: support for candidates that generally favor an interest group's aims, and/or support for candidates that align on a specific issue with an interest group. The former type of influence comes into play when interest groups are concerned about many issues and are unsure which will be relevant during a potential politician's term. Additionally, whereas securing specific binding commitments from candidates is difficult, supporting candidates whose overall perspective may help an interest group is less so. For example, business interests with global market interests might select individuals with an internationalist, free-trade perspective, whereas domestic labor interests may select candidates on the opposite end of the spectrum.

The latter type of ideology-based selection (i.e. issue-specific) can be used either separately, when interest groups have a very narrow set of concerns, or in conjunction with the former type of support for candidates that broadly agree with an interest group's goals. Additional lobbying of candidates on specific issues may be necessary even if the candidates are friendly to the interest group overall; this may be especially true for controversial issues, or for those that diverge from the candidate's general perspective.5 For instance, an interest group that generally supports free trade may nonetheless lobby candidates to support a protectionist policy that favors its interests. In any case, with the selection effect, supposed political inactivity around the consideration of legislation, as evidenced by a lack of public discourse and opposition, cannot be interpreted as a lack of interest group dynamics in the legislation's outcome. Instead, this relative political tranquility or unanimity may simply signal an interest group's success in influencing the selection of legislators.6

The Influence Effect

The influence effect, comprising lobbying, bargaining, and providing information to legislators, operates through two channels. The first, described in economic models such as the Grossman–Helpman model, is that campaign contributions from interest groups "buy" public policy in a way that reduces aggregate social welfare.7 The second, as favored by political scientists, is that political contributions influence voting behavior through heightened access to legislators, which goes on to facilitate informational lobbying.8 In this channel, the assumption is that legislators do not have perfect information about either the preferences of constituents or the impact of proposed legislation—thus, interest groups use monetary resources to gain increased access to the legislator and selectively plant information (which may or may not be accurate) for policy calculations.9 In any case, both economists and political scientists agree that interest groups play an important role in voting behavior. One point that may unify these two perspectives is the assumption that legislators value increasing the probability of their reelection.10 Interest group campaign contributions, which are perceived as improving one's probability of reelection, therefore factor into voting behavior.

Key Interest Groups in Trade Policy and Their Impact

Outlining the theoretical pathways of interest group influence raises the question of which groups are most relevant in the realm of U.S. trade policy. Understanding which coalitions exercise the most power helps elucidate the form of trade policies that will arise and garner support. A review of the academic literature finds that class-based interest groups—i.e., those defined by ownership of either labor or capital—are most relevant in recent U.S. trade politics. Furthermore, the effect of labor interest group lobbying appears to be more impactful, dollar for dollar, relative to that of business interest groups.

Class-Based Interest Groups: Labor and Business

In the past, scholars have been split on which interest groups have the most effect on U.S. trade politics. Papers published as recently as 2007 indicate that industry-based groups, divided in conflict between exporters and those who compete with foreign imports, are more important than class-based coalitions.11

4 Ibid., 524.
5 Ibid., 524-525.
6 Ibid., 523.
7 Robert E. Baldwin and Christopher S. Magee, "Is Trade Policy for Sale? Congressional Voting on Recent Trade Bills." Public Choice 105, no. 1/2 (2000): 81-82.
8 Ibid.
9 Fordham and McKeown, "Selection and Influence," 525-526.
10 Baldwin and Magee, "Is Trade Policy for Sale?"
11 Gyung-Ho Jeong, "Constituent Influence on International Trade Policy in the United States, 1987–2006." International Studies Quarterly 53 (2009): 519.



An example of such an industry-based conflict was the 2002 steel tariff debates, which centered around the clash between U.S. steel producers fearing foreign competition and industries that faced foreign retaliation for the U.S.' protectionist stance. Conversely, another series of papers argues that class-based interest groups, comprising the split between labor and business interests, are the most significant. An example of class-based lobbying was the fight over the North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT) bills.

However, these opposing conclusions can be explained by data selection problems with these studies. Since the late 1980s, Congress has passed more than 20 trade-related acts, leading to 163 trade-related roll call votes in the Senate and 94 in the House; nevertheless, some of the most influential studies on identifying important interest groups have not only used small numbers of votes in their datasets, but also selected different votes between them.12 Contributing to a resolution of this debate between class-based coalitions versus industry-based ones, a quantitative 2009 analysis of all congressional votes on trade legislation since 1987 finds that class-based interest groups are in fact the most significant.13 Specifically, the study uses a multilevel item-response model, which captures vote characteristics such as degree of extremity (e.g. the distinction between advocating a small tariff increase and a large one).14

Labor as the Most Important Interest Group

Furthermore, examining case studies shows that although both business and labor lobbying effects are non-trivial, labor's effect is likely more significant than that of business. A full information maximum likelihood analysis of the NAFTA and the GATT Uruguay Round found that while both labor and business contributions probably affected congressmen's votes, labor's contributions were statistically significant at the 1% level, while business contributions were only significant at the 5% and 10% level.15 Additionally, in the NAFTA vote, every $1,000 increase in a congressman's contributions from labor groups beyond the mean level of donations reduced the likelihood of a pro-trade yes vote by 0.52%, while a similar $1,000 addition from business political action committees (PACs) above the average only increased the probability of an affirmative vote by 0.12%; for GATT, the marginal benefit of $1,000 was 0.27% and 0.05% for labor and business, respectively.16 Given that the standard deviation of campaign contributions was $61,000 for labor and $123,000 for business, this implies a large effect on voting probabilities17—multiplying the standard deviations by their respective effects per $1,000 on GATT yields a 16.5% vote likelihood swing for labor one standard deviation from the mean, and a corresponding 6.15% swing for business. Calculating the numbers for NAFTA, one finds a 31.7% and 14.8% probability difference for labor and business, respectively. Thus, though labor and business both have a significant effect, labor's money seems to be more effective per dollar. As a result, it is likely no surprise that congressmen from districts with higher concentrations of labor interests were far less likely to vote for NAFTA—for example, a member from a 37% unionized district was 15% less likely to vote affirmatively when compared to one from a 3% unionized district.18

15 Baldwin and Magee, "Is Trade Policy for Sale?"
16 Ibid.
17 Ibid.
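As a quick check, the swing figures above come from multiplying each group's contribution standard deviation (in thousands of dollars) by its per-$1,000 marginal effect. A minimal sketch in Python:

```python
# Back-of-the-envelope check of the vote-swing figures reported above.
# Labor's per-$1,000 effects are negative (they reduce P(yes)); the swing
# is reported as a magnitude.
effects_per_1k = {  # percentage-point change in P(yes) per extra $1,000
    ("NAFTA", "labor"): -0.52, ("NAFTA", "business"): 0.12,
    ("GATT", "labor"): -0.27, ("GATT", "business"): 0.05,
}
std_dev_1k = {"labor": 61, "business": 123}  # contribution std. dev., in $1,000s

for (bill, group), effect in effects_per_1k.items():
    swing = abs(effect) * std_dev_1k[group]
    print(f"{bill:5} {group:8} {swing:.2f} pp swing per one std. dev.")
```

Running this reproduces the 31.7/14.8 (NAFTA) and 16.5/6.15 (GATT) percentage-point swings quoted in the text.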

Another case study looking at the protectionist "Fair Practices in Automotive Product Act" (HR 5133) found via a logistic regression model that "congressmen's decisions to vote for or against the bill were almost totally dependent on how friendly they were to labor."19 Three other independent variables (party affiliation, unemployment rate in a given congressman's state, and seniority) were considered, but out of these three, only unemployment rate was found significant at the 5% level, and with a very small effect relative to labor relations.20 An explanation for the disproportionate effect of labor in at least this case was the effect of the United Auto Workers, the interest group most directly affected by the decline of the U.S. automotive industry. HR 5133 became a key issue for the powerful union to rally behind, as the UAW believed that the bill would create up to 868,000 jobs in the automobile industry, countering the loss of 40% of the labor force, or about 300,000 automobile workers, since lay-offs that had started in 1980.21

"though labor and business both have a significant effect, labor's money seems to be more effective per dollar"

In a demonstration of the influence effect, UAW members conducted extensive lobbying activities, which included letter campaigns directed at congressional offices, as well as petitions signed by thousands of voters.

Surprisingly Insignificant Factors in Determining Trade Policy

Reinforcing the importance of interest groups in shaping trade policy votes are findings by some scholars on the relative insignificance of two other factors that would intuitively seem significant: ideology and voter preferences.

18 Eric M. Uslaner, "Let the Chits Fall Where They May? Executive and Constituency Influences on Congressional Voting on NAFTA." Legislative Studies Quarterly 23, no. 3 (1998): 360.
19 Ikuo Kabashima and Hideo Sato, "Local Content and Congressional Politics: Interest-Group Theory and Foreign-Policy Implications." International Studies Quarterly 30, no. 3 (1986): 309.
20 Ibid., 307.
21 Ibid., 298.



Legislator ideology

Naturally, some studies indicate that party and ideology account for some variation in floor voting on trade policy.22 A recent working paper by Conconi et al. (2012), for instance, finds that more conservative politicians on the DW-Nominate scale are more likely to vote for trade liberalization.23 However, there are a few issues with these studies. First, not all control for interest group campaign contributions; Conconi et al. (2012) have DW-Nominate and PAC contributions in separate regression specifications, and use non-continuous measures for those PAC contributions. Secondly, on a theoretical level, they may not account for the selection effect—after all, if party and ideology indeed explain voting patterns, what explains the party and ideology of legislators may nevertheless be interest group support, not the independent effect of ideology. That is, campaign contributions and other support mechanisms may have selected the very composition of a legislative body, determining voting patterns. Thirdly, ideology may be only a shortcut in the information gathering process of a legislator as opposed to some exogenous prejudice independently biasing voting results,24 thereby leaving room for the influence effect, which often operates through provision of policy information.

"ideology may be only a shortcut in the information gathering process of a legislator as opposed to some exogenous prejudice independently biasing voting results"

Fourthly, a number of empirical analyses throw ideology's importance into doubt. A logit model of NAFTA House and Senate votes, for example, shows that "political partisanship and ideological positions apparently had little effect on the votes,"25 with these two factors controlled for by a political party variable and a politician's economic stance as measured by their National Journal Economic Rating (NJER).26 These ideological variables were never significant at less than the 10% level. In another study, findings from a simultaneous three-equation model of PAC contributions were "strong enough to overturn the conclusions drawn from single-equations studies (Vesenka, Welch) that have found ideology to be the dominant factor in explaining voting outcomes."27 The study also reinforced the finding that money buys votes, contradicting the hypothesis of reverse causality, i.e. votes drawing money.28

Voter preferences

Moreover, aggregate voter preferences do not seem to be particularly important. In the logit NAFTA study, the coefficients for variables tracking jobs gained, jobs lost, and the relative fraction of jobs lost in a Congressional district were statistically insignificant,29 suggesting some combination of three possibilities:

(i) Congressmen do not vote in their aggregate constituents' economic interest,
(ii) a relatively trivial percentage of total constituents are affected by a given trade policy, making the opinions of narrow interest groups more relevant, and/or
(iii) Congressmen believe that voters will not hold them accountable for adverse or beneficial trade policy votes.

The findings of this study are reinforced by low multicollinearity; summary statistics show little correlation between the economic variables.30 Elaborating on possibility (iii) is a different paper that draws on data from a survey of 36,501 potential voters in the 2006 midterm elections, finding that voters are widely ignorant of their congressperson's trade positions, and that trade policy is both unimportant to voters (according to self-assessments) and insignificant in terms of its impact on the probability that they will vote for an incumbent.31 Even when pooling together respondents who stated that trade policy was especially relevant to them, the data shows this level of self-assigned importance did not correlate with increased knowledge about trade policy votes or a free-trade/protectionist bias either way.32 This lack of voter accountability as a function of trade policy votes is exceptional in that it applies much more to trade legislation than other policy issues, such as the minimum wage, capital gains taxes, and stem cell research funding. Specifically, given a 60% probability of a voter casting her ballot for an incumbent, the incumbent's position on the widely publicized33 Central American Free Trade Agreement could account for only up to 5% of a probability shift, in contrast to 14%, 13%, and 27% changes for the aforementioned three other issues.34 These findings attack the fundamental assumptions behind some literature on trade policy, which assume perfect voter information.

Case Study: The U.S.-Korea Free Trade Agreement

To summarize, the academic literature indicates that there are two major theoretical pathways, the selection effect and influence effect, through which interest groups can exert influence on trade policy voting. The mechanisms by which these effects operate are evidenced empirically by quantitative studies, which prominently highlight campaign contributions as a measurable and scalable indicator of an interest group's support for a given politician and/or trade issue. Research also shows that labor and business groups are most significant, with labor being more efficacious per dollar contributed. Lastly, the importance of these interest groups to trade policy is especially emphasized by findings that two other factors broadly assumed to be significant, ideology and aggregate voter preferences, are likely not so.

With this framework in mind, this section of the paper conducts a probit analysis of the 2011 free trade agreement between the U.S. and the Republic of Korea (a.k.a. KORUS FTA). With regards to interest group contributions, the analysis validates the significance of labor (and also business groups, for Senators) as well as the relatively greater impact of labor campaign contributions. As for ideology and constituent interests, I find that their significance dissipates for Senators—but not Representatives—once one controls for campaign contributions.

Background

KORUS FTA was originally negotiated from June 2006 to late March 2007 under the administrations of Presidents Roh Moo-hyun and George W. Bush, with a hurried three days of continuous talks leading up to Bush's April 1 expiration of his fast track negotiation authority.35 However, with tough political headwinds both in South Korea and the U.S., neither country's legislature ratified the agreement for a few years. Particularly, in the U.S., ratification was slowed by the objections of politicians who were concerned that greater access to Korean automobiles and textiles would damage U.S. interests.36 In response, the Obama administration reopened negotiations and insisted on substantial new concessions that would delay the reduction of U.S. tariffs on Korean auto imports.37 In addition, the administration insisted that the expansion of benefits for foreign competition-displaced workers be tied to the passage of the trade agreement.38 These moves led UAW and important auto-state legislators to endorse the final, renegotiated agreement.39

Nevertheless, a number of key labor groups such as the AFL-CIO and Teamsters opposed the bill and other trade deals under concurrent consideration, with Richard Trumka, president of the AFL-CIO, claiming the Korea pact would "destroy 159,000 U.S. jobs."40 Furthermore, many manufacturing interests in the textile industry also opposed KORUS FTA,41 and perennial environmental concerns were salient enough that environmental protections had to be worked into the agreement.42 On the other hand, pro-business interests such as the U.S. Chamber of Commerce43 and the financial services industry44 supported the bill. In particular, in the view of the U.S.-Korea FTA Business Coalition, a broad-based group of over 400 U.S. companies, trade associations, and business organizations, KORUS FTA was the "most commercially significant U.S. trade agreement in over a decade."45 In the end, KORUS FTA passed 278-151 in the House (with five Representatives not voting)46 and 83-15 in the Senate (with two Senators not voting).47

22 Fordham and McKeown, "Selection and Influence," 522-523.
23 Paola Conconi, Giovanni Facchini, Max F. Steinhardt, and Maurizio Zanardi, "The Political Economy of Trade and Migration: Evidence from the U.S. Congress," Working Paper, 36.
24 Jonathan C. Brooks, A. Colin Cameron, and Colin A. Carter, "Political Action Committee Contributions and U.S. Congressional Voting on Sugar Legislation." American Journal of Agricultural Economics 80, no. 3 (1998): 450.
25 In-Bong Kang and Kenneth Greene, "A Political Economic Analysis of Congressional Voting Patterns on NAFTA." Public Choice 98, no. 3/4 (1999): 385.
26 Ibid., 389.
27 Brooks et al., "Political Action Committee Contributions," 451.
28 Ibid.
29 Kang and Greene, "A Political Economic Analysis," 392.
30 Kang and Greene, "A Political Economic Analysis," 392.
31 Alexandra Guisinger, "Determining Trade Policy: Do Voters Hold Politicians Accountable?" International Organization 63, no. 3 (2009): 533.
32 Ibid., 544.
33 Ibid., 538.
34 Ibid., 548.
35 Evan Ramstad, "South Korea Clears U.S. Trade Deal." Wall Street Journal, November 23, 2011.
36 Ibid.
37 Howard Schneider, "Obama, Lee outlined U.S.-Korea trade deal in Seoul, official says." The Washington Post, December 6, 2011.
38 Binyamin Appelbaum and Jennifer Steinhauer, "Congress Ends 5-Year Standoff on Trade Deals in Rare Accord." The New York Times, October 12, 2011.
39 Schneider, "Obama, Lee outline U.S.-Korea trade deal."
40 Lori Montgomery and Zachary A. Goldfarb, "Obama gets win as Congress passes free-trade agreements." The Washington Post, December 12, 2011.
41 Binyamin Appelbaum, "Textile Makers Fight to Be Heard on South Korea Trade Pact." The New York Times, October 11, 2011.
42 William H. Cooper, Mark E. Manyin, Remy Jurenas, and Michaela D. Platzer, The U.S.-South Korea Free Trade Agreement (KORUS FTA): Provisions and Implications. CRS Report RL34330 (Washington, DC: Library of Congress, Congressional Research Service, March 7, 2013).
43 Montgomery and Goldfarb, "Obama gets win." The Washington Post, December 12, 2011.
44 U.S. International Trade Commission, U.S.-Korea Free Trade Agreement: Potential Economy-wide and Selected Effects. By Daniel R. Pearson, Shara L. Aranoff, Deanna Tanner Okun, Charlotte R. Lane, Irving A. Williamson, Dean A. Pinkert. Investigation No. TA-2104-24. Washington, D.C., 2010, 4-10.
45 Ibid., 7-31.
46 The New York Times, "House Vote 783 - Passes U.S.-Korean Trade Agreement." Last modified October 12, 2011. Accessed December 9, 2013. http://politics.nytimes.com/congress/votes/112/house/1/783.
47 The New York Times, "Senate Vote 161 - Passes U.S.-Korean Trade Agreement." Last modified October 12, 2011. Accessed December 9, 2013. http://politics.nytimes.com/congress/votes/112/senate/1/161.


Methodology and Data

To evaluate the effect of interest groups, I employed a probit regression model48 that took ideology, campaign contributions, certain constituency characteristics, and electoral measures as explanatory variables. In particular, ideology was measured with DW-Nominate; campaign contributions were sourced from OpenSecrets.org for the two election cycles before the 2011 KORUS FTA vote; constituency characteristics are the percent of a state or congressional district employed in two major industries possibly affected by KORUS FTA, finance & insurance and manufacturing, as defined by the 2011 American Community Survey 5-year estimates; and electoral measures included whether a Senator was up for election in 2012 and how competitive a Representative's district was in 2012 (as measured by the Cook Political Report).

In order to clarify the magnitude and distribution of campaign contributions, note the summary statistics tables below. They outline campaign financing from four interest group sectors, as defined by OpenSecrets: (i) Labor, (ii) Finance, Insurance, & Real Estate, (iii) Environment, and (iv) Miscellaneous Business. As expected, we see that Senators, being proportionately more powerful, receive more campaign financing on average than Representatives—with the distinct exception of labor interest group contributions, which are greater for Representatives than Senators in both the 2008 and 2010 cycles. In addition, despite lower average contributions, 2008 cycle labor contributions to Senators had a higher standard deviation, possibly suggesting either more extreme sentiments toward labor in the Senate or a broader distribution of how malleable Senators are to campaign financing relative to the House.

Note that contributions can be negative for a few reasons, including: (i) some Members return contributions out of political motivation; (ii) some Members donate funds to charity, which can be notoriously difficult to track; (iii) campaign accounting and bookkeeping errors; and (iv) returned contributions due to rules violations, such as when donations exceed contribution limits or were made after an election cycle had expired.49 Altogether, in many of these cases, these non-donations appear in federal campaign finance reports as negative dollar amounts.

I performed regressions in two specifications: first without electoral effects, and second with electoral effects. In the tables without electoral effects, I look at three regressions that are numbered as follows: (1) the influence of only ideology and constituency interests, (2) the effect of ideology, constituency interests, and 2010 financial contributions, and (3) the effect of ideology, constituency interests, and a discounted sum of 2008 and 2010 campaign contributions. In the tables including electoral effects, I look at three regressions that are numbered as follows: (1) the interaction of ideology with an electoral variable, (2) the interaction of constituency interests with an electoral variable, and (3) the interaction of 2010 financial contributions with an electoral variable. In all these cases, interest group contributions are expressed in natural log amounts.

48 In the interest of increased clarity, I also tried a linear probability model using OLS with robust standard errors, but the sign and significance of its coefficients rarely matched those of a probit or logit model.
49 Megan R. Wilson, OpenSecrets.org, "Rejected, Donated or Lost, Sometimes Politicians Never Pocket the Oil Money Directed at Them." Last modified August 27, 2010. Accessed December 7, 2013. http://www.opensecrets.org/news/2010/08/when-the-deepwater-horizon-oil.html.
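A minimal sketch of this kind of specification is shown below. The CSV file and column names are hypothetical stand-ins for the DW-Nominate scores, OpenSecrets contribution totals, and ACS employment shares described above; this is an illustration of the approach, not the author's actual code.

```python
# Sketch of a probit of the KORUS FTA vote on ideology, logged contributions,
# and constituency characteristics (hypothetical data layout).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("korus_senate.csv")  # hypothetical: one row per senator

# Contributions enter in natural logs, as in the paper; clipping guards
# against the zero or negative totals that refunds can produce.
for sector in ["labor", "business", "finance", "environment"]:
    df[f"ln_{sector}"] = np.log(df[f"contrib_{sector}"].clip(lower=1))

exog = sm.add_constant(df[["dw_nominate", "ln_labor", "ln_business",
                           "ln_finance", "ln_environment",
                           "pct_manufacturing", "pct_finance_insurance"]])
probit_res = sm.Probit(df["vote_yes"], exog).fit()
print(probit_res.summary())
```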


In the above regression table, we see that the effect of ideology (dw_nominate) becomes less significant as one takes into account financial campaign contributions. In fact, the variance explained by dw_nominate becomes completely subsumed by labor and business contributions from the past two cycles before the KORUS vote (2008 and 2010), as seen in column (3). This suggests that it is money, not ideology or constituent characteristics, that drove much of the yes or no vote on KORUS FTA. In addition, we see that, as expected from the literature review, the coefficient of labor contributions has a greater absolute value than that of all other contributions, including those from business.

The next set of regressions explores the interaction effects of whether a Senator would go on to run for reelection in 2012 (elections), the year immediately following the KORUS vote. One may have expected that an impending election would make ideology, constituency characteristics, and/or campaign contributions more salient, but in fact we find that there are few statistically significant effects. Ideology is significant at the 1% level in columns (1) and (2), and the elections dummy is only weakly significant and negative in column (2)—suggesting that senators from both sides of the aisle are perhaps less likely to vote for free trade agreements when an election approaches. However, none of the interaction variables are significant, and the significance of the elections dummy variable disappears once adding the interaction of campaign contributions and donations. Interestingly, however, column (3) indicates that with interaction effects, no variables save for the percent of a Senator's state employed in the financial or insurance industry were significant to the KORUS FTA vote. Even ideology goes from 1% significance to a p-value of 0.181. This echoes the results of the previous regression table that looked at financial contributions alone (both discounted and not discounted), where 2010 contributions were weakly significant but nevertheless explained much of the variance of DW-Nominate.

"it is money, not ideology or constituent characteristics, that drove much of the yes or no vote on KORUS FTA"

As for the House, I replicated the analysis of financial campaign contributions as performed on the Senate. We find that unlike senators, representatives are responsive to ideology and the percent of their constituents employed in manufacturing in all three regressions. This holds true at the 1% significance level. Out of our set of campaign contributions, only those from labor are statistically significant, and even then merely at the 10% level. This finding may suggest that in trade policy votes, representatives are both more ideological and responsive to their constituents at large as opposed to specific interest groups. This result is not entirely intuitive, as one could have expected that representatives, who face more frequent electoral fights, would more highly value the campaign support of particular vested interests rather than take a broader view of their constituency at large. But instead, we find that the opposite is true—more frequent elections may make them more accountable. In addition, the high significance of ideology is in line with the House's more partisan, polarized nature in comparison to the Senate.

The electoral data that was available for the House was sourced from the Cook Political Report on the forecasted results of the 2012 elections. The competitiveness of a district is represented as competitiveness, calculated as absolute distance from electoral prospect extremes, i.e. 100% probability of reelection or 0% probability of reelection, as evaluated by the Cook Political Report. Competitiveness is therefore bounded between 0 and 0.5, with values closer to 0.5 representing toss-up districts where the incumbent had only a 50% chance of winning.
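One plausible formalization of this measure, assuming the Cook ratings are first mapped to a reelection probability p (the mapping itself is an illustration, not taken from the paper):

```python
# Competitiveness as distance from a sure outcome:
# 0 means a safe seat, 0.5 a pure toss-up.
def competitiveness(p_reelect: float) -> float:
    return min(p_reelect, 1.0 - p_reelect)

print(competitiveness(0.95))  # safe incumbent -> 0.05
print(competitiveness(0.50))  # toss-up        -> 0.5
```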


Note that the number of observations drops to 349, likely due to the fact that not all Representatives who voted on KORUS ended up running for reelection in 2012. As the analysis shows, electoral competitiveness had some effect on Representatives' KORUS voting decision, particularly with regard to labor contributions. Oddly enough, although labor contributions were negatively correlated with a Representative's probability of voting for KORUS, there is also a positive coefficient on the interaction of labor contributions and district competitiveness. That is, given some amount of money from labor, Representatives in tougher reelection fights were more likely to vote for KORUS. This could reflect a confidence from such Representatives that in tough election fights, tacking towards pro-free trade would burnish their electoral prospects while still maintaining solid levels of labor support. Labor contributions aside, another significant effect appears with regard to environmental interest groups. Although contributions had no effect in the absence of electoral competitiveness, the interaction variable is weakly and negatively significant, indicating that Representatives facing electoral pressure likely reacted to the political sensitivities surrounding KORUS FTA's environmental impact.

"Representatives in tougher reelection fights were more likely to vote for KORUS"

To see the accuracy of the model and the approximate impact of campaign contributions, we can compare the actual vote count in both chambers with the predictions of the model. Let a predicted yes vote be when the probability that a Member votes yes is greater than ½. Using the coefficients generated by the regressions including discounted financial contributions and no electoral effects, we find the data displayed in Exhibit 7. In terms of accuracy, the model underestimates the number of yes votes in the House by 20, and overestimates the number of yes votes in the Senate by 10. In terms of the net effect of contributions, the model predicts that financial contributions shifted 74 yes votes into no votes in the House, and seven yes votes into no votes in the Senate. None of these shifts would have directly changed the outcome of the bill in Congress.
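The ½ cutoff amounts to a simple classification rule over the fitted probabilities; an illustrative sketch with made-up values:

```python
import numpy as np

# Stand-in fitted probabilities from the probit model (illustrative only).
fitted_probs = np.array([0.91, 0.62, 0.48, 0.33, 0.55])
predicted_yes = fitted_probs > 0.5  # predicted "yes" iff P(yes) > 1/2
print(int(predicted_yes.sum()), "predicted yes votes out of", fitted_probs.size)
```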

This raises the question: if labor contributions were more statistically significant and impactful dollar for dollar relative to other spending, why did KORUS-FTA pass by the margins it did? Other than the possibility that labor simply did not give enough money, a couple of other ideas come to mind. First, after years of delays borne out of labor concerns, the renegotiation of KORUS-FTA such that it garnered some union support softened labor's aggregate opposition to the bill. As a result, some politicians with significant campaign contributions from labor interest groups may have felt less pressure to vote against the bill. There is also the possibility that certain politicians with relatively large amounts of support from pro-bill unions (e.g. senators from auto-making states where UAW was powerful) would have switched their position to strongly support the bill after UAW endorsed it. If these key senators were politically powerful, that could have further swayed the vote in KORUS-FTA's favor.

Second, as mentioned earlier, the White House played a direct role in the negotiations and ensured that the bill was not considered by legislators in a vacuum, but on the condition that aid for displaced workers be tied to the passage of the bill. Legislators may have thus felt that the intensity of interest group disapproval could be reduced or negated by the institution of these separate benefits.

"The White House ensured that the bill was not considered by legislators in a vacuum"

Additionally, the very fact that the White House lobbied hard for the bill's passage, especially in the context of economic recovery following the Great Recession,50 may have favorably affected legislative attitudes toward KORUS-FTA. This possibility is reinforced by a study of NAFTA that found that Clinton's outreach to legislators of his own party had a statistically significant and strong positive effect on the probability of an affirmative vote (it however had the opposite effect on Republicans).51

Conclusion

Thus, self-evidently, the importance of interest groups, especially labor, does not somehow mean that free-trade agreements cannot make their way through Congress. However, this paper's quantitative analysis has reinforced other findings demonstrating the impact of class-based interest groups, showing that labor and business continue to play a significant role in U.S. policymaking on international trade. Moreover, this paper validates the claim by a sweeping study of trade policy votes from 1987-2006 that "compensating labor—the loser of free trade in the class-based model—is necessary for successful trade liberalization."52 Along these lines, Congress ratified KORUS-FTA only after extensive concessions that even enticed some powerful labor interest groups to support the bill.

All in all, free trade is empirically advantageous to economic development,53 receiving practically universal support from economists.54 KORUS-FTA alone, even in its altered form, has been estimated to increase U.S. GDP by $10.1-$11.9 billion, or approximately 0.1%—not an incredibly large amount, but not inconsequential, especially in light of how GDP growth compounds. Though KORUS-FTA eventually passed, the almost six years of interest group wrangling between the conclusion of initial negotiations and eventual ratification meant that the U.S., South Korea, and the rest of the interconnected global economy were not able to reap economic growth due to obstacles in the political process. Though navigating distributional and ethical concerns is not straightforward, by better understanding the mechanisms that drive trade policy, policymakers and voters can hopefully learn how to adjust to political pressures and act in the interest of societal welfare. ■

50 Montgomery and Goldfarb, "Obama gets win."
51 Uslaner, "Let the Chits Fall Where They May?," 358.
52 Jeong, "Constituent Influence," 537.
53 Francisco Rodriguez and Dani Rodrik, Trade Policy and Economic Growth: A Skeptic's Guide to the Cross-National Evidence. NBER Macroeconomics Annual 2000, Volume 15. Edited by Ben S. Bernanke and Kenneth Rogoff. MIT Press, 2001, 261-262.
54 William Poole, "Free Trade: Why Are Economists and Noneconomists So Far Apart?" Federal Reserve Bank of St. Louis Review 86, no. 5 (2004): 1-2.



Once You Go Black Market...
Official and Black Market Exchange Rates in Latin America
Joel L. Phillips, Columbia University

Argentina's debt default earlier this year was a major event in the financial world. As often happens with such disasters, economists suddenly became interested in every aspect of the Argentinian economy, as well as its Latin American neighbors. Among the recent literature on Latin American economies, this article stood out. Its topic is unique and compelling (who doesn't like reading about black markets?), and it also addresses an issue that has affected Argentina since at least its 2002 debt crisis: namely, the appropriateness of its official exchange rate. Beyond the relevance of his choice of topic, Phillips does an excellent job of laying out his model in a readable manner, and his theorizing is spot-on. Overall, the paper adds to existing literature, and raises serious questions about the best next steps for the states he considers. -V.S.

The presence of black or parallel currency markets is not a new phenomenon. Many emerging economies have had these unofficial, often illegal markets for decades, and Latin American countries have been no exception. Mexico, Chile, Brazil, Argentina, and Venezuela each have had active parallel exchange markets in past decades as they struggled to find a balance between capital controls, floating versus fixed exchange rates, and severe inflation. These black markets appear in foreign currency markets because of strict capital controls, aggressive monetary policy, and chronic high inflation. In an effort to control skyrocketing inflation, a government may attempt to curb demand for more stable foreign currency by severely limiting access to it. However, history has shown us that demand is not typically constrained by these restrictions. The individuals seeking foreign currency for a stable place to park their savings and the firms that need dollars for international transactions each have a strong incentive to look to other markets to meet their currency needs. Thus, demand in excess of the official supply brings about parallel illegal markets, and the currency supply of these markets is funded by tourism and over/under invoicing.

In theory, the black market rate should be close to a true open-market floating rate because supply and demand are not regulated. In contrast, exchange rates in official currency markets are often pegged or have some form of capital control in place that impedes free movement. The presence of a black market premium, or the difference between black market and official rates, signals a misalignment in rates and may cause leaders to reconsider current monetary policy.

A number of interesting questions arise from the parallel market phenomenon, and a sizable amount of literature exists on the subject. Recent literature has shown that PPP performs better with black market rates: Diamandis (2003) looked into the validity of purchasing power parity (PPP) in the emerging markets of Brazil, Argentina, Chile, and Mexico, where well-established parallel markets existed during the 1970s to early 90s. His conclusion was that PPP is indeed maximized when black market exchange rates are applied.

This paper examines the relationship between the official and black markets through purchasing power parity (PPP) tests using data from Argentina and Venezuela. The outcome of my regression tests using this recent data supports the results that other economists have come to: black market rates do indeed give better results than official rates for PPP. Data from Argentina and Venezuela were chosen, as these are two countries that have active parallel markets, one with a floating and the other with a fixed rate. In general, a challenge in this research is in securing complete and consistent historical black market exchange rate data, since, by their very nature, these rates are not official.
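As a concrete illustration of the black market premium defined above (the rates here are invented, not drawn from the paper's data):

```python
# Black market premium as a percentage of the official rate.
official_rate = 8.0   # local currency per U.S. dollar, official market
parallel_rate = 13.0  # local currency per U.S. dollar, parallel ("blue") market

premium_pct = (parallel_rate - official_rate) / official_rate * 100
print(f"Black market premium: {premium_pct:.1f}%")  # 62.5%
```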

“In theory, the black market rate should be close to a true open-market floating rate because supply and demand are not regulated.”

However, in Argentina, the black market or "blue" rate is so widely used that it is published daily in newspapers and online, despite the fact that the government has gone to great lengths to prevent the purchase of U.S. dollars. Similarly, there are websites and other sources of exchange rate information available in Venezuela. Although there exists some doubt regarding the accuracy of the reported data, I still make the general conclusion that PPP does hold better using black market rates.

Research with Recent Data

This paper examines data from Argentina from January 2010 until March 2014 and for Venezuela from June 2010 through March of 2014, as these two countries have shown significant activity in parallel markets in recent years and black market premiums have soared. CPI data was taken from Bloomberg and countryeconomy.com, a website that aggregates data from various sources, while data on exchange rates were taken from the International Monetary Fund and Bloomberg.

Figures 1 and 2 show the month-over-month black market premium as a percentage of the official rates in Argentina and Venezuela during this time period. The data shows that the premium varies greatly. While black market rates respond to shocks and increased demand in the market for foreign currency, official rates are either controlled (Venezuela) or limited in access (Argentina), so their responses to changes in demand are slower or nonexistent in the short run. To examine long-term relationships, a larger data set would be necessary.

To examine how each market rate performs in relation to purchasing power parity (PPP) tests, I used the following formula to run regressions and test each market's performance in PPP equilibrium:

log S = α + β log(P/P*) + ε

In the equation, S is the local exchange rate, P is the price level for Argentina or Venezuela, and P* is the U.S. price level (using CPI data). In theory, β should equal 1 and α should equal 0 in order for PPP to hold. Cointegration between markets is essential and assumed in the research.


18

Fall 2014 There is room for error in these calculations, mostly due to limited and often inaccurate data. For example, in February of 2013, the IMF released a declaration of censure for Argentina, citing tainted price reporting. The data I used, however, was the official IMF data during this same period. Despite these concerns, the data still show the trend that was expected and confirm that black market rates do a better job at satisfying PPP. Conclusion By using recent data from Argentina and Venezuela and assuming cointegration, I was able to reproduce some of the same conclusions that other economists have reached: that PPP holds more strongly when black market data is used. However, black markets may not necessarily represent a true market-driven floating rate, sincer there seems to be an additional premium, as PPP did not hold perfectly. This premium probably accounts for the risk undertaken by trading in the illegal market. This, however, is wholly dependent on the level of enforcement from thelocal government. Another factor that should be taken into account is the lack of free-flowing and aggregated information about black markets. n

squared for the black market regression is slightly higher as well. In the case of Venezuela, the official rate yielded a β of .8865 whereas the coefficient using the

black market date was 2.6721. Still, the rsquared for the black market was considerably higher for Venezuela. In all cases the betas were significant at the 5% level.
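For concreteness, the test just described can be run in a few lines. The sketch below is illustrative only: the file and column names are hypothetical stand-ins for the Bloomberg, IMF, and countryeconomy.com series used in the paper.

```python
# Illustrative sketch of the PPP test: regress log S on log(P/P*)
# separately for the official and black market ("blue") rates.
# File and column names are hypothetical; PPP holds when alpha = 0, beta = 1.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("argentina_monthly.csv")  # hypothetical monthly series

# Black market premium as a percentage of the official rate (cf. Figures 1-2)
df["premium_pct"] = 100 * (df["blue"] - df["official"]) / df["official"]

def ppp_regression(rate_col):
    """OLS of log exchange rate on log relative price level (local CPI / US CPI)."""
    y = np.log(df[rate_col])
    X = sm.add_constant(np.log(df["cpi_local"] / df["cpi_us"]))
    return sm.OLS(y, X).fit()

for rate_col in ("official", "blue"):
    res = ppp_regression(rate_col)
    alpha, beta = res.params
    print(f"{rate_col}: alpha={alpha:.4f} beta={beta:.4f} R2={res.rsquared:.3f}")
```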


Into the (Bretton) Woods
The Motivations and Collapse of the Bretton Woods System

Sunpreet Singh
Columbia University

In the wake of the Great Depression and in the midst of World War II, the Allied nations gathered in Bretton Woods, N.H. to devise a system that would promote international trade and economic stability. This led to the creation of the now-defunct Bretton Woods system, the first effort among nations to coordinate their monetary policies, and to the establishment of the IMF and the IBRD. This paper dissects the agreement and its aftermath in order to pinpoint the weaknesses that ultimately led to its downfall. By examining the structure of the system and the subsequent actions of the member nations, the author shows that the ultimate collapse of the system was inevitable, and that its initial assumption, that international cooperation would trump national self-interest, was false. The significance of the Bretton Woods problem extends far beyond the system's collapse. After its demise, the European countries established the European Monetary System, which then led to the European Monetary Union and a shared currency. Since the 2008 financial crisis, we have witnessed tensions between nations linked by an artificial monetary system. By examining the analogous Bretton Woods system, we can discover which characteristics help and hinder such a system, and in so doing create a more prosperous and united global community. – E.J.N.

The Bretton Woods system was a negotiated system of monetary management between states that sought to strengthen and give order to the world of currency and international finance. Developed at the Bretton Woods Conference in 1944, the system was engineered in a climate heavily influenced by the Great Depression and World War II. Its goal was to establish a new system of international finance geared towards promoting international commerce and cooperation. However, the system had collapsed by 1971, marked by the infamous “Nixon shock.” This paper will focus on the development of the Bretton Woods system and isolate the reasons for its collapse. Starting with a description of the system as it was designed at the Bretton Woods Conference, the paper will examine the motivations behind the design by looking at the historical circumstances and the interests of US policymakers, both of which played a significant role in the system's construction. Following this, the paper will examine the “Nixon shock” and explain the factors that contributed to the collapse of the Bretton Woods system. The paper concludes that while on the surface the system seemed successful in its goals, its short-sighted design and its shaky foundation of international cooperation doomed it to collapse.

The Bretton Woods system can be examined in three parts: 1) the establishment of the dollar-gold standard through the International Monetary Fund (IMF), 2) the development of a regulatory system around the IMF, and 3) the creation of the International Bank for Reconstruction and Development (IBRD). The dollar-gold standard has its basis in Article IV of the Articles of Agreement of the International Monetary Fund: “The par value of the currency of each member shall be expressed in terms of gold as a common denominator or in terms of the United States dollar of the weight and fineness in effect on July 1, 1944.”1 This meant that the nominal value of each currency was to be pegged to gold or to the US dollar, which, due to the Gold Reserve Act, was fixed at $35 per ounce of gold. The dollar quickly cemented its central role in this system for a number of reasons. Initially, the introduction of the dollar into the text of the agreements was due to the desire to avoid the pitfalls of the gold standard by being able to adjust for fluctuations through the manipulation of currency exchange rates.2

1 U.S. Department of State, United Nations Monetary and Financial Conference: Bretton Woods, New Hampshire, July 1 to July 22, 1944: Final Act and Related Documents (Washington: Government Printing Office, 1944), 31.



The stability of the dollar and its full convertibility with gold made it the prime candidate to provide the relative standard upon which exchange could be based. In making the dollar the currency against which others would be pegged, the dollar adopted the role of an international currency. Countries preferred to stock their reserves with dollars instead of gold, since dollars were considered liquid and fully convertible into other currencies and gold. In addition, dollars would serve the same function as gold in terms of backing currencies, since dollars were backed by the United States' gold reserves, which comprised the majority of world gold reserves, and were guaranteed at a fixed rate of gold convertibility.3 The compounding of all these benefits established the dollar-gold system, in which countries would back their currencies primarily on the dollar, which in turn was backed by gold.

While the establishment of the dollar-gold system was substantial in the design of a new monetary system, the development of a regulated framework around this system was seen as a necessary complement. Under Article IV, each state would have its currency pegged at a specific par value (restricting the value at which it could buy or sell gold) and would maintain that par value within a one percent margin of parity with gold or the dollar when dealing with exchange transactions.4 States were expected to do so by participating in the foreign exchange market, buying and selling gold or currencies to balance their international transactions.5 The par values, or pegs, were meant to be fixed, adjusted only in the case that a state proposed a change in order to correct a “fundamental disequilibrium.” The IMF would evaluate the state's proposal based on the extent of the proposed change (a change under ten percent, for instance, would receive no objection) and the validity of the state's claim that the change was necessary to correct a “fundamental disequilibrium”, a situation that was never clearly defined and was open to exploitation by states. Beyond this, a state would be able to change its par value upon uniform changes in the par values of other countries, or if the change would not affect the international transactions of other states.6

In addition to these regulatory measures, the IMF maintained standards that sought to facilitate monetary transactions. Article V of the Articles of Agreement detailed the fund aspect of the IMF, which required nations to meet a quota of currency and gold contributions. Countries were granted the privilege of purchasing or selling currencies from this fund, which would assist in balancing payment disequilibria. This privilege was subject to regulation to ensure healthy reserves of currencies, and could be revoked if the IMF saw a state as using the Fund's resources to act against its interests.7 Such actions, outlined in the obligations of member states under Article VIII, included discriminatory currency practices and multiple currency arrangements aimed at harming specific states. Another major obligation under Article VIII was the establishment of full currency convertibility for each state, enabling increased commerce by creating a world in which states would freely exchange currencies and gold with one another.8 While the laying out of a heavy regulatory framework might make it seem as if the IMF could ensure a stable international monetary system, it suffered from weak enforcement and from limitations on intervening in domestic policies, which limited its ability to prevent the collapse of the Bretton Woods system.

The last major part of the Bretton Woods system was the creation of the International Bank for Reconstruction and Development. This institution was designed to provide loans that would help restore economies ravaged by war and promote their stable reemergence into the global economy through forgiving interest rates and loan terms.9 The IBRD and IMF, as described, create a detailed picture of what policymakers of the time designed to be their new international monetary system. In analyzing this design one can draw out the motivations of policymakers and understand how specific parts of the system sought to manifest those motivations.

The motivations behind the Bretton Woods system were largely consequences of the times from which it arose. At the time of the conference the world was emerging from World War II. The war encouraged nations to band together to prevent future conflicts and to establish institutions, such as the United Nations, that would ensure international cooperation and work to maintain peace. Henry Morgenthau Jr., the United States' Secretary of the Treasury and President of the Bretton Woods Conference, expressed the common belief that such cooperation and peace would be contingent on the establishment of an economic system designed towards these goals.10 In his opening address to the Bretton Woods Conference, Morgenthau explained the economic interconnectedness of the world using the example of the Great Depression. He saw the currency disorders that spread during this time as the instigators of a downward spiral running from the collapse of international trade and investment to conditions like high unemployment and political vulnerability, ultimately leading to the rise of belligerent dictators and the onset of war.

2 Emanuel A. Goldenweiser and Alice Bourneuf, “Bretton Woods Agreement,” Federal Reserve Bulletin (September 1944), 2.
3 Ronald I. McKinnon, The Unloved Dollar Standard: From Bretton Woods to the Rise of China (New York, NY: Oxford University Press, 2013), 38.
4 U.S. Department of State, United Nations Monetary and Financial Conference, 31.
5 Goldenweiser and Bourneuf, “Bretton Woods Agreement,” 3.

6 U.S. Department of State, United Nations Monetary and Financial Conference, 32.
7 Ibid., 34.
8 Ibid., 39-41.
9 Chamber of Commerce of the United States of America, Taxation and Finance Department, Bretton Woods Program: International Monetary Fund and International Bank for Reconstruction and Development; Report of Finance Department Committee (Washington, 1945), 8.

10 Henry Morgenthau Jr., “Bretton Woods and International Cooperation,” Foreign Affairs 23 (2) (January 1945), 183.
11 Henry Morgenthau Jr., “Address by the Honorable Henry Morgenthau, Jr. at the Inaugural Plenary Session, July 1, 1944,” United Nations Monetary and Financial Conference: Bretton Woods, New Hampshire, July 1 to July 22, 1944: Final Act and Related Documents (Washington: Government Printing Office, 1944), 4-5.


Fierce economic competition for world trade led to policies such as the competitive depreciation of currencies and the limiting of the free movement of goods. As countries sought to expand their economic well-being at the expense of others, economic aggression grew. For Morgenthau, “economic aggression can have no other offspring than war. It is as dangerous as it is futile.”11 The new system would have to be one in which states worked in collaboration and avoided acts of economic aggression towards one another.


Looking closer at Morgenthau's explanation of the economic path to war, one can piece together the objectives of the Bretton Woods Conference. The central objective was to establish an environment conducive to international trade and investment; thus, the well-being of each nation would be maximized while avoiding conflicts that would lead to war. The key element of this would be establishing a system that avoided currency disorders, thereby facilitating trade and investment.12 Despite this claim of promoting international welfare, it was made clear that international welfare was believed to simultaneously promote the domestic welfare of states. As Morgenthau explained, “The American delegation, which I have had the honor of leading, has at all times been conscious of its primary obligation—the protection of American interests. And the other representatives here have been no less loyal or devoted to the welfare of their own people. Yet none of us has found any incompatibility between devotion to our own countries and joint action. Indeed, we have found on the contrary that the only genuine safeguard for our national interests lies in international cooperation.”13 The collapse of the Bretton Woods system was founded upon the collapse of this assumption that mutual welfare would be guaranteed by international cooperation.

Monetary stability and the breakdown of barriers were seen as key to promoting international cooperation, and these two goals were the pillars upon which the Bretton Woods system was designed. In creating the dollar-gold standard, policymakers recognized the usefulness of gold as the foundation of a currency system, but knew that in order to avoid the same pitfalls that led to the Great Depression they would need a system that would stabilize exchange. The introduction of pegging against the dollar was done to promote exchange stability while retaining the ability to make adjustments, something that was difficult to do under the gold standard alone. The regulations on establishing par values, the limiting of changes with the introduction of margins, and the need for IMF approval for larger changes also reflected this goal of exchange stability. IMF approval for larger changes, as well as policies against competitive exchange depreciation and multiple currency arrangements, was put in place to limit tactics of economic aggression, so that states would be limited in their abilities to harm other states; the interconnectedness between states would make such tactics harmful to the stability of the entire system. Despite these measures to produce stability, it was clear that stability was not the main goal. As Goldenweiser and Bourneuf put it, “Stability, however, is viewed not as an end in itself but as a means of promoting trade, and, through trade, a high level of employment and income.”14 The domestic welfare of states, through employment and high incomes, was paramount, and the entire Bretton Woods system would be useful only if it met those goals.

12 Ibid., 8-10.
13 Ibid., 8.

14 Goldenweiser and Bourneuf, “Bretton Woods Agreement,” 2-3.

The “Nixon shock” made it abundantly clear that this system, lauded as a pillar of international cooperation, was rooted in a self-interest that worked against its design and ultimately contributed to its collapse. In 1971, President Richard Nixon addressed the nation on the development of a new economic policy. The goals of this policy were to target unemployment, inflation, and the strength of the dollar. It is telling of the breakdown in international concern that these goals were very similar to those Morgenthau set at the Bretton Woods Conference, but scaled down to serve the United States rather than the world. The domestic pressures of unemployment and inflation, which Nixon saw as closely related to the strength of the dollar, were the impetus for large changes to American monetary policy. The two actions taken were the suspension of the convertibility of the dollar into gold and the imposition of a ten percent tax on imports, both of which would not be lifted until other countries shifted their exchange rates and appreciated their currencies. These actions were aimed at international money speculators and against foreign competition. Nixon claimed that “speculators have been waging an all-out war on the American dollar” and that the suspension of convertibility would be “the action necessary to defend the dollar against the speculators.” Nixon's goal was to stabilize the dollar; the objective, however, came at the expense of destabilizing the international monetary regime, which was centered on the dollar's convertibility. In imposing the tariff on imports, Nixon sought “to protect the dollar, to improve our balance of payments, and to increase jobs for Americans.” By having other nations appreciate their currencies, rather than devaluing the dollar, Nixon intended that “the unfair edge that some of our foreign competition has will be removed.” To Nixon, there was a threat to America's competitiveness in global trade, and the current system was outdated: it had been established to help rebuild weak post-World War II economies, but these once-weak economies, like Germany and Japan, had grown into strong competitors while still reaping the benefits of devalued currencies. By making the changes outlined in his address, Nixon, who blamed a lack of fair competition for the fact that the United States' “trade balance has eroded over the past 15 years,” wanted “exchange rates to be set straight and for the major nations to compete as equals.” Nixon's drastic changes were clearly made due to the slipping of American interests in the Bretton Woods system, which ultimately led to its collapse.15

While the system was designed for international cooperation, which was believed to serve the mutual interest of the participant states, it worked only insofar as mutual benefit actually existed for the participating states. Once a state, especially the United States at the center of the system, no longer experienced a benefit, it would begin to pursue its own interest and the system would begin to fail. While this was a major factor in the system's breakdown, it was only one of the many structural failings of the Bretton Woods system that doomed it to collapse.

Under an analysis of nine variables (the rate of inflation, real per capita growth, money growth, short- and long-term nominal interest rates, short- and long-term real interest rates, and the absolute rates of change of nominal and real exchange rates), the Bretton Woods regime exhibited the best overall macroeconomic performance in comparison to the gold standard (1881-1913), the interwar partially backed gold standard (1919-38), and the floating exchange regime (1974-1989).16 Despite this apparent success in maintaining stability and commerce, as it was designed to do, the shortsighted architecture of the Bretton Woods system made it prone to collapse. One issue was the allowance of changes to parity for “fundamental disequilibrium”. The concerns of the Finance Department Committee rang true: because there was no definition for such an event, the feature would be (and was) abused.17 In addition, there were concerns that any country was free to generate “fundamental disequilibria” on its own by pursuing expansionary monetary policies in support of budget laxity.18 This was troubling, as the very structure that existed to ensure stability among exchange rates was itself subject to manipulation and abuse. Things were made worse by the IMF's implementation of the Articles of Agreement. On the part of the IMF, “considerable effort was expended on making it clear that the ‘fundamental disequilibrium’ requirement was not really a limitation on prompt and small exchange rate changes,” since it believed these changes were necessary to the stability of the system.19 The IMF also failed to enforce its regulations upon the world, allowing a perversion of the system designed at the Bretton Woods Conference. States would change their exchange rates and par values without consulting the IMF, or would not declare a par value at all, choosing to float, while still maintaining full access to the IMF's resources. In addition, many states did not work to make their currencies convertible or to maintain the one percent margin around their par value.20 This weakening of the IMF allowed further dependence on the dollar and a weakening of the regulatory structures that had been put in place to ensure the longevity and stability of the system without straining any one currency. The lack of enforcement meant that even if the system had been designed perfectly, issues would still have arisen.

Another problem brought about by the ability to change parities for “fundamental disequilibrium” was the issue of speculators, which Nixon argued hurt the stability of the dollar and acted as a major factor in the “Nixon shock.” Because changes in parity were allowed subject to approval of the IMF, speculators would take heed and position themselves to profit since, “if the currency is devalued, they win; if it is not, they lose only the interest (if any) on the speculative funds.”21

15 Richard M. Nixon, “Address to the Nation Outlining a New Economic Policy: ‘The Challenge of Peace,’” August 15, 1971, The American Presidency Project, <http://www.presidency.ucsb.edu/ws/?pid=3115>.
16 Michael D. Bordo, “The Bretton Woods International Monetary System: A Historical Overview,” A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform (Chicago: University of Chicago Press, 1993), 27.
17 Chamber of Commerce, Bretton Woods Program.
18 Alberto Giovannini, “Bretton Woods and Its Precursors,” A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform (Chicago: University of Chicago Press, 1993), 123.
19 Kathryn M. Dominguez, “The Role of International Organizations in the Bretton Woods System,” A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform (Chicago: University of Chicago Press, 1993), 378.
20 Ibid., 379-80.

Even without the structure of the IMF, speculators would be keen on shifts in gold or currencies, allowing them to relay those changes to how they believed the dollar would react. Since the dollar was fixed in terms of gold and would usually adjust by increasing its money supply to meet increasing demand, speculators would often be able to make bets and profit off the dollar, despite underlying issues.22 These issues were also fundamental to the Bretton Woods system, and as they became more apparent, they opened the door to more dangerous speculation. The primary issue was the shrinking of the United States' monetary gold stock and the increase of dollar claims that emerged. At the onset of the agreement, the United States owned the majority of the world's gold reserves. In 1951, the United States held $22.9 billion of the total official gold reserves of the IMF, which amounted to $33.5 billion. In 1968, the United States held only $10.9 billion of a total $38.7 billion. This was a result both of the massive amounts of aid that the United States supplied in the form of dollars, due to the weakness of the IBRD, and of the fact that states would purchase dollars as they grew, in order to adjust their exchange rates. In some cases, like that of France, states looked to exchange the dollars for gold.23 This created a system grounded on weak confidence: as dollars flowed abroad while world economies grew and sought to back their currencies, the number of dollar claims that the United States could not back in gold grew. This was compounded by the domestic policies of the United States, which faced increased levels of unemployment and inflation close to the “Nixon shock”. Policies such as increased social welfare and wartime expenditures were largely funded by the printing of dollars, which created further inflation.24 There was therefore an almost constant fear of a run on the dollar, which would incite speculators and cast doubt on the strength of the dollar. In establishing the Bretton Woods system, the architects failed to recognize that the growth of world economies would strain the United States' gold reserve and the strength of the dollar, a major reason behind the “Nixon shock” and the collapse of the Bretton Woods system.

In conclusion, while the Bretton Woods system allowed for a period of general stability, it was doomed to be short-lived. The motivations of the system were centered on promoting the welfare of individual states through international cooperation. However, the IMF's weak enforcement of its regulatory framework and the reliance on the dollar, constrained as it was by its fixed exchange rate with gold, made international cooperation and state welfare hard to reconcile, especially for the United States. The growth of the world economies, the shift in the holdings of gold reserves, and the ensuing issues with confidence in the dollar were results of the Bretton Woods system not evolving with the world around it. Ultimately, the prevalence of state interests over international cooperation and the overdependence on a gold-backed dollar made the collapse of the Bretton Woods system “one of the most accurately and generally predicted of major economic events”.25

21 Bordo, “The Bretton Woods International Monetary System,” 45.
22 Ibid.
23 McKinnon, The Unloved Dollar Standard, 44.
24 Peter M. Garber, “The Collapse of the Bretton Woods Fixed Exchange Rate System,” A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform (Chicago: University of Chicago Press, 1993), 470-6.
25 Garber, “The Collapse of the Bretton Woods Fixed Exchange Rate System,” 461.



The Italian Job
Voting Preference and Information Aggregation in the 2013 Italian General Elections

Alberto Prati
Columbia University

This paper constructs a model of the 2013 Italian general election in order to examine the effects of the Democratic Party primary on the outcome of the general election. The model illustrates two interesting theoretical results: the use of a primary election as a mechanism for candidate selection may disadvantage a party in the general election if there occurs a Condorcet cycle in the preferences of the voting population, or if the timing of candidate announcement is unregulated. In both cases, the primary election mechanism can result in the selection of a candidate who will lose in the general election, an outcome potentially avoidable via an alternative selection mechanism. In order to focus on demonstrating this theoretical dynamic, the paper sacrifices some of the more complex and interesting elements of the 2013 Italian general election – in particular, the startling success of the Five Star Movement, which garnered 26% of the vote. In addition, the conclusion of the paper does not consider that in selecting a candidate to represent their party in a general election, party members may face other considerations besides maximizing the chance of their party's winning the election. Indeed, primary elections exist to provide a procedure for internal competition within parties. Prati elides this complexity by making the simplifying assumption that the general election result can be predicted with certainty at the time of the primary. On the whole, however, we think that this is a neat paper which provides a valuable demonstration of its central theoretical claim. – D.B.F.

Political parties in Western Europe increasingly use primary elections to choose the candidate that the party will present at the general election. The Italian left-wing Democratic Party (PD – Partito Democratico) has used this democratic tool since the party's birth in 2007. The principle is clearly expressed at the beginning of the party statute, whose article 1.2 states that “The Democratic Party commits the fundamental decisions concerning […] the choice of the candidature for the principal institutional positions to the participation of all its electors, women and men.” This procedure was also followed in fall 2012, when the coalition of left-wing parties organized primary elections to present a common candidate in the general elections set for the following months. Two candidates were largely favored, both belonging to the PD: Pier Luigi Bersani and Matteo Renzi. In fact, in the first round of voting these two candidates dominated the election, with 45% and 35% of the preferences respectively. In the second ballot, Bersani beat Renzi by obtaining the majority of the votes (60%).

In the general elections of February 24th-25th for the constitution of the parliament, Bersani's primary competitor was Silvio Berlusconi, leader of the right-wing party People of Liberty (PDL – Popolo della Libertà). Despite the scandals that had involved Berlusconi and undermined his prestige, the coalition led by Bersani did not manage to obtain a clear majority, and the outcome of the election was, de facto, a tie: 29.55% for Bersani and 29.18% for Berlusconi. A week later, in an interview with the Italian newspaper Il Messaggero, Renzi stated that he would have won the general election if he had not lost the primary. Several columnists and political analysts shared this opinion, casting doubt on the efficiency of these primary elections. This democratic tool seemed to have failed to select the best candidate for the left-wing party, which was damaging for the left-wing constituency and for all citizens in general, who were forced to make a sub-optimal pairwise choice in the general election.

My aim is to build a simple model able to explain the cumbersome result of the Italian elections of 2013 and to answer the question: was holding a primary election an optimal choice for the PD? My analysis, which can be partially extended beyond the framework of these elections, focuses on the risk of counter-productive outcomes and on the role of the timing of the selection of party leaders. I limit my analysis to a primary election with two candidates, where the winner faces a candidate of the opposing party. In the first section, I model the situation of the Italian elections according to four assumptions. In the second section, I solve the model assuming that people vote sincerely. In the last section, I relax the assumption of sincere voting and allow strategic behavior in the choice of the candidate by both the parties and the voters. Under both assumptions, sincere and strategic voting, the PD's primary elections show important possible adverse effects, which cast doubt on the efficiency of this democratic tool in some specific contexts.

Assumptions of the Model

This model simplifies the Italian voting system. I present the general election as a single-winner presidential election, while the real election is designed to determine the distribution of the parties in the parliament; the leader of the party having obtained the majority of votes is then charged by the President of the Republic with forming the government. The voting law used in the elections of February 24th-25th, 2013 was the Calderoli Law, a complicated semi-proportional system with a bonus for the largest coalition and block voting, which was declared unconstitutional on December 4th, 2013. The other important difference is that we discuss the general election as a contest between two parties, while in reality it is a multi-party election. There are two main consequences of these simplifications. First, I assume people vote looking only at the election results for the government. In reality, however, voting for a party that is unlikely to win a majority is still meaningful in order to increase this party's representation in Parliament. This element does not significantly affect the model, since voters have a positive utility even if they vote for the losing party, as I will discuss later. Second, I exclude from the model the Movimento 5 Stelle, which obtained 1/6 of the seats in the Parliament.

I) The candidates have two observable characteristics: orientation and traits.

• The orientation (or ideology) of a candidate reflects his position on the Left-Right political spectrum, which I express through the bivariate variable O.


• The traits (or characteristic quality) express a set of approaches, feelings, and behaviors that place the candidate in a sharp position with respect to a particular dichotomy (young/old, with charisma/without charisma, etc.). I express it as T, a bivariate variable representing quality. In the following discussion, T expresses the distinction between reformist and conformist candidates, i.e. between the new and the old ruling class. This dichotomy, often discussed in politics, is dominant in the current Italian political debate. The conformists are members of the old ruling class, who had important roles in the government and within the parties for several years and who look for solutions within the current political schemes, i.e. they conform to them. The reformists are members of a new class, younger politicians who have recently attained some important roles and who are more willing to go outside the current political schemes, i.e. to reform them.

II) I examine patterns of voting types based on ordinal preferences, determined by two observable criteria and the voter's own action. Every candidate $j$ has a trait $t_j$ drawn from $T = \{\text{conformist}, \text{reformist}\}$ and an orientation $o_j$ drawn from $O = \{\text{left}, \text{right}\}$. Every voter $i$ has a trait $\tau_i$ drawn from $T$ and an orientation $\omega_i$ drawn from $O$. Utility is also endogenously affected by the voter's preferences over her own action: I define an action $\alpha_i \in \{0, 1\}$, where $\alpha_i = 1$ means voter $i$ votes for her preferred candidate $j$ and $\alpha_i = 0$ means she does not. According to the relative importance that each voter gives to orientation and traits, we distinguish two types of utility functions. The payoff of any voter $i$ from the election of any candidate $j$ is a function of the exogenous characteristics of the elected candidate and of the endogenous action undertaken by the voter:

• Type 1: $U_i = \mathbf{1}_{\omega_i}(o_j) + \lambda\,\mathbf{1}_{\tau_i}(t_j) + \alpha_i \varepsilon$, with $\lambda > 1$, $0 < \varepsilon < \lambda$

• Type 2: $U_i = \lambda\,\mathbf{1}_{\omega_i}(o_j) + \mathbf{1}_{\tau_i}(t_j) + \alpha_i \varepsilon$, with $\lambda > 1$, $0 < \varepsilon < \lambda$

where $\mathbf{1}_{\omega_i}(o_j)$ is an indicator function of voter $i$ with respect to candidate $j$, $\mathbf{1}_{\omega}: O \to \{0, 1\}$, defined as $\mathbf{1}_{\omega_i}(o_j) = 1$ if $o_j = \omega_i$ and $\mathbf{1}_{\omega_i}(o_j) = 0$ if $o_j \neq \omega_i$.

The voters make their choice according to their expected utility from the election of a candidate. For any voter $i$, $\mathbf{1}_{\omega_i}(o_j)$ takes value 1 if the winning candidate $j$ belongs to the same political orientation as voter $i$, 0 otherwise; $\mathbf{1}_{\tau_i}(t_j)$ takes value 1 if the winning candidate $j$ possesses the traits considered valuable by voter $i$, 0 otherwise. Thus, I am assuming that certain traits play in favor of the candidate when there is correspondence (and not contrast) with the voter's personality. This idea has been demonstrated empirically by Caprara et al. (1999), in a study of the Italian elections of 1994 which showed a significant correspondence between voters' own traits and candidates' traits.

I call the element $\alpha_i \varepsilon$ sincerity. Sincerity does not influence my analysis under the assumption of sincere voting, but only in the case of strategic voting. The idea that payoffs are influenced by endogenously generated preferences over actions is drawn from Feddersen & Sandroni (2001). Intuitively, this means that each voter gets a bonus for voting sincerely or, conversely, pays a penalty for voting strategically.

Thus, theoretically, it is possible to distinguish 2³ = 8 types of voters (the combinations of two types of utility functions, two realizations of $\tau_i$, and two realizations of $\omega_i$):

• Reformist left-winger: reformist, left-wing, utility function type I
• Reformist right-winger: reformist, right-wing, utility function type I
• Left-wing reformist: reformist, left-wing, utility function type II
• Right-wing reformist: reformist, right-wing, utility function type II
• Conformist right-winger: conformist, right-wing, utility function type I
• Conformist left-winger: conformist, left-wing, utility function type I
• Left-wing conformist: conformist, left-wing, utility function type II
• Right-wing conformist: conformist, right-wing, utility function type II

For our model, I select four types of voters from this set of eight, namely:

• Reformist left-winger: cares first about the quality “reformist”, second about the orientation “left” (utility function type I).
• Reformist right-winger: cares first about the quality “reformist”, second about the orientation “right” (utility function type I).
• Left-wing conformist (or pure left-winger): cares first about the orientation “left”, second about the quality “non-reformist” (utility function type II).
• Conformist right-winger (or pure right-winger): cares first about the quality “non-reformist”, second about the orientation “right” (utility function type I).

For the first two types, I consider it reasonable that people who value a distinctive quality (here “reformist”) care more about it than about the orientation of the candidate; the left-wing reformist and right-wing reformist types are therefore rejected. For the second two types, the justification is more empirical. According to a national IPSOS survey in 2012, people who were willing to vote for Berlusconi or other parties appreciated Bersani, but not vice versa. This can be explained by the hypothesis that the constituency of the conservative wing is particularly averse to anti-status-quo outcomes (i.e. reformist candidates).



26 the favorite candidate is not always the optimal choice. I will further discuss this assumption later. IV) There is common knowledge of the preferences, i.e. each voter knows the ranked preferences of the other voters. This assumption seems particularly strong because it implies that the result of the election is perfectly predicted by each voter. However, I justify this assumption by stressing the following facts. Political polls have shown high precision in forecasting election results. The citizens actually vote in an uncertain environment, but they receive very reliable signals about the outcome of the elections. Thus, each person knows with a high probability the distribution of the all voters’ preferences. Pushing in this direction, it seems reasonable to think that each person knows with certainty the distribution of all voters’ preferences. This assumption does not make the vote of the losing parties meaningless since voting for the preferred candidate gives a positive payoff anyway. That is

Fall 2014 due to the αi ε element in both the utility function. Therefore, the vote for a losing party is still useful as an endogenous utility generator. This assumption does not influence the model as far as people are voting according to their preferences. However, it will play an important role in the second section of the paper, when I will abandon assumption III.

“thus, each person knows with a high probability the distribution of the all voters’ preferences. ” Sincere Voting Model I suggest here a model that is supposed to predict the real result of the elections of 2012-13 in Italy. I. The Preferences

Columbia Economics Review

There are 3 candidates: Pierluigi Bersani (P), Matteo Renzi (M) and Silvio Berlusconi (S). P and M are left-wing, S is right-wing; M is reformist, while P and S are conformist. Formally, j∈J= {P, M, S}, Om=oP ≠oS, tP=tS ≠tM. The mechanism of voting is a pairwise comparison, with P facing M at the primary elections and S facing the winner of the primary elections. Given the four types of voters previously described, I suggest n=6 voters having four different types of preference profiles. Of course, the number of voters belonging to each type is meaningful only in relative terms. Thus, if I say that the electorate is composed of two left-wing reformists and only one conformist leftwinger, it is the same as thinking of 10 million left-wing reformists and 5 million conformist left-wingers. I select n=6 because this is the minimum number of voters able to reproduce the results of the Italian primary and general elections of 2012-13. Indeed, in the primary elections the left-wing constituency was split approximately into: 1/3 of


Fall 2014 voters who supported Renzi and 2/3 of voter who supported Bersani. Then we need at least 3 for the left-wing voters. Afterward, the general elections ended up with a tie, so we need an even number of voters. n=4 satisfies both conditions but it requires that at least one voter has the order of preferences: R>B>P, which is inconsistent with our voters’ preferences. n=6 is the subsequent even number > 3, and for n=6 we can sketch a set of preference profiles consistent with our utility functions. Formally, i∈I= {1, 2, 3, 4, 5,6}, o1=o3=o5, o2=o4=o6, o1≠o2; t1=t2, t3=t4=t5=t6, t1≠t3 The preferences of the six voters are: • • • • • •

M>P>S (reformist left-winger) M>S>P (reformist right-winger) P>M>S (pure left-winger) S>P>M (pure right-winger) P>M>S (pure left-winger) S>P>M (pure right-winger)

The choice of these six types of candidates is strictly tied to this specific case and it has no pretension of generality. II. Multidimensional Analysis

Let me start my analysis by discussing a spatial comparison of the utilities of the six voters. I ignore for a moment the voting rule and look only at the utility of a voter depending on which candidate is elected. The best way to represent the utility function would be a four-dimensional graph, where the utility is a function of the three dimensions that determine voter’s choice: orientation, traits and sincerity. Unfortunately I cannot represent more than three- dimensions, but since here sincerity does not affect voters’ utility in the ordinal comparison of candidates (because of assumption III), I can ignore this variable for the moment and assume ε=0. Let me focus on the utility function of a left-wing conformist as an example of bidimensional analysis. His utility would look like fig.1 (I arbitrarily choose λ=1.2). Fig 1 has three axes but it represents a bidimensional analysis: the graph has three dimensions because it relates two dimensions (orientation and traits of the candidate) to the utility of the voter. The “x-axis” of the graph is the plane describing the orientation and the traits. The “y-axis” of the parallelepipeds represents the utility of the voter associated with each area of the orientation-traits plan. The graph indicates that the ideal candidate of the left-wing conformist

Columbia Economics Review

27 voter is a politician that falls into the categories of both: left wing and conformist (giving a utility equal to 2.20). By contrast, the worst candidate is one who is at the opposite corner, a right wing reformist (giving a null utility). Usually spatial models are used when the variable are continuous, not discrete. To flesh out the difference, consider for a moment the traits and the orientation to be continuous variables from a [0, 1] interval, rather than discrete variables from a {0, 1} set, such that 1ω: o → [0, 1]; 1τ: t → [0, 1]. What would the graph and the left-wing conformist indifference curves look like?

“the ideal candidate of the left-wing conformist voter is a politician that falls into the categories of both left wing and conformist” To answer this question, let me suggest a more convenient representation, offered by fig. 2, where I represent the traits-orientation plane as if looking directly down on it. On this plane, it is possible to distinguish an ideal point for a left-wing conformist voter, indicated by the red triangle. According to the previous claim that the voter appreciates the correspondence of his position with the candidate’s position, this point must represent the characteristics of the voter himself. Thus a leftwing conformist’s ideal point falls in the south-west part of the graph. The basic rule used in spatial modeling to calculate voter’s choice is that each voter votes for the candidate whose location minimizes the radial distance from his ideal point. However this rule does not take into account the relative importance that the voter assigns to each criterion, orientation and traits. Therefore, I can slightly modify the previous rule, by stating that each voter votes for the candidate whose location minimizes the ellipsoidal distance from her ideal point. The ratio of the major and minor axes is λ. The ratio of the y-axis difference and x-axis difference is bigger than the difference for the conformists and the reformists, smaller than the difference for the right-winger and the leftwinger. The indifference curves of a left-wing


28

Fall 2014 (lower-right in the graph), of 1.2 from the election of M (upper-left) and of 2.2 from the election of P (lower-left). Hence, the order of preferences is still: P>M>S. Indeed, this is the “x-axis” of fig. 1. III. Uni-dimensional Analysis: Condorcet Cycle and Single-Peakedness The previous paragraph looked at the evolution of the utility functions as a function of more than one variable. However, it is possible to merge the contributions of the orientation, the traits and the sincerity on a single axis without loss of generality. Thereby I collapse a multidimensional arrangement to a unidimensional arrangement, making the analysis simpler. The process is simple: I just compare a voter’s total utility (y-axis) to each candidate (x-axis). The graph below shows the utility functions of the four types of voters (we arbitrary chose again λ=1.2 and ε=0).

conformist are represented in fig. 2. The red triangle indicates the voter’s ideal point. The indifference curves move away from this point as elliptic waves and the closer a curve is to the ideal point, the higher the utility is. Following this criterion, it is clear that P is the candidate giving a higher utility, M is the second and S is the third one. This order is consistent with the preferences of voters 3 and 5: P>M>S. Why then did I choose a discrete codomain? A utility function which can take an infinite number of values on a continuous codomain requires two conditions. First, the candidate must be able to perfectly signal his position in the orientationtraits dimensions. For example, Renzi should be able to convey the information not only that he is left-wing and reformist, but how much he is left-wing and how much he is reformist. Second, the voter must use this detailed information to make his subsequent decision. These assumptions are not realistic—when a voter makes his decision with a small set of candidates, he weighs only a few simple bivariate pieces of information (e.g. is he left-wing? is he right-wing?) Therefore, let us come back to the utility functions presented in assumption II, where 1ω: o → {0, 1}; 1τ: t → {0, 1}. What are the key difference between the discrete and continuous cases? Instead of infinite

indifference curves there are only four indifference areas in the discrete scenario: conformist & left wing, reformist & leftwing, conformist & right wing, reformist & right wing. As a corollary, there is not a voter’s ideal point but only a voter’s ideal area.

“when a voter makes his decision with a small set of candidates, he weighs only a few simple bivariate pieces of information” The situation is drawn in fig.3. The four areas are marked by different colors representing the level of utility the voter gets from the election of a candidate belonging to that area. What does it mean when a candidate “belongs” to a specific area? Each candidate is represented by some location in the plane. Since the representation is discrete, any point within the same area is equivalent. In fig.3 the location of M, P, and S within the respective area is totally arbitrary. The darker the color is, the higher the utility is. A left-wing conformist would obtain a utility equal to 1 from the election of S Columbia Economics Review

“First, we notice that the preferences are not single-peaked.” What does this graph can tell us? First, we notice that the preferences are not single-peaked. Black’s Single-Peakedness Theorem states that in a set of three alternatives from which a group of voters must make a choice, if in the ordered preferences of each voter one of the alternatives is never worst among the three, then majority rule yields a group preference that is transitive. The geometric interpretation is that the preferences of the voters are singlepeaked when the utility functions representing preferences over the three alternatives have a maximum at some point on the line. Fig. 4 clearly shows that our six voters do not respect this condition. M is the worst option for the pure rightwinger, P is the worst option for the reformist right-winger and S is the worst ranked by both the pure and the reformist left-wingers. Since the preferences are not single peaked, there is no Condorcet winner. A Condorcet winner would be the candidate who wins with simple majority rule against each of the other candidates in a pairwise comparison. Here, P is strictly preferred to M and M is strictly preferred to S, yet a comparison between P and S yields a tie. This configuration is called Condorcet cycle and express the fact that


Fall 2014

knows by backward induction that if M is elected, M will not be able to win the general election against A. On the contrary P will beat A. Again the reformist left-winger will vote M, since this strategy strictly dominates P. Then only the outcomes of the bottom matrix are possible. The two left-

wing conformists will always play P, since this strategy strictly dominates M. The situation is turned upside down: the expected utility of the left-winger voters is maximized when both voting P, given that the next opponent will be A. Everybody votes according to his preferences, since this is the best strategy for

Columbia Economics Review

29

every voter. The only Nash equilibrium is given by the strategy profile {P, P, M}, where P wins the primary elections with 2/3 of preferences. Indeed this result corresponds to the real outcome of the primary elections. In the light of these last considerations, let’s try to explain Berlusconi’s behavior.


30 If the electors of the PD party are aware of the real candidate of the PDL party, the candidate of the PD will always win. If S is the official right-wing candidate, half of the left-wingers vote strategically and elects M. If A is the official rightwing candidate, the left wingers vote P. In both cases, the candidate of the PD wins thanks to 4 preferences. However, if the PDL transmits a credible false signal about its candidate in the general election, the election will result in a tie, regardless of which leader of the PD is elected. If the PDL announces A as leader, P will win the primary but he will actually face S in the general election (this is what actually happened); if the PDL announces S as leader, M will win the primary but he will actually face A in the general election. The PDL faces a problem of time inconsistency: the best plan for the party changes after the realization of the PD’s primary election. III. Failures of the Primary Elections with Sophisticated Voting The case of the Italian elections of 2013 stresses the importance of the agenda in the selection of party leaders. The parties experience a second-mover advantage in deciding who will represent the party at the elections. In particular we can distinguish two situations, either when both parties hold the primary elections or when only one party holds primaries: 1. If both parties organize primary elections, each party has an incentive to set its primary elections after the opposing party’s primary elections. Thereby, the voters would have a very reliable signal about who their elected candidate will face at the general elections and they could use this information to vote strategically.

“each party has an incentive to set its primary elections after the opposing party’s primary elections” 2. In a two-party system, if only one of the parties chooses its leader through primary elections, this could represent a handicap for this party. Being that the result of the primary elections

Fall 2014 are unquestionable, the party has no liberty to strategically change its leader in response to the other party’s leadership choice. Moreover the party that has a central decision of its candidate has an incentive to send a credible false signal about who will be its leader at the general election, as it happened in the Italian elections.

“recent history shows a different world” Theoretically, both situations would generate a second-date auction game where both parties announce their leaders or fix the date for the primaries as late as possible. If this deadline is common (for instance if it is decided endogenously by some institution), then both parties set the primaries/announce the leader at this date and there is no second-mover advantage. However recent history shows a different world. In Italy there is no law obliging the parties to hold their primaries on the same date. There is a deadline to announce the electoral list (included the top candidate) but primary elections are inevitably set well before this deadline. The result is a disadvantage for either the party holding primary elections (if it is the only one), or the party that holds its primary election first (if both parties use this democratic tool). Conclusions I built a simple model able to describe rather well the results of Italian general election of 2013 and of the left-wing primary elections of 2012. In this model, the voter’s decision was based on the mutual relationship of his characteristics and candidate’s. I restricted this perspective to the observability of two dimensions: the orientation and the traits. The Italian case of 2013 suggested that I should associate the dichotomy of reformismconformism to the binary variable of the traits. I suggested a credible distribution of voters and candidates. I discussed the implications of the finite codomain set of the utility functions I chose, by comparing a multi-dimensional analysis to a collapsed uni-dimensional analysis. The lack of single-peaked function demonstrated the presence of intransitive pref-

Columbia Economics Review

erences and therefore the presence of a Condorcet cycle. This peculiarity stressed the risk that the party holding primary elections elects the weaker candidate for the general election. This risk would be eliminated by allowing the whole population, not only the left wing, to vote in the primary elections. Nevertheless that would realistically imply a high risk of sophisticated voting by at least a part of the right-wing constituency. Thus, I extended my model by allowing people to vote strategically, under the penalty of a small moral cost. The model still represented Ill the results of the elections when I introduced an additional element: the upheaval in the right wing leadership right after the left wing primary elections. I interpreted this cumbersome behavior as a consequence of the time-inconsistent preferences of the right wing party: the best leader before the left-wing primary elections became the weakest candidate right after the primary elections.

Through the transmission of a credible false signal, the right party was able to take advantage of the sophisticated behavior of the left-wing constituency. Hence, my model highlights the importance of timing in the announcement of a party's leader. In the absence of an institutional regulation of this timing, the efficiency of primary elections is weakened. This happens, in particular, if only one party holds primary elections or if one party sets them first.



Guns, Gems, and Steal
Revisiting the Political Resource Curse: An Ideological Blessing?
Mariela Szejnfeld Sirkis
Columbia University & Sciences Po

Sirkis' paper examines the dynamics of the resource curse, a paradox by which countries abundant in natural resources tend to perform more poorly on indicators of economic development than their less well-endowed peers. The existing literature on the subject ascribes various causes to the resource curse, including the underdevelopment of other economic sectors and corrupt institutions. This paper builds on the literature by investigating the effects of windfall government revenues from natural resources upon leadership and politics in the affected country. By looking closely at ideology and public spending, the author argues that such windfalls following events like oil discoveries can indeed increase general welfare, as well as mitigate moral hazard when leadership is highly ideological. Although the results of the author's OLS specifications do not fully support the theoretical hypothesis, we believe this paper creates a strong foundation for future research. As the author notes, further analysis on the subject would do well to examine a larger dataset of elections with improved controls for ideology. This may indeed bring about results in line with the findings of the theoretical model. -A.C.

Economists and political scientists generally perceive natural resource wealth as both a social and an economic curse. Although the economic ramifications have been investigated, disagreement remains on whether there is also a political curse, and little has been argued about the role of leaders in dealing with it. Ironically, most countries affected by the resource curse are those whose institutions have failed to take root, yet they stand out for the strong leadership of local politicians. With this in mind, how are windfalls of resources beneficial to society? Do benefits and allocations vary with characteristics of leadership, such as ideology or degree of populism, all other factors held constant?

Some of the earliest empirical evidence for the existence of a social resource curse was provided by Barro (1999) and Ross (2001), who claimed to have found a negative relationship between a country's level of democracy and the share of fuel exports in its GDP. More recently, scholars have argued that some natural resources, such as oil, may fuel internal armed conflict by provoking competition over resources between groups (Collier and Hoeffler 1998 and 2004, Reno 1999, Garfinkel and Skaperdas 2007, Caselli and Cunningham 2009, Blattman and Miguel 2010, Acemoglu et al. 2010). Little has been said, however, about the effect of windfalls on leadership and issues of political agency: how do oil discoveries affect representatives' ability to exploit their political power and appropriate resources for themselves at the voters' expense? How does a windfall of resources affect voters' readiness to discipline politicians through the implicit incentives that elections offer? These are fundamental questions given that corruption, political patronage, and 'populist policies' have been blamed for preventing international aid from helping lagging countries deal effectively with underdevelopment. Through a tailored household survey, Vicente (2009) shows that the discovery of oil on the island of São Tomé and Príncipe was associated with a significant rise in perceived corruption, relative to the control island of Cape Verde. Moreover, Caselli and Michaels (2009) show that oil discoveries in Brazilian municipalities have a positive impact on public spending, but little effect on the quality of public good provision, and suggest this might be due to rent-seeking and corruption. In this paper, I argue that windfall government revenues following events such as oil discovery can increase general welfare and decrease the moral hazard effect of an exogenous increase in revenue, when politicians in office are highly ideological.


I take as a departure point both Brollo, Nannicini, Perotti, and Tabellini's (2010) model and Persson and Tabellini's (2000) model of political agency. By including a measure of ideology in the incumbent's utility function, and by allowing for the possibility of ideological spending by candidates, I show that windfalls can improve the utility of non-pivotal groups within a society, as opposed to increasing political rents. I define ideological spending as public spending that is not directly aimed at increasing the incumbent's probability of reelection. This model can be useful for understanding how the allocation of resources varies across different political regimes, as well as political practices such as clientelism. For my empirical analysis, I focus on the case of the Brazilian states of Rio de Janeiro, São Paulo, and Espírito Santo.

The outline of the paper is as follows: in section 2, I develop my conceptual framework; in section 3, I derive its most relevant empirical implications; in section 4, I focus on the details of the empirical analysis in Brazil; and in section 5, I conclude and develop strategies for further research.

A Model of Ideological Spending

In this section, I build on the 'career concerns' model developed by Persson and Tabellini (2000), as well as on Brollo, Nannicini, Perotti, and Tabellini's (2010) 'selection of politicians' model, to estimate the effect of candidates' ideology on rent-seeking and probability of reelection following the discovery of natural resources. I update their framework to include a measure of the incumbent's 'degree of ideology', ideological spending, and the effect of spending on the incumbent's ego rents (i.e., non-monetary exogenous benefits from being in office). I also include a term to account for how much candidates care about the general well-being of all their constituents. For simplicity, I assume only two periods (t = 1, t = 2). I refer to the politician in office as the incumbent governor. I consider two types of candidates, type H and type L, corresponding respectively to a highly ideological candidate and a lowly ideological one. The population consists of two sub-groups, i and j = −i, with i > j. The former assembles all politicized and politically active individuals and the latter,


a minority group of unpoliticized and politically inactive individuals. Therefore, I assume members of group j are non-pivotal for reelection. Moreover, only members of i benefit from the public good, so their utility increases when g_i increases, while that of group j does not. The preferences of members of group i in each period are:

(1)

$W^t = g_i^t$

so that voters only care about the public good. Thus, a higher value of g_i^t increases the candidate's probability of being reelected, although it does not increase the utility of group j. In t = 1 an incumbent governor sets policy for that period. Elections are held between the two periods. In the second and last period, t = 2, the elected governor sets a new policy. In both periods, a budget of fixed size τ can be allocated to three alternative uses: a public good g_i that benefits members of group i, private rents r_p that benefit only the governor, and ideological spending r_j that benefits only members of group j. Thus, we assume:

(2)

$r^t \equiv r_p^t + r_j^t$

I define ideological spending as spending that is not aimed at increasing the incumbent's chances of reelection. My assumption is that the more ideological the incumbent is, the greater the percentage of τ he will allocate towards the minority group j, even though doing so jeopardizes his chances of reelection. The discovery of oil allows the ideological incumbent to increase r_j without needing to diminish his chances of reappointment by decreasing g_i. In other words, the increase in revenue allows the governor to at least keep constant the probability that he will be reelected, without sacrificing the welfare of group j. Hence, in my model, there is a lower correlation between public spending and probability of reelection for type H incumbents than for type L incumbents. It is as if being highly ideological were 'more costly' than being little ideological. The advantage of including ideological spending r_j as part of total rents r^t is that it allows us to consider both the case in which said spending occurs officially, for example through a social program, and the case in which it occurs unofficially, as with clientelistic practices.

(3) $g_i^t = \tau - r_p^t - r_j^t$, or equivalently $r_j^t = \tau - r_p^t - g_i^t$,

where the policy can be thought of as the rents r_p captured by the governor in that period plus the public good provided to i, while ideological spending r_j is residually determined from the budget constraint. The utility functions of the governor in office in periods 2 and 1, respectively, are:

(4) $V_2 = g_i^2/\tau + \alpha r_p^2 + R(U_j)$

(5) $V_1 = g_i^1/\tau + \alpha r_p^1 + R(U_j) + p V_2$

where p is the probability of being reelected as perceived by the incumbent in period 1 when setting the optimal rents r_p and r_j, and the parameter α measures the extent to which the candidate values private rents, taking into account the utility loss he will suffer if he is caught. I assume that 0 ≤ α ≤ 1, so that the higher α is, the lower the transaction costs of rent appropriation. The term g_i^t/τ indicates that, regardless of their type, incumbents also care about general well-being: providing more public goods increases their utility, as they are perceived as being 'better' representatives. A type H incumbent's utility depends on the utility of group j through R, with:

(6) $R = U_j \cdot \rho$

(7) $U_j = r_j^t$

where ρ stands as a measure of how ideological a candidate is (i.e., his 'degree' of ideology) and is such that 0 ≤ ρ ≤ 1. The more ideological the governor is, the higher his ρ. In other words, following Dickson and Scheve's (2003) model of identity behavior in electoral competition, the governor's ego rents depend positively on the welfare of his group of ideological predilection, j. In period 1, public spending towards group j increases the governor's utility through R but also decreases it, because it diminishes his chances of being reelected. The higher ρ is, the stronger the incumbent's preference for increasing group j's utility rather than ensuring reelection in period 1. It follows that R is 0 for type L incumbents. I assume the 'level of ideology' ρ is a random variable uniformly distributed with density ξ. The realization of ρ is drawn from one of two alternative distributions, depending on the incumbent's level of ideology: for an individual of type J, the mean of ρ is 1 + σ_J, where J = H, L and σ_L = σ = −σ_H, with 0 < σ < 1. Specifically, as ρ is drawn from a uniform distribution with density ξ and mean 1 + σ_J:

(8)

$\Pr[\rho > X] = 0.5 + \xi\,(1 + \sigma_J - X)$

Hence, an individual of type H is on average more ideological, but in specific instances the actual ideological level of a type H incumbent could be lower than that of a type L incumbent. For equal levels of α, type H incumbents will take more total rents than type L incumbents. This is also true when α < ρ: very ideological candidates will maximize ideological rents only. As incumbents of type L capture no ideological rents, they are on average more likely to deliver a higher g_i if reelected to office. Voters do not know the value of private or ideological rents, but they are aware that the lower total rents are, the higher g_i is. Because they know that more ideological governors allocate a smaller part of the budget to the public good, they will vote for less ideological candidates. The exact realization of ρ only becomes known to voters once the candidate is elected to office, at the end of period 1. I assume J = H, L stands for an objectively observable variable, such as party affiliation, which is known to everyone beforehand. At the time of elections, voters also observe g_i^1, but they do not observe private and ideological rents. This setting reflects a central feature of conflicts of political agency in career-concern models: the voters' imperfect information about the candidates' degree of ideology incentivizes the incumbent to increase g_i, so as to appear less ideological and more moderate. Although party affiliation might give voters a hint about how ideological a candidate is (a centrist candidate will be perceived as less ideological than a leftist candidate), it only constitutes a partial and imperfect measure of ideology. I assume the candidate's type J = H, L to be exogenous. Finally, we also assume that neither personal nor ideological rents can exceed an upper bound that corresponds to the size of the budget:

(9)

$r_p^t \le \tau \quad \text{and} \quad r_j^t \le \tau$

The timing of events is as follows:

• At the beginning of period 1, the incumbent sets r_p and r_j so as to maximize his utility.

• Elections are held. When voting, voters observe g_i but do not observe rents, which are relevant to them only insofar as they reduce the public good. Voters know the type J = H, L of the candidates but do not know their true ideological level ρ.

• In period 2, the elected governor sets r_p and r_j.

Analysis

I proceed by solving the model backwards, focusing on the case of incumbents for which ρ > α. In the last period, the governor maximizes rents, as there is no subsequent election at which voters could punish him. The candidate of type L will maximize private rents, as he is indifferent to ideological rents. Candidates of type H maximize (4) subject to the budget constraint (3):

$\max\; V_2 = g_i^2/\tau + \alpha r_p^2 + \rho r_j^2 \quad \text{subject to} \quad g_i^2 = \tau - r_p^2 - r_j^2$
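Because the period-2 objective is linear in the allocation, the optimum is a corner solution. The following Python fragment is a minimal numeric check, with illustrative parameter values of my own choosing; it assumes ρ > α and ρ > 1/τ, so that ideological rents dominate, which is consistent with the result V_H = τρ and r_p = 0 used below.

# Numeric check of the period-2 corner solution for a type H incumbent.
# V2 = g_i/tau + alpha*r_p + rho*r_j is linear in the allocation, so the
# whole budget goes to the use with the largest coefficient.
tau, alpha, rho = 4.0, 0.2, 0.6        # illustrative values, rho > alpha
coefficients = {"g_i": 1.0 / tau, "r_p": alpha, "r_j": rho}
winner = max(coefficients, key=coefficients.get)
print(winner, "takes the whole budget; V2 =", coefficients[winner] * tau)
# -> r_j takes the whole budget; V2 = 2.4 (i.e., rho * tau)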

We can now consider the voters' behavior in period 1. As far as they are concerned, the policy in period 2 is the same for both candidates, so voters only care about the candidates' degree of ideology, and they vote for the one with the lowest expected ideology, knowing that the latter will capture fewer total rents. An incumbent of type J wins against an opponent of type O if:

(10) $E(\rho \mid g_i^1, J) \le 1 + \sigma_O, \quad J, O = H, L$

where the left-hand side is the expected value of ρ conditional on the voters' observation of g_i and their knowledge of the incumbent's type J, while the right-hand side is the unconditional mean of ρ for an opponent of type O. Thus, voters will vote for the least ideological candidate, as the latter is more likely to provide a greater g_i. Based on our assumptions about voters' behavior, we can model the probability of reelection of an incumbent of type J running against an opponent of type O as follows:

(11) $p_{JO} = 1 - (1 - g_i^t/\tau)^2$

This specification for the probability of reelection is coherent with our model's assumptions: the population will vote for the candidate who provides a greater amount of public good g_i relative to the budget size τ. Hence, reelection is an increasing function of the public good provided, but with diminishing marginal probability: each extra unit of g_i that voters receive increases the probability that they will reelect the incumbent by less than the previous unit did. Figure 2 shows the effect of increasing the public good on reelection. We can now discuss the determination of policy in period 1. The incumbent maximizes (5) subject to (3), taking into account the probability that he is reelected. Since for the purpose of this paper we are most interested in relatively more ideological candidates, we focus on the case in which ρ > α, implying $V^H = \tau\rho$ and $r_p = 0$.

Figure 1 (below left): The probability of reelection is a linear function of ρ as long as ρ ≤ α and is monotonically decreasing for ρ > α. Figure 2 (below right): The probability of reelection is an increasing function of the public good, with diminishing returns.
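Equation (11) is easy to tabulate. The short Python sketch below evaluates p = 1 − (1 − g_i/τ)² on an arbitrary grid to show the diminishing marginal effect of the public good that Figure 2 depicts.

import numpy as np

def reelection_prob(g_i, tau):
    # Equation (11): increasing in the public good, diminishing returns.
    return 1.0 - (1.0 - g_i / tau) ** 2

tau = 1.0
for g in np.linspace(0.0, 1.0, 6):
    print("g_i = %.1f  ->  p = %.2f" % (g, reelection_prob(g, tau)))
# Each extra 0.2 of g_i raises p by less than the previous step did.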

(13)

We can now solve for the optimal ideological rents:

(14) and (15)


The result above shows that ideological spending in period 1 is an increasing function of ideology ρ and of budget size τ. We can now show that reelection is a decreasing function of ideology ρ:

(16) (17) (18)

We can now state the main properties of the equilibrium, focusing on the effect of oil discoveries, i.e., an increase in the expected budget size τ.

Proposition 1. Both ideological and private rents are an increasing function of the budget: $\partial r_j^1/\partial\tau > 0$. This is a direct implication of (15), along with the assumption that there must be strictly positive rents at an interior optimum. Note, however, from (18) that the equilibrium probability of reelection depends only on ideology and not directly on the level of rents.

Proposition 2. Ideological spending is an increasing function of oil discovery: $\partial r_j^t/\partial\tau > 0$. This also follows from (15): the greater the budget, the more the candidate can spend on increasing U_j without compromising his chances of reelection. As we assume candidates' first priority in period 1 is to ensure reelection by providing a higher g_i, an oil discovery allows the candidate also to increase the welfare of group j in period 1, even though this does not increase his chances of being reelected.

Proposition 3. For type H incumbents, reelection is a decreasing function of oil discovery: $\partial p_H/\partial\tau < 0$. This result follows from (18): both a higher ρ for a fixed τ and a higher τ for a fixed ρ lead to a lower probability of reappointment. Intuitively, and in line with Proposition 2, ideological candidates will take the oil discovery as an opportunity to increase U_j instead of increasing their reelection chances, while non-ideological candidates would invest the windfall in private rents or in increasing their probability of reelection.
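Since the closed-form expressions (13)-(18) are not reproduced above, the following Python grid search is a stand-in that solves the period-1 problem numerically under the stated functional forms, assuming ρ > α (so r_p = 0) and a period-2 value of ρτ; the parameter values are my own. It illustrates the comparative statics of Propositions 2 and 3: as τ grows, optimal ideological rents rise while the equilibrium reelection probability falls.

import numpy as np

def period1_solution(tau, rho, n=20001):
    # Grid search over r_j in [0, tau] with r_p = 0 (the rho > alpha case).
    # V1 = g/tau + rho*r_j + p(g)*V2, with g = tau - r_j,
    # p(g) = 1 - (1 - g/tau)^2 and V2 = rho*tau.
    r_j = np.linspace(0.0, tau, n)
    g = tau - r_j
    p = 1.0 - (1.0 - g / tau) ** 2
    v1 = g / tau + rho * r_j + p * rho * tau
    k = int(np.argmax(v1))
    return r_j[k], p[k]

rho = 0.6
for tau in (1.0, 2.0, 4.0):
    rj_star, p_star = period1_solution(tau, rho)
    print("tau = %.0f:  r_j* = %.2f,  reelection prob = %.2f"
          % (tau, rj_star, p_star))
# As the windfall tau grows, r_j* rises while the equilibrium reelection
# probability falls, in the spirit of Propositions 2 and 3.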

I will now provide empirical support for the propositions of my theoretical model.

Empirical Testing

In this section I describe the data and the econometric strategy I use for my empirical analysis, as well as my findings. I focus on the states of Rio de Janeiro, Espírito Santo, and São Paulo, where most oil discoveries in Brazil have taken place. States in Brazil are semi-autonomous, self-governing entities with complete administrative branches and relative financial independence. For my analysis, I use data from gubernatorial elections from 1986 to 2012. I select Brazil as a case study because of its strong federal nature, for which I can assume the general level of ideology of politicians to be relatively high.

I construct a reelection score ranging from 0 to 2.5. A description of the criteria used to construct the score can be found in the appendix, section 7.2. Given that reelections are rare, the score allows for greater variability in my dependent variable. I furthermore obtained data on public spending from the Instituto Brasileiro de Geografia e Estatística (IBGE). Given the various changes of currency from 1986 up to the adoption of the real in 1994, I normalized public spending by expressing it in terms of the respective year's GDP in each state; this also allows me to control for differences in wealth between states and for inflation. Finally, I measure oil discoveries through an ordered variable taking the values 0, 1, or 2, according to whether there were none, one, or more than one oil discoveries in the year of the gubernatorial election or the previous year. I used the dataset on giant oilfield discoveries by Lei and Michaels (2013), which in turn builds upon previous datasets from Horn (2003, 2004), Halbouty et al. (1970), the Oil and Gas Journal Data Book (2008), and Ross (2010). For the purpose of this paper, I have centered my analysis on oil discoveries in Brazil, although given the available data it would be conceivable in future research to conduct a cross-country analysis, as well as to control for variables such as the quantity and quality of oil discovered. Thus, my dataset consists of a time series of gubernatorial election years in each state and three main regressors: a measure of oil discoveries, a measure of public spending, and an interaction term of the two. I use state fixed effects to control for differences within each state. Given limitations of time and available data, and my exclusive focus on the Brazilian case, my dataset contains only 21 observations. Although the sample size is too small to allow for statistically significant results, I obtain some interesting findings that are worthy of discussion and that set the stage for more thorough research. I use an OLS and a rare-events logit specification:

(i) $reelec\_prob_{it} = \alpha_{it} + \beta_1 Oil_{it} + \beta_2 Spend_{it} + \beta_3 (Oil \times Spend)_{it} + \varepsilon_{it}$

(ii) the same right-hand side, estimated by rare-events logit (relogit) on the dichotomized score

As oil discoveries are random and rare, I use a rare-events logit in my second specification. To dichotomize the dependent variable, I recoded the score as a dummy, with every value up to and including 1.25 recoded as 0 and every value above that as 1.
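For readers who want to reproduce the setup, the sketch below shows how specification (i) could be estimated with statsmodels on a toy panel. The data frame, its values, and its column names are hypothetical stand-ins for the author's variables; the rare-events logit of specification (ii) is only indicated in a comment, since King and Zeng's relogit estimator is not part of statsmodels.

import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: one row per state-election year; all values are invented.
df = pd.DataFrame({
    "state": ["RJ", "RJ", "RJ", "SP", "SP", "SP", "ES", "ES", "ES"],
    "score": [0.5, 1.5, 2.0, 0.0, 1.0, 2.5, 0.5, 2.0, 1.0],
    "oil":   [0, 1, 2, 0, 1, 0, 1, 2, 0],     # 0, 1, or 2 (= more than one)
    "spend": [0.12, 0.15, 0.18, 0.10, 0.14, 0.13, 0.11, 0.16, 0.12],
})

# Specification (i): OLS with the oil x spending interaction and state
# fixed effects. Specification (ii) would dichotomize the score at 1.25
# and fit a rare-events logit (relogit) instead.
model = smf.ols("score ~ oil * spend + C(state)", data=df).fit()
print(model.params)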


Tables 1 and 2, with the results for each of the specifications, can be found in the appendix, section 7.1. Table 1 estimates the results for the OLS regression using the ordered reelection score ranging from 0 to 2.5. First, it is important to note that adding the fixed effects produces barely any change in the coefficients of my variables of interest. I find that oil discovery and public spending each have a negative effect on the probability of reelection on their own, but that the interactive effect of oil discovery and public spending is positive. In Table 1, the comparison of columns (1) and (2) provides an interesting finding: without the interactive term, oil discovery has barely any causal effect on reelection. However, once we control for the interactive term, the coefficient on the oil discovery variable increases by .933 in absolute terms and becomes negative. This suggests that if an oil discovery does not come with a commensurate increase in public spending, voters will punish incumbents at the next round of elections. We cannot reach any conclusions about public spending on its own. Table 2 shows the results of the OLS and rare-events regressions using the dichotomous score. Although the rare-events logit does not yield more significant results, the signs of its coefficients coincide with those of the OLS regression.

It is difficult to assess the validity of my theoretical model against the empirical results. The fact that the interaction of oil discovery and spending increases the probability of reelection calls into question the accuracy of my Proposition 3, even though I find a negative effect of oil discovery on its own. A theoretical model distinguishing between the initial budget size and the extra budget brought in by the oil discovery might prove to have greater explanatory power. Another point to bear in mind is that I assume members of group j to be non-pivotal. In reality, it is hard not to incentivize individuals to vote for a candidate who increases their well-being. If ideological spending takes the form of official policies targeting group j (thus technically counting as 'public spending'), it would seem plausible that oil discoveries would increase the incumbent's chances of being reappointed, as the budget would be allocated either to g_i or to r_j (recall that for ρ > α, r_p = 0). In this sense, to the extent that the general population punishes an incumbent who does not increase spending following an oil discovery, it would be incentivizing ideological incumbents to operate through institutional channels, that is, to increase the welfare of j through official policies instead of clientelism. Members of group i can observe official policies, while they cannot observe clientelism to the same degree (and in any case the two are not interchangeable, as clientelism carries a pejorative connotation). Yet, as long as they perceive an increase in spending, even if the windfall does not increase their own welfare, voters of group i might be willing to reelect the incumbent. If ideological incumbents are aware that a windfall not followed by an increase in spending decreases their probability of reelection, the oil discovery might act as a mechanism for greater institutionalization of spending. Finally, it is interesting to note that the constant in column (2) of Table 1 is higher than 1, suggesting that the probability of reelection is already quite high, which could bias the results.

Conclusion

I have readdressed the political resource curse from a new perspective, by analyzing equilibrium levels of rents and the probability of reelection of highly ideological candidates. By allowing candidates to invest part of the budget in ideological spending, i.e., spending whose purpose is not to increase chances of reelection, I have tried to show that a windfall of resources does increase the welfare of society when governors are relatively more ideological. This issue is relevant because political corruption is often cited as the reason why lagging countries that receive additional funds from international organizations do not manage to escape their economic backwardness. However, if my findings are true, perhaps funding should be directed towards states with more ideological leaderships. For the purpose of this paper, I have interacted ideology with two different political processes: the effect of oil discoveries on budget allocation, and their effect on reelection. I find that relatively more ideological candidates capture less private rents for themselves and more ideological rents, but provide less public good. In comparison, relatively low


ideological candidates will capture more private rents (as long as the cost of doing so is not too high) but provide more public good. Although the amount of public good provided is higher for low-ideology candidates, I believe these findings make a case for questioning what we define as political corruption, and the meaning we grant to 'populist' spending. Moreover, I find that the probability of reelection of ideological candidates decreases with oil discovery, as if being more ideological were, in a sense, more expensive than being little ideological.

Given these theoretical results, I have investigated how my findings apply to the specific case of state elections in Brazil. The small sample size and noise in the data prevent me from reaching substantial conclusions. The interaction of oil discoveries and spending increases the probability of reelection (unlike what my theory predicted), but the significant difference in the effect of oil discovery with and without the interaction term suggests there might be a mechanism at play worthy of further research, one that my theoretical model might not have grasped because it collapses the initial budget and the windfall into one single term. Finer empirical research would also require that I control for ideology: although the effect of the interaction term is positive, it might be smaller for highly ideological candidates than for low-ideology ones. This seems plausible to the extent that, ceteris paribus, the probability of reelection appears to be high in my sample.

A valid option for increasing the number of observations and obtaining more statistically significant results would be to include legislative candidacies as well as gubernatorial ones. This, however, would be to the detriment of my theoretical model. To control for ideology, an interesting option would be to replicate the empirical test for the United States (another strongly federal country) and evaluate governors' ideology through a systematic assessment of their State of the State addresses. Alternatively, and close enough to the concept of ideology, one could assess the level of populism and see whether it increases following an oil discovery. Finally, controlling for the quality and quantity of oil discovered, following Lei and Michaels' (2013) approach, might also help me obtain more robust results.

The empirical literature on the resource curse has consistently emphasized that resource-dependent economies and windfall revenues seem to lead to highly dysfunctional state behavior, particularly large public sectors and unsustainable budgetary policies. Following David Newbery (1986), Robinson, Torvik, and Verdier (2006) argue that economists have had a missing element in their interpretation of the poor performance of resource-abundant countries, since they assume a world with no government while government behaviour is actually the key element. A more careful look into societies' structure, and into who the primary beneficiaries of exogenous revenue are, might help us gain new insights into the roots of the problem.



Rue in Rio, Sorrow in São Paulo
(Unsuccessfully) Predicting the Winner of the 2014 FIFA World Cup
Linkun Wan
Columbia University

This paper was written several months before the 2014 FIFA World Cup. When our staff first read this article last spring, we were impressed by its solid model construction and bold theorizing, as well as by the topic itself. Amid sheaves of papers on drier subjects, it was a relief to read a paper on something as universally beloved and understood as soccer. It was even more enjoyable to consider it at the level that the author considers the sport, by examining hundreds of data points to create a predictive model. Going into the World Cup, this paper's prediction of Brazilian victory did not seem far-fetched. Brazil stood a very good chance of taking the trophy on their own turf, in fact, before a crushing (and somewhat shocking) defeat to the German national team put those dreams to rest. Reconsidering the piece this fall in light of this, many of our staff felt that the article should not be included, despite its merits, for making an incorrect prediction. We ultimately decided, however, that the theory and methodology were too worthwhile to abandon. By publishing this piece, we also hope to emphasize the limitations of regressions and models, no matter how bold or well-constructed they are. The real world (or the German national team, as the case may be) has a funny way of defying even our most carefully constructed predictions, a point this article well illustrates. –V.S.

An estimated 26.29 billion people cumulatively watched the matches of the 2006 FIFA World Cup. As the world's most widely viewed sporting event, the World Cup has attracted not only the attention of hundreds of millions of football fans around the world through massive media coverage, but also the interest of economic researchers seeking an in-depth analysis of the sport. The tournament's history offers many interesting patterns. We see a clear difference in performance among countries, with the championship consistently concentrated among a few nations: a total of nineteen World Cup championships have been awarded to eight different countries, all of which come from only two continents, South America and Europe. Brazil, which is also the host country for the coming 2014 tournament, has won five times and is the only country to have played in all nineteen tournaments. Another interesting phenomenon worth exploring is the so-called 'host nation effect' on team performance, a theory supported by the fact that six of the eight victor countries have won a championship in their home country, Brazil being an exception. Penalty shoot-outs, first introduced to the final tournament of the FIFA World Cup in 1978, have also been observed as a potential factor influencing the outcomes of games: Germany has won the most penalty shoot-outs, with a 100% win ratio, in sharp contrast to England, which has never won one. With the expansion of interest in the FIFA World Cup, economic researchers have been employing



standard economic tools to analyze the sport, thanks to the availability of data such as the FIFA World Ranking, team performance during games, and information about the players. The economic effects of such a major tournament are numerous. People bet in markets on the winners of each game, as well as on the overall champion. Moreover, online market games, so-called 'fantasy markets', have enjoyed considerable popularity recently. People's fascination with match predictions has also led to economic research on team performance in soccer and predictions of World Cup outcomes. This paper proposes two regression models identifying the variables influencing nations' performances during the FIFA World Cup, based on data from the five most recent tournaments. The findings reveal that FIFA ranking points, host advantage, and tradition are significant factors. This paper also forecasts the complete country rankings of the 2014 FIFA World Cup based on the two models.

Model, Data and Method

I start by looking for specific determinants that could impact a team's success during a FIFA World Cup tournament, provided that the team has qualified for the final games. Andersson, Edman, and Ekman have suggested that experts' predictions are less accurate in forecasting World Cup outcomes than simply applying the FIFA Ranking to the teams. A team's relative strength is captured by its ranking points, and thus a stronger team should perform better. In Torgler's (2004) research on the 2002 Korea/Japan World Cup, a team's FIFA Ranking and home advantage are found to be two important factors affecting its performance, together with some other in-game factors. In Hoffmann, Lee, and Ramasamy's study, soccer tradition, captured by whether a country has hosted a World Cup before, is also found to be important in determining a team's soccer success. This point is supported in Torgler's (2006) paper, in which he concludes that whether a team has hosted a World Cup before also influences its performance during a World Cup. No one has done a formal study on the topic of age and experience, but ESPN researcher Paul Carr has observed in his article that the average age of winning teams tends to be older and that their players have more World Cup experience. Therefore, it seems reasonable to include the aforementioned variables in the regression.

Since FIFA only started publishing its ranking points in 1993, I take data from the 1994 to 2010 World Cup tournaments for my econometric regression analysis. Before introducing the dependent variable, it is necessary to explain the structure of recent World Cup tournaments and the procedure that determines the champion. The current final tournament features 32 national teams and consists of two stages: a group stage followed by a knockout stage. All 32 teams (except for the host nation) are seeded based on their FIFA World Rankings and their performance in recent World Cups, and are

drawn into eight groups. Each group plays a round-robin tournament, and the top two teams from each group advance to the knockout stage. In the 1994 World Cup, which is also in my analysis, the group stage had 24 teams divided into six groups of four; the winners and runners-up of each group, as well as the four best third-placed teams, advanced to the knockout stage. The knockout stage is a single-elimination tournament in which teams play one-off matches, with extra time and penalty shoot-outs if the score is level after 90 minutes. It begins with the round of 16 and ends with the final.

The dependent variable of this study is nations' performance during past FIFA World Cup tournaments. In order to best approximate this performance, I use the following equation to derive a score that describes how a team did during a given tournament:

Performance Points = 3 × wins + 1 × draws + number of matches + penalty shoot-out adjustments + 3rd & 4th place adjustments

In order to compensate teams that advanced to the next stage, awarding one point for each game played helps eliminate some differences across groups and better captures how the team ranked in that tournament. The penalty shoot-out adjustment gives one more point to the winning team in that match: in the FIFA World Cup, a knockout-stage game decided by a penalty shoot-out is counted as a draw, so in order to distinguish the winning team while still acknowledging the losing team's effort in tying the match, I award one extra point to the winner. Last, the losing teams in the semi-finals play a match to determine the third and fourth places, and thus play the same number of matches as the champion and the runner-up (7 games in total). Under certain combinations of match outcomes, the second and third places could reverse their positions if we calculated their scores without any further adjustment. For example, if the runner-up wins its semi-final in a penalty shoot-out and loses the final in regular play to the champion, while the third-placed team loses to the champion in the semi-final and wins the third-place match in regular play, then the third-placed team would have one more point than the runner-up: the runner-up gets 2 (semi-final) + 0 (final) + 2 (number-of-matches adjustment) = 4, whereas the third-placed team gets 0 (semi-final) + 3 (third-place match) + 2 (number-of-matches adjustment) = 5. If we take one point away from the third-placed team, the two teams at least tie for the position. Instead of adding more points to the top two teams, the third and fourth place adjustments remove the award for the extra game (thus −1 for both teams), so as not to skew the score distribution further upward and create larger gaps between the top teams. There is one instance in which the order is still reversed after applying the adjustments, but I choose not to adjust the third-place score further down, in order to keep scores evenly spread in most other cases and not to mix the third and fourth places with the teams that lost in the quarter-finals. Once applied, these adjustments help maintain a correct order for the top four teams in most scenarios.

As a result, the World Cup Performance Points give an overall representation of a team's performance during any one World Cup tournament. Compared with other ranking systems, this scheme is a better representation in several ways. The real ranking of a tournament cannot distinguish between teams that lose at the same stage, so half of the teams end up tied because they lose in the group stage. Alternatively, one may take the real ranking and adjust it by round-robin points, goals scored, and so on, to determine a unique ranking order for every team. This is a better scheme than the previous one, but it has issues with earlier tournaments in which only 24 teams, rather than 32, played: such a system has different upper bounds on the scores, which creates problems in regression analysis. With the Performance Points scheme applied to the 1994-2010 World Cups, the teams with the worst performance, those that lost all their games, have a score of 3 for the three games they played. The best teams, the champions, consistently score between 25 and 28, depending on their specific performance during that tournament. Teams in the 1994 World Cup received points similar to those in more recent World Cups because of the similar format (groups of four teams and a knockout stage with 16 teams). A table (not reproduced in the text) lists the Performance Points by tournament. We can observe from the data that most teams are correctly ranked by the Performance Points, with only one reversed order and a few tied positions. In the 2006 World Cup, Germany ranks higher on Performance Points than France, largely due to France's two ties in the group stage. In the 2010 World Cup, Spain and the Netherlands are tied for the top position on Performance Points, due to the Dutch team's phenomenal performance before the final game. However, the discrepancies between the Performance Points ranking and the teams' real order are small, and the scheme can thus be used as a proxy for teams' overall performance.

The first independent variable is the FIFA/Coca-Cola World Ranking, published on the FIFA website (www.fifa.com). For each tournament, I use the ranking points most recently published before that tournament as the most relevant measure. Currently, according to its published procedure, FIFA uses a scheme that gives teams points (again, three for a victory, one for a draw, and zero for a loss) for each match they participate in, weighted by the importance of the match, the strength of the opposing team, and the average strength of the two confederations the teams are from. FIFA considers all matches played in the past 48 months, giving more weight to more recent games. This method provides a good indication of a team's strength, with stronger teams having higher points than weaker teams. Although sometimes criticized by fans, who claim that the score applies biased weights to different confederations or too little (or too much) weight to friendlies, it is still the most official and widely available measure of a team's strength. This measure is analyzed by many of the authors previously mentioned in this paper and is deemed significant in indicating the dependent variable of my model.

The two aforementioned simulations also rely almost solely on the ranking points as the variable. It is one of the most direct measures of a team's recent performance and highly relevant data for my study. However, the FIFA World Ranking has been updated twice since its debut in 1993, and its scale has risen significantly. To standardize the scores, I represent the ranking points as follows: adjusted ranking points = (points − minimum) / (maximum − minimum). With this method, the highest-ranked team has a score of 1, the lowest-ranked team a 0, and all other teams fall within the interval (0, 1). This also gives a better measure of a team's strength relative to the rest of the field than ordinal ranks do. In this scheme, for example, if a team leads the rest of the participants by a large score difference thanks to great performance before the World Cup, this is reflected in a large difference in adjusted FIFA ranking points, which would not be captured if we simply ranked the teams using the official rankings.
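The scoring scheme and the ranking normalization can both be written down compactly. The Python sketch below encodes one reading of the adjustments described above; the function signatures and the example inputs are mine, not the author's.

def performance_points(wins, draws, matches, shootout_wins=0,
                       third_or_fourth=False):
    # 3 x wins + 1 x draws + one point per match played, plus one point
    # per penalty shoot-out won (the shoot-out game itself counts as a
    # draw), minus one point for the 3rd/4th-place teams' extra game.
    points = 3 * wins + draws + matches + shootout_wins
    return points - 1 if third_or_fourth else points

def adjusted_ranking(points, pts_min, pts_max):
    # Min-max normalization of FIFA ranking points to [0, 1].
    return (points - pts_min) / (pts_max - pts_min)

# A champion with 6 wins and 1 draw (won on penalties) over 7 matches:
print(performance_points(wins=6, draws=1, matches=7, shootout_wins=1))
# -> 27, within the 25-28 range reported for champions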

The second independent variable shows whether a country is the host nation. Being the host gives a country many advantages. In addition to automatically qualifying for the World Cup final tournament without playing the qualifiers, the host nation enjoys the fans' cheers and familiarity with the stadiums and fields; sometimes referees may even be affected by the fans and favor host teams. Historically, host teams have performed exceptionally well. In fact, of the 19 World Cup tournaments, the host team has won 6 times. Moreover, except for South Africa in 2010, every host nation has advanced to the knockout stage, and 12 of the host teams have reached the semi-finals. A table with past host nations and their performance appears to the left. Therefore, being the host nation confers a clear advantage in a World Cup tournament that usually results in good performances, and it should be included as a variable in the analysis. In the regression I use a dummy variable Host, equal to 1 for hosting nations and 0 otherwise.

The third independent variable shows whether a country has hosted a World Cup before. Although hosting a World Cup requires years of infrastructure work and large investments, it gives the country great media coverage and helps build a soccer tradition. A country that spends several years building stadiums and other infrastructure for the World Cup is also likely to spend substantial resources on sports facilities, such as soccer fields, in its communities. This results in more people participating in the game and better talent in the future. Therefore, having hosted a World Cup may also affect a team's performance through the soccer tradition the country has built. I use a dummy variable Tradition that takes a value of 1 for countries that have hosted at least one World Cup tournament before and 0 otherwise.

The fourth independent variable I include in the regression is coach experience. It is commonly believed that having a good coach helps a team perform better. Coaches train the players before the World Cup and teach them how to play such high-level games; moreover, they assign specific players to specific positions depending on the opponent's lineup. In intense games such as World Cup finals, an experienced coach may be able to offer great insight into the opponent's strategy and the type of game the team should be playing. The famous coach Guus Hiddink is known for bringing the Netherlands and South Korea to the semi-finals in 1998 and 2002, respectively, and for taking Australia to the round of 16 in 2006, its best-ever World Cup performance. There is no doubt that coaches try to direct matches as much as they can, and experienced coaches can sometimes turn a match around and help an unfavored team defeat a stronger one. In this study, I use Coach_exp, which counts the number of World Cup matches in which a coach has participated as head coach. The data come directly from the FIFA Data Management Group. In addition to coach experience, I also introduce a variable that captures teams' experience. In competitive games, teams with more experience are perceived to dominate teams with less. Having played such games before increases a player's confidence and contributes to game-time performance.



As a result, players may peak at a certain age, and if most of a team's major players are in their prime, the team may perform better. The data I use are the average ages of players at the beginning of each World Cup tournament, which I received from the FIFA Data Management Group.

Regression Results and Prediction

In this paper I present two main models, one using a pooled OLS (ordinary least squares) technique and the other using country fixed effects. In the model selection process, I found time fixed effects to be insignificant and therefore removed them from my fixed effects model. I use these two models to compare and contrast the effects of each variable and how they differ across models. The F-test for country fixed effects concludes that these effects are significant, but due to the unbalanced data, I consider the pooled OLS model relevant here as well. I use the models to determine which factors are crucial to successful World Cup performance, and I eliminate a few variables to arrive at the best prediction models for the 2014 Brazil World Cup. The following two equations illustrate my model of World Cup success; the variables are also summarized in the table above.

Pooled OLS:

Performance points_i = β0 + β1 adj_fifa_i + β3 host_i + β4 tradition_i + β5 coach_exp_i + β6 avg_minutes_i + β7 age_i + β8 age²_i + u_i


Country fixed effects:

Performance points_it = β0 + β1 adj_fifa_it + β3 host_it + β4 tradition_it + β5 coach_exp_it + β6 avg_minutes_it + β7 age_it + β8 age²_it + γ2 E2 + … + γ60 E60 + u_it

I provide a matrix scatter plot of all the variables to show the correlations between any two of them and to address some of the multicollinearity issues in my regression. From the matrix scatter plot, we find several issues worth pointing out that may further explain the functional form used in the models. First, adj_fifa and points have a very strong positive correlation, which confirms what previous studies have found. Second, it seems plausible that points and age follow an inverted-U quadratic shape, with teams peaking somewhere between 27 and 28 years of age; this supports the functional form I use for age. However, it is notable that many teams with the same average age are still poor performers, as we can see from the cluster of teams at the bottom. Several multicollinearity issues can also be observed. First, teams with a soccer tradition (tradition equal to 1) have significantly higher adjusted FIFA ranking points, suggesting that countries that have hosted past World Cup tournaments, and thus have soccer traditions, typically perform well in the qualifiers, friendlies, and so on. Second, avg_minutes also seems to be positively correlated with adj_fifa. This too is understandable, because more experienced teams were good performers in previous World Cups, and this translates into good overall performance in the subsequent years (such as in the qualifiers). This correlation, however, is not attributable to those teams' performance in the previous World Cup itself, because only the last World Cup falls within the FIFA Ranking's 48-month assessment period, and those games are given minimal weight compared to the more recent qualifiers and friendlies. Third, avg_minutes and age are positively correlated because, to accumulate experience, players typically have to participate in multiple World Cup tournaments, which means that by the time players have accumulated substantial experience, they may be 8 or 12 years older than players making their debuts.
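The two specifications translate directly into statsmodels formulas. The Python sketch below runs them on synthetic data, since the paper's dataset is not reproduced here; the variable names follow the text, while all numeric values in the data-generating step are arbitrary.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 160  # roughly five 32-team tournaments
df = pd.DataFrame({
    "country": rng.choice(list("ABCDEFGH"), size=n),
    "adj_fifa": rng.uniform(0, 1, n),
    "host": rng.integers(0, 2, n),
    "tradition": rng.integers(0, 2, n),
    "coach_exp": rng.integers(0, 15, n),
    "avg_minutes": rng.uniform(0, 1200, n),
    "age": rng.uniform(24, 30, n),
})
# Synthetic outcome; the coefficients here are invented.
df["points"] = (3 + 8 * df["adj_fifa"] + 7 * df["host"]
                + 3.4 * df["tradition"] + rng.normal(0, 2, n))

rhs = ("adj_fifa + host + tradition + coach_exp"
       " + avg_minutes + age + I(age**2)")
pooled = smf.ols("points ~ " + rhs, data=df).fit(cov_type="HC1")  # robust SEs
fixed = smf.ols("points ~ " + rhs + " + C(country)", data=df).fit()
print(pooled.params[["adj_fifa", "host", "tradition"]])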

We also have to note that higher age is necessary but not sufficient for higher avg_minutes, because many teams do not qualify until many of their players have grown older and gained match experience elsewhere. Overall, there are few multicollinearity issues: only the correlation between adj_fifa and tradition exceeds 0.5 (0.55), and it will therefore be examined with an F-test of joint significance in my regressions. The regression results are displayed in the table to the left.

In regressions (1)-(3), pooled OLS is used, and country fixed effects are used in regressions (4)-(8) (in the tables, *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively). I made one adjustment to the data before running the fixed effects regression: Yugoslavia participated in the 1998 World Cup but then broke up into several countries, so I adjusted the names and treated Yugoslavia (1998), Serbia and Montenegro (2006), and Serbia (2010) as the same country. Besides this name-change issue, the fixed effects regression also suffers from unbalanced data: since the countries that qualify for the World Cup change every time, there are many missing data for participating countries. However, the F-tests for country fixed effects in regressions (5)-(8) suggest that such effects are highly significant, so I still include these regressions in the results as comparisons to the pooled OLS results.

Regression (1) suggests that adj_fifa is highly significant: an increase of 1 results in an increase in Performance Points of more than 8. In the context of the World Cup, this means that the highest-ranked team according to the FIFA World Ranking will on average get 8 more points than the lowest-ranked team. Taking into consideration the adjustments for the number of matches played, 8.23 points translate into roughly three more wins if the same number of games is played, or two extra wins in the elimination stage. That, of course, is the difference between the two extremes; for other teams, adj_fifa plays some role in determining differences in performance, but not on such a large scale. Host, in turn, is also highly significant, with a magnitude almost as large as the difference between the strongest and weakest teams: being the host nation provides the team with two more wins on average. Since the entire World Cup tournament for any team is no more than seven games, and half of the teams are eliminated after only three, this advantage is very large and supports previous research results. In addition, having a soccer tradition results in an increase of 3.44 points, and the difference is highly significant. This means that teams that have hosted previous World Cups perform better on average than teams that have not.

In regression (2), I add the two variables coach_exp and avg_minutes. Contrary to my prediction, both coefficients are negative, meaning that coach experience and player experience are negatively correlated with a team's performance. For coach experience, one explanation is that coaches are given new contracts only because they have done well in previous World Cup tournaments; coaches with more World Cup experience may therefore be unable to match their previous achievements, precisely because they must have done very well to continue coaching. Player experience can suffer from the same problem: players who contributed to success at the last World Cup are kept on the team but may not reproduce the same performance. However, neither coefficient is significant, so they should not be included in the final prediction model. In regression (3), players' age is taken into consideration, but age and age² are not significant either individually or jointly. Surprisingly, the coefficient on age² is positive, suggesting a convex quadratic relationship, but as the statistics are not significant, the results need not be analyzed in much detail. All the previous results use heteroskedasticity-robust standard errors. I therefore eliminate the other variables and take regression (1) as the best pooled OLS regression. Among the fixed effects regressions, equation (4) shows that, in contrast to equations (1)-(3), FIFA Ranking points are nega-



tively correlated with a country's performance during a World Cup. This means that certain countries are traditionally better performers than others and also always have higher FIFA Ranking points. An example of such a country is Italy, arguably one of the best teams historically in terms of FIFA Ranking and a consistently strong World Cup performer. Yet Italy won its 2006 title as the 11th-ranked team by FIFA points, and when it was ranked fifth by FIFA points in 2010, it finished miserably, eliminated in the group stage. Therefore, for a given country, a higher FIFA ranking does not necessarily translate into better performance. We also need to note, however, that the coefficient on adj_fifa is not significant in this regression, so I eliminate it in several other regressions as well as in the final fixed effects prediction model for World Cup success. Interestingly, in the regressions using fixed effects, avg_minutes becomes significantly negative: an increase of 100 minutes

Fall 2014

for the squad will result in a loss of more than 2 points. This may be affected by the poor performance of several experienced teams such as Italy in 2010, but can also mean that players do perform worse four years later, after a good World Cup tournament previously. Once again, coach experience and players age do not play a significant role in teams performance, the same result as pooled OLS regressions. As previously mentioned, this dataset for fixed effect regression is unbalanced, since it is not always the case that the same teams get qualified for the World Cup final. In fact, Brazil is the only team that qualified for all previous 19 World Cup tournaments. Moreover, the reason that a country does not participate in the World Cup might be correlated with idiosyncratic error—unobserved factor that change over time and affect FIFA World Cup performance (such as other measures of team quality than FIFA Ranking), the results have potential risk of biases despite significant country fixed effects. Columbia Economics Review
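To make the two estimation strategies concrete, the sketch below shows how such regressions could be run. It is a minimal illustration only: it assumes a pandas DataFrame wc with one row per team-tournament and columns named after the paper's variables (points, adj_fifa, host, tradition, avg_minutes, country); the names and settings are assumptions, not the author's actual code.

    # Minimal sketch, assuming columns named after the paper's variables.
    import statsmodels.formula.api as smf

    # Regression (1): pooled OLS with heteroskedasticity-robust (HC1) errors.
    pooled = smf.ols("points ~ adj_fifa + host + tradition",
                     data=wc).fit(cov_type="HC1")

    # Country fixed effects via least-squares dummy variables: C(country)
    # absorbs each team's average level, so slopes are identified only
    # from within-country variation across tournaments.
    fe = smf.ols("points ~ adj_fifa + host + avg_minutes + C(country)",
                 data=wc).fit(cov_type="HC1")

    print(pooled.summary())
    print(fe.params.filter(like="country"))  # the estimated country dummies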

“a higher FIFA ranking does not necessarily translate into better performance”

As a result, I choose one equation each from the pooled OLS and the fixed effects models to predict the outcome of the 2014 FIFA World Cup. For pooled OLS, I choose regression (1) as the prediction model, as explained before. For fixed effects, only the regression with country dummy variables (regression (6)) can be used, since the other regressions do not help predict specific Performance Points for each country. The country dummy variables indicate how each country usually performs relative to the first country in the dataset, Algeria. Countries such as Brazil and Germany have significantly higher coefficients on their country dummies than many other countries, because of their consistently high performance in almost all past World Cups. At the time of writing, 32 countries had been confirmed to participate in the 2014 Brazil World Cup. Applying the coefficients from the regression results yields the expected performance of each country, reported in Table 5: prediction 1 uses the pooled OLS model and prediction 2 the fixed effects model. With the exception of Bosnia-Herzegovina in prediction 2 (with fixed effects coefficients), all countries are ranked by Performance Points. The metric is missing for Bosnia-Herzegovina because it has never participated in a World Cup before and therefore has no country dummy variable to apply.

“Both results favor Brazil as the winning team”

Both results favor Brazil as the winning team out of the 32, largely because they are the host nation. If the home advantage were taken away, Brazil might not rank as high as it does in prediction 1. That prediction has its own flaws, however: as the host nation, Brazil automatically qualifies for the World Cup finals and therefore does not play in the qualifiers, which contribute the bulk of the FIFA points that other countries hold. If Brazil were not hosting the World Cup this year, its FIFA points might be higher and its ranking not as low as one might suspect. Both predictions therefore suggest that the host team, Brazil, is the most likely winner of the World Cup. n
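Producing Table 5 then amounts to scoring the 32 qualified teams with each fitted model. A hypothetical continuation of the earlier sketch, where qual2014 and team are assumed names rather than the author's:

    # Illustrative only: predicted Performance Points for the qualifiers,
    # reusing `pooled` and `fe` from the sketch above. A team with no World
    # Cup history (e.g. Bosnia-Herzegovina) has no country dummy, so the
    # fixed-effects model cannot score it.
    ranking = qual2014.assign(
        pred1=pooled.predict(qual2014),  # prediction 1: pooled OLS
        pred2=fe.predict(qual2014),      # prediction 2: fixed effects
    )
    print(ranking.sort_values("pred1", ascending=False)[["team", "pred1", "pred2"]])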



Food for Thought
The Big Mac Index as a Proxy for Purchasing Power Parity

Simona Aksman
New York University

Purchasing power parity is a measure that captures the cost of living in a country by comparing local prices across countries for an identical basket of goods. It acts as a signal of where exchange rates should head in the long run. The Economist magazine came up with its own lighthearted version of PPP, the Big Mac Index, which compares the price of a Big Mac across countries to gauge whether currencies are overvalued or undervalued. The index gained attention both in the academic community and in the popular press, but the debate over its suitability as a proxy for PPP rages on. In this article, the author compares the measures empirically, using regression analysis to assess how strong the correlation between the Big Mac Index and PPP actually is. - A.N.

To some the symbol of American capitalism, to others a tasty dinner, McDonald's Big Mac is the most famous burger in the world. But can a combination of beef, lettuce, cheese, and onions on a triple-layer sesame seed bun be a tool for economic analysis? In this paper we test whether The Economist's Big Mac Index (BMI) can serve as a proxy for purchasing power parity (PPP). To accomplish this, we first estimate a panel data econometric model, based on that of previous researchers, to compare both true PPP and Big Mac PPP against the theoretical criteria of PPP. This model involves two independent variables: a country's nominal exchange rate and the ratio of its real GDP per capita to that of the US. We reject the joint null hypothesis of PPP theory for both true PPP and Big Mac PPP, showing that neither holds under the theoretical model. Still, we can compare the measures empirically: a regression of Big Mac PPP on true PPP reveals that the Big Mac Index can serve as a proxy for true PPP.

Purchasing power parity is the theory that a market basket of goods will cost the same amount of dollars across countries, and is calculated using the following formula: PPP = S = Pi/P*, where Pi is the price of a market basket in country i, P* is the price of the same market basket in the US, and S is the spot exchange rate, the value of the domestic currency in terms of the foreign currency. Deviations from a PPP value of 1 indicate where nominal exchange rates are meant to move: towards the value that equalizes PPP to 1 in the long run. The claim that a PPP value of 1 equalizes the value of a market basket across two countries rests on the law of one price, derived under the assumption of no arbitrage in the long run. Over time, PPP acts as an anchor for exchange rates and, by extension, currency values: in theory, currencies that are overvalued will depreciate until PPP converges to 1, while undervalued currencies will appreciate towards 1.

In September 1986, The Economist created its own measure of PPP in which the market basket is a Big Mac, the McDonald's hamburger. It was conceived as a light and humorous take on PPP, "to make exchange-rate theory a bit more digestible".1 Big Mac PPP is calculated by dividing the price of a Big Mac in country i by the price of a Big Mac in the base country (in our analysis, the US), just as for true PPP. Of course, there are issues inherent to this measure: Big Macs can differ in size, nutritional value, and even ingredients depending on the country, and some countries show persistent differences in the makeup of their Big Macs, which can affect prices systematically.2 Big Mac PPP also does not take into account differences in trade barriers or market competition. It does, however, include the value of non-traded goods, because the price of a Big Mac directly reflects the price of service.3

Despite these limitations, the measure has proved to be a powerful forecasting tool. In 1999, The Economist used the Big Mac Index (BMI) to correctly forecast that the Euro was overvalued and would depreciate, while other financial analysts forecasted the opposite movement of the currency.4

Our objective in this project is to test how well Big Mac PPP acts as a proxy for true PPP. A proxy is a variable that can be substituted for the variable of interest; a simple example is using country of origin as a proxy for race. To be a good proxy, a variable must be highly correlated with the variable of interest. First we test two multivariable models: one for true PPP and one for Big Mac PPP.

1 "Big Mac Currencies." Editorial. The Economist. 21 Apr. 2001.
2 Rolfe, John. "Big Mac? Not Really, as Australian Version of Burger Downsized." News.com.au. 13 June 2009.
3 Pakko, M. R., and P. S. Pollard. "Burgernomics: A Big Mac Guide to Purchasing Power Parity." Federal Reserve Bank of St. Louis Review. 85: 9-27. 2003.
4 Lutz, M. "Beyond Burgernomics and MacParity: Exchange-Rate Forecasts Based on the Law of One Price." Unpublished manuscript, University of St. Gallen. 2001.
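The valuation arithmetic behind both measures is simple. The sketch below, with made-up prices and an invented exchange rate, shows how an implied PPP rate is computed and compared with the spot rate:

    # Hypothetical numbers, for illustration only.
    def implied_ppp(price_local: float, price_us: float) -> float:
        """PPP (or Big Mac PPP): local price divided by the US price."""
        return price_local / price_us

    ppp = implied_ppp(11.0, 4.0)   # e.g. a Big Mac at 11.0 local units vs $4.00
    spot = 3.5                     # market exchange rate, local units per USD
    valuation = ppp / spot - 1     # negative: local currency undervalued vs USD
    print(f"implied PPP rate: {ppp:.2f}, valuation: {valuation:+.1%}")  # -21.4%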


The multivariable model compares the goodness of fit of the two PPP measures in relation to exchange rates and GDP per capita ratios. We include regional dummies in our models of true PPP and Big Mac PPP to test whether regional effects improve the models' accuracy or reveal discrepancies between the two versions. Using the joint null hypothesis of Purchasing Power Parity theory, we accept or reject true PPP and Big Mac PPP to determine how well each measure holds up to theoretical expectations. Once we establish the results for each, we regress Big Mac PPP on true PPP to see how the empirical measures compare. If the two measures are aligned, such that their regression displays a high correlation, we can conclude that Big Mac PPP can act as a proxy.

We do not take into account some inherent problems with PPP in this study. Some of the countries included may employ fixed nominal exchange rates, even though PPP only holds with floating rates. PPP is also considered a long-run indicator, and our analysis looks at only a decade of data, which may be insufficient to show meaningful trends. Lastly, we do not account for the fact that exchange rates are dynamic: our data reflect the exchange rate only once a year, which may not best capture its value for that year.

Our paper is organized as follows: first we discuss the literature on the Big Mac Index as well as on PPP in general, next we explain our model, then we describe our data and regression results, and finally we draw conclusions.

Literature Review

Since its inception, The Economist's Big Mac Index has garnered the attention of the general public and economists alike. Despite its joking nature, many economists have analyzed the legitimacy of the measure, and at least two dozen academic papers have been written on the topic of the BMI, often called Burgernomics. Though the strength and usefulness of the BMI as a proxy for PPP continue to be debated in the Burgernomics literature, empirical evidence shows that it can be used as a currency forecasting tool in some respects.

The Burgernomics literature reveals, however, that the consensus is split on whether Big Mac PPP is even an efficient measure of PPP, not to mention how well it functions as a forecasting tool. Furthermore, PPP is in and of itself a controversial measure. Michael Pakko and Patricia Pollard's 1996 article, "For Here or To Go? Purchasing Power Parity and the Big Mac," was among the first to analyze the Big Mac Index as a measure of PPP. Pakko and Pollard observed "that the simple collection of items comprising the Big Mac sandwich does just as well (or as poorly) at demonstrating the principles and pitfalls of PPP as do more sophisticated measures".5 In 2003 Pakko and Pollard updated their research in the article "Burgernomics: A Big Mac Guide to Purchasing Power Parity," where they found further evidence for the same conclusion. Overall, Pakko and Pollard concluded that the Big Mac Index is a veritable measure of PPP, but that PPP itself does not hold in theory.

Reid Click contested Pakko and Pollard in his article "Contrarian MacParity," instead explaining discrepancies in PPP via the Balassa-Samuelson effect. He claimed that "the Balassa-Samuelson effect suggests that price of non-traded goods and services will be higher in highly productive, high income countries, and this may explain deviations from PPP," and therefore that "the failure of PPP is due exclusively to time-invariant country effects".6 He goes on to argue that PPP holds, conditional on the Balassa-Samuelson effect. Click introduced the model that we expand on in our analysis. Hiroshi Fujiki and Yukinobu Kitamura also borrowed Click's model in "The Big Mac Standard: A Statistical Illustration," where they concluded that the Balassa-Samuelson effect alone is not sufficient to explain deviations from PPP, and that Big Mac PPP results are also sensitive to the choice of model, sample period, and countries. Fujiki and Kitamura note that the BMI is exceptional in that it "tests whether the relative prices of an identical basket of goods and services measured by a McDonald's Big Mac, in terms of domestic currencies, is equal to nominal exchange rates in the financial markets in the long run," whereas for the traditional measure of PPP, the common basket of goods changes over time.7

5 Pakko, M. R., and P. S. Pollard. "For Here or To Go? Purchasing Power Parity and the Big Mac." Federal Reserve Bank of St. Louis Review. 78: 3-21. 1996.
6 Click, R. W. "Contrarian MacParity." Economics Letters. 53: 209-12. 1996.
7 Fujiki, H., and Y. Kitamura. "The Big Mac Standard: A Statistical Illustration." Discussion Paper 446, Institute of Economic Research, Hitotsubashi University. 2003.

The Model

Purchasing Power Parity theory implies a highly proportional relationship between nominal exchange rates and PPP values across countries. Real GDP per capita and PPP are also closely tied variables: the International Monetary Fund and the World Bank use PPP calculations to derive real GDP per capita estimates. It is logical, then, that Click claimed the following model explains PPP: ln(PPP) = α1 + β1ln(Eit) + β2ln(RGDPit/RGDPt) + εit, where PPP is the price of a market basket in economy i in local currency divided by the price of that market basket in the US, Eit is the nominal exchange rate of economy i against the US dollar, RGDPit is real GDP per capita in economy i, and RGDPt is US real GDP per capita, with subscript i indexing countries and t indexing time. Given as well are the null hypotheses that α1 = 0, β1 = 1, and β2 = 0; according to Click, these null hypotheses are inherent to the way Purchasing Power Parity theory works.

We decided to look at the relationships in the data for ourselves to decide which variables to include when estimating PPP. In Chart 1, we plotted true PPP and Big Mac PPP against the two independent variables used in Click's model, Eit and RGDPit/RGDPt. These plots show a very tight positive relationship between Eit and both true PPP and Big Mac PPP, while no clearly visible relationship appears to exist between RGDPit/RGDPt and either measure. We conducted a test of incremental contribution to see whether adding RGDPit/RGDPt to our model (in addition to Eit) was significant.8

8 Here, significance means that the addition increased the explained sum of squares.


We used R2 values to perform the following F test: F = ((R2new − R2old)/df) / ((1 − R2new)/df), where R2new is the R2 of the multiple regression (with both Eit and RGDPit/RGDPt), R2old is the R2 of the simple regression with only Eit, the df in the numerator equals the number of new regressors, and the df in the denominator equals n minus the number of parameters. Using the values from our regression: F = ((0.983698 − 0.974405)/1) / ((1 − 0.983698)/(286 − 2)) = 161.895. This F statistic follows the F distribution with 1 degree of freedom in the numerator and 284 degrees of freedom in the denominator, and at a significance level of α = .05 it is highly significant, suggesting that adding RGDPit/RGDPt increases the explained sum of squares and that the variable should be included in the model. We therefore adopted the model designed by Click and utilized by Fujiki and Kitamura. Since Fujiki and Kitamura's research only covers the period through 2002, ours covers an updated period through 2010, testing whether their conclusion that Big Mac PPP is a good proxy still holds.
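The arithmetic of this incremental F test is easy to verify; the snippet below simply recomputes it from the reported R2 values (a sketch, not the authors' code):

    # Incremental-contribution F test from the reported R-squared values.
    r2_new, r2_old, n, q = 0.983698, 0.974405, 286, 1
    f_stat = ((r2_new - r2_old) / q) / ((1 - r2_new) / (n - 2))
    print(round(f_stat, 1))  # roughly 161.9, far above the 5% critical value (~3.87)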

Since Click's model requires the use of panel data, which includes both cross-sectional and time series units, we chose the Fixed Effects Least-Squares Dummy Variable (LSDV) model. It is the most convenient here, as it allows a differential effect across the cross-sections in our data. Unlike Fujiki and Kitamura, we did not also use the Random Effects model or test across various time periods, implementing a simpler approach. We also chose to break our countries into regional differentials, which Fujiki and Kitamura did not do; regional differentials allow us to roughly test whether country-specific differences are persistent in PPP measures. We assumed that a differential country effect is present in the intercept coefficients but not in the slope coefficients, in order to avoid overspecification. Expressed below is the final model: ln(PPP) = α1 + α2ANZi + α3EUi + α4PRi + α5LAi + α6MAi + β1ln(Eit) + β2ln(RGDPit/RGDPt) + εit, where i = 1,...,26 for the 26 countries and t = 1,...,11 for the years 2000-2010. The model is estimated under the assumptions that E(εit) = 0 and Var(εit) = σε². The dummy variables are defined as follows: Australia & New Zealand (ANZ), Europe (EU), Pacific Rim/Asia (PR), Latin America (LA), and Middle East & Africa (MA). Our dummy variable groups are uneven in size, with the MA dummy containing only one country while the PR dummy contains nine; this imbalance may produce insignificant results in our model.

Along with the model, the joint null hypothesis that α1 = 0, β1 = 1, and β2 = 0 was adopted in accordance with Purchasing Power Parity theory. We adjusted it to include the dummy variable intercepts, so that the null hypothesis for the intercepts is α1 = ANZ = EU = PR = LA = MA = 0. This hypothesis assumes that the regional dummy intercepts have no significant effect on the outcome of the model, such that PPP is a consistent measure regardless of regional factors. The null hypothesis that β1 = 1 reflects the directly proportional relationship we have seen between nominal exchange rates and PPP. The null that β2 = 0 reflects our previously shown result that real GDP per capita and PPP are closely linked measures: since both PPP and RGDPit/RGDPt are log-transformed ratios, a 1:1 relationship in that form is expressed by β2 = 0. The log transformations are applied to all variables in order to express the coefficients in terms of percentage deviations from a base value; for example, a PPP value of 1 is equivalent to 0 in log terms, denoting a 0% deviation. This is a more natural way to express PPP and RGDPit/RGDPt, as both are ratios, and for consistency and ease of comparison Eit has also been log-transformed. An additional benefit of the log-log model is that it linearizes results and thus more closely satisfies the classical linear assumptions.

In addition to the multivariable model used to test true PPP and Big Mac PPP, we also used the following simple regression model: ln(Big Mac PPP) = α1 + β1ln(true PPP) + εit. We included this model because we could not draw a conclusion about how much of Big Mac PPP is explained by PPP without a direct regression. It measures the two empirical PPP series against each other directly, while the multivariable model measures true PPP and Big Mac PPP against the criteria of Purchasing Power Parity theory. Dummy variables are not included in this regression.
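A compact way to estimate this LSDV specification and test the joint null is sketched below; it assumes a pandas DataFrame panel with illustrative column names (ln_ppp, ln_e, ln_rgdp_ratio, region), which are not drawn from the authors' code:

    # Minimal LSDV sketch: regional intercept dummies plus the two slopes.
    import statsmodels.formula.api as smf

    lsdv = smf.ols(
        "ln_ppp ~ C(region) + ln_e + ln_rgdp_ratio",  # C(region) builds the dummies
        data=panel,
    ).fit()

    # Joint PPP-theory restrictions on the common intercept and slopes
    # (the regional dummy restrictions can be appended to the same string
    # using their patsy coefficient names).
    print(lsdv.f_test("Intercept = 0, ln_e = 1, ln_rgdp_ratio = 0"))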

Assembling Data

To estimate Eit, RGDPit/RGDPt, and true PPP we drew data from the Penn World Table version 7.1 for 2000 through 2010. The Penn World Table was created as a source for research at universities and international organizations, and its data are collected and extrapolated by the University of Pennsylvania's Center for International Comparisons of Production, Income and Prices. There is, however, a serious bias in these data: they exhibit the Penn Effect, the empirically observed phenomenon that higher-income countries have consistently higher prices. This finding is the basis for the previously noted Balassa-Samuelson effect, whereby systematic deviations from PPP result from higher-income countries having higher productivity. As both true PPP and RGDPit/RGDPt are calculated with this underlying effect, it is likely to be an issue for both measures.

For Big Mac PPP, we used data published in The Economist for the same years. Unlike the Penn World Table, the BMI data set provided in The Economist is not a balanced panel: it contains 409 observations for 57 countries, many of which have blanks in their data for many consecutive years. Data on Big Mac PPP for Euro-area countries and India, for example, are not available for any of the 11 years of our sample. We therefore limited our analysis to a sub-sample of 26 countries, leaving us with a balanced panel. A balanced panel is preferable to an unbalanced one because the noise of individual observations is reduced, and an unbalanced panel introduces a host of issues that complicate analysis and may lead to violations of linear regression assumptions. Limiting the data to create a balanced panel, however, results in imbalance in our dummy variable region groupings. Because we have more cross-sectional units (countries) than time-series units (years), our balanced panel is a short panel. It includes the following countries: Australia, New Zealand, UK, Czech Republic, Denmark, Hungary, Poland, Russia, Sweden, Switzerland, Argentina, Brazil, Chile, South Africa, Mexico, Canada, United States, China, Hong Kong, Indonesia, Japan, Malaysia, Singapore, Korea, Taiwan, and Thailand. In Table 1, we report the means and standard deviations of our two X variables, ln(Eit) and ln(RGDPit/RGDPt), the five regional dummy variable intercepts, and our two Y variables, ln(PPP) and ln(Big Mac PPP).

Regression Results

For the true PPP regression, our results (available in the appendices) show that the model is overall highly significant, with an R2 value of .9837 and an adjusted R2 value of .9833. We tested the joint null hypothesis α1 = ANZ = EU = PR = LA = MA = 0, β1 = 1, and β2 = 0 at a significance level of α = .05. With an F test statistic of 2394.5, which greatly exceeds the critical F value of 2.996 at our chosen significance level (2 df in the numerator and infinity df, closest to 283, in the denominator), we reject the joint null hypothesis. Despite the overall rejection, the coefficients of the independent variables turn out to be very close to the expected values, especially for ln(Eit) and the dummies. The individual t-statistics reveal that for the α1, ANZ, EU, LA, and MA intercepts we cannot reject the null hypothesis that the intercept is 0 at the α = .05 level, though for the PR intercept we can. The t-statistics of ln(Eit) and ln(RGDPit/RGDPt) are highly significant at α = .05, so the null hypotheses that β1 = 1 and β2 = 0 are rejected, respectively. Though the insignificance of the regional dummies falls in line with the theoretical expectations of the model, and we accept the null hypothesis for all dummies aside from the PR dummy based on the t-statistics, checking for multicollinearity in the independent variables is warranted given the dummies' high p-values in conjunction with the regression's very high R2.

Another explanation for the insignificance of these regional groups is that they are improperly specified: in our effort to create a balanced panel, our analysis was limited to only 26 countries, and the resulting regional groups are uneven. Nevertheless, we conduct a simple test for multicollinearity by examining a correlation table for the independent variables, given in Table 2. It is visible that none of the variables exhibit high multicollinearity, with the highest value in the table at .412. A more systematic test is the Variance Inflation Factor (VIF) rule of thumb, under which multicollinearity is a significant problem only if VIF > 10; applying this rule to the centered VIF values, one can conclude that multicollinearity is not a significant issue, since all VIF values are below 10.

There is, however, the possibility of other issues in the regression. Heteroskedasticity may be present, since the residuals exhibit sections exceeding the homoskedastic spread, the range around zero denoted by the dashed lines in the chart. To test for heteroskedasticity, we used the White test with N = 286 and an auxiliary R2 value of 0.091, giving the test statistic N·R2 = 26.137. Under the null hypothesis of no heteroskedasticity, this statistic follows a χ2 distribution with 7 degrees of freedom.9 Using a χ2 table at 7 df, we found that at the α = .05 significance level, χ2 (= 26.137) ≥ χ2.05 (= 14.067). Since the null is rejected when the test statistic exceeds the critical value, we reject the null and conclude that heteroskedasticity is present in the data, challenging our initial assumption of constant variance. Heteroskedasticity can be corrected for, and we do so after checking for autocorrelation.

A check for autocorrelation is necessary because the residuals follow a rhythmic pattern around zero, and the time-series nature of our model also suggests that autocorrelation may be a problem. We used a rough measure, the Durbin-Watson statistic. With the Durbin-Watson d value of 2.0973 from the regression output and a Durbin-Watson table at the α = .05 significance level, we found that for k = 7 regressors and N = 200 observations,10 the lower limit of the Durbin-Watson zone of no decision was 1.697 and the upper limit was 1.841. Since 2.097 falls above this range, we conclude that there is no first-order autocorrelation present in the regression.

9 Given 7 regressors excluding the intercept.
10 We used N = 200 because it was the closest tabulated value to our sample size of 286.
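Both diagnostics are standard; a sketch using statsmodels on the hypothetical lsdv fit from the earlier block follows. Note that statsmodels' het_white builds its auxiliary regression from squares and cross-products of the regressors, so its degrees of freedom need not equal the 7 used in the text.

    # White heteroskedasticity test and Durbin-Watson statistic on the
    # residuals of the fitted LSDV model.
    from statsmodels.stats.diagnostic import het_white
    from statsmodels.stats.stattools import durbin_watson

    lm_stat, lm_pval, _, _ = het_white(lsdv.resid, lsdv.model.exog)
    print(f"White LM = {lm_stat:.3f}, p = {lm_pval:.4f}")  # small p: heteroskedasticity

    d = durbin_watson(lsdv.resid)  # values near 2 suggest no first-order autocorrelation
    print(f"Durbin-Watson d = {d:.4f}")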


Since the Durbin-Watson statistic only provides information about first-order autocorrelation, it remains possible that higher-order autocorrelation is present. To systematically correct for the known heteroskedasticity and the possible autocorrelation, we re-ran the regression with Newey-West standard errors, which adjust the standard errors and t-statistics for both effects. We can use this correction because our sample of N = 286 observations is sufficiently large. The Newey-West results for the PPP regression exhibit more extreme t-statistics, and therefore higher significance, for the dummy intercept coefficients, and less extreme t-statistics, and lower significance, for the two slope coefficients. Overall, the dummy intercepts remain insignificant except for the PR intercept, but the model is still strong in its slope coefficients at α = .05. As the Newey-West standard errors and t-values are overall more robust than the OLS values, we have chosen to report them in this study.

As we have seen, our PPP data fail to meet Purchasing Power Parity theory's hypothesis. We next run an analysis of Big Mac PPP against the same criteria to see how it compares. The OLS results are overall highly significant, with an R2 value of .9867 and an adjusted R2 value of .9863. Given an F value of 2952.462, greatly exceeding the critical F value of 2.996, we reject the joint null hypothesis that α1 = ANZ = EU = PR = LA = MA = 0, β1 = 1, and β2 = 0 at α = .05. Again, though the hypothesis is rejected overall, the coefficients of ln(Eit) and the dummies are very close to the hypothesized values. An examination of the t-statistics shows that the ANZ intercept is significantly different from 0 at the α = .05 level, MA at the α = .10 level, and PR at the α = .01 level, while the reference intercept, EU, and LA are insignificant. Overall, the dummy variables would still fail to meet the null hypothesis criteria, and these results show that Big Mac PPP also fails to meet the theoretical requirements of Purchasing Power Parity theory. Though we cannot meaningfully compare the R2 of the BMI regression to that of the true PPP regression (because the dependent variables differ and are the source of the model differences), we can regress one dependent variable on the other. This will show whether a directly proportional relationship exists, as we would expect a priori. We can also comment on the goodness of fit of the Big Mac PPP regression to theoretical expectations, which were also used for comparison against true PPP.

As we did for the true PPP regression, we examined the Big Mac PPP residuals to check for heteroskedasticity and autocorrelation. In the residuals from ln(Big Mac PPP), a pattern similar to that of true PPP emerges, with spread exceeding the range around zero denoted by the dashed lines in the chart. Again we used the White test: under the null hypothesis of no heteroskedasticity, the test statistic of 42.843 follows a χ2 distribution with 7 degrees of freedom, and using a χ2 table at 7 df we found that at the α = .05 significance level χ2 (= 42.843) ≥ χ2.05 (= 14.067), concluding that heteroskedasticity is present in these data as well.
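In statsmodels, the Newey-West correction is a one-line change to the earlier sketch; the lag length below is an illustrative assumption, not the authors' choice:

    # Same LSDV specification with HAC (Newey-West) covariance: coefficients
    # are unchanged, only standard errors and t-statistics are corrected.
    import statsmodels.formula.api as smf

    nw = smf.ols(
        "ln_ppp ~ C(region) + ln_e + ln_rgdp_ratio", data=panel
    ).fit(cov_type="HAC", cov_kwds={"maxlags": 2})
    print(nw.summary())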

“That Big Mac PPP appears to be regionally dependent while true PPP does not leads us to believe that true PPP is a less biased measure”

We checked for autocorrelation as well, since the residuals from ln(Big Mac PPP) follow a rhythmic pattern around zero (similar to the residual pattern for true PPP), which is often an indicator of autocorrelation. Using the Durbin-Watson d value of 1.976 from the regression output and a Durbin-Watson table for α = .05, we found that for k = 7 regressors and N = 200 (the closest tabulated value to 286), the lower limit of the Durbin-Watson zone of no decision was 1.697 and the upper limit was 1.841. Since 1.976 falls above this range, we conclude that there is no first-order autocorrelation present in the regression. Again, we report Newey-West-corrected results to address the heteroskedasticity and any higher-order autocorrelation not caught by the Durbin-Watson statistic. This corrected regression has stronger t-statistics than the OLS version for all coefficients except MA: the dummy variables for ANZ, EU, MA, and PR are now all significant at the α = .05 level, whereas before only the ANZ and PR intercepts were significant. There were no changes to the significance of the slope coefficients.

Finally, we directly regressed ln(Big Mac PPP) on ln(true PPP). This regression produces an R2 value of .991 and an adjusted R2 value of .991, which suggests a highly significant relationship between the two measures of PPP, and the plot of the two variables against each other displays an approximately 1:1 relationship. We conclude that Big Mac PPP and true PPP are closely enough related to claim that Big Mac PPP acts as a good proxy for PPP.

Summary and Conclusions

In our study we reject the joint null hypothesis for the multivariable regressions of true PPP and Big Mac PPP against PPP theory criteria, as did previous researchers. However, we still conclude that Big Mac PPP is a suitable proxy for true PPP, based on the close empirical relationship between the two. Breaking the countries into regional dummy intercepts improved the significance of the Big Mac PPP model when tested against theoretical PPP criteria, while these regional groups proved rather insignificant for the true PPP measure. Possible explanations include the existence of regional differences in Big Macs and wrongly specified dummies, a result, perhaps, of our small sample of only 26 countries. That Big Mac PPP appears to be regionally dependent while true PPP does not leads us to believe that true PPP is a less biased measure, though again this effect may be negligible and due to the specification of our dummy variables. Comparing the residual charts for true PPP and Big Mac PPP, it is visible that the two measures follow a close pattern; the same is true of the plots of dependent versus independent variables for the two PPP measures.

Since PPP is typically useful as a long-run indicator, both true PPP and Big Mac PPP could have failed the joint hypothesis because our sample period is too short to show long-run trends; past research shows it is already controversial whether PPP is useful in the short run. Despite the failure to meet the joint null hypothesis of the theory, true PPP and Big Mac PPP nevertheless move together empirically: the regression of Big Mac PPP against true PPP revealed an R2 of 0.991, indicating a very strong correlation. We conclude that, by the simple criteria of a proxy, our results illustrate that Big Mac PPP is a suitable proxy for PPP despite its failure to adhere to PPP's theoretical requirements. n
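The concluding proxy check reduces to one bivariate regression; with the same hypothetical column names as in the earlier sketches:

    # Regress ln(Big Mac PPP) on ln(true PPP): a high R-squared and a slope
    # near 1 support the proxy claim.
    import statsmodels.formula.api as smf

    proxy = smf.ols("ln_bigmac_ppp ~ ln_ppp", data=panel).fit()
    print(f"R^2 = {proxy.rsquared:.3f}, slope = {proxy.params['ln_ppp']:.3f}")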



Special Feature: An Interview with Prof. Perry Mehrling
America, the ECB, and Charles P. Kindleberger

Julie Tauber Columbia University

Professor Perry Mehrling teaches the popular Barnard College course Money and Banking, which has over 120 students registered this fall. His research focuses on the foundations of monetary economics and the history and applications of monetary economics and finance. Professor Mehrling is currently a member of the Board of Directors of the Eastern Economic Association and a member of the Economists Forum at the Financial Times. Senior Editor Julie Tauber sat down with Professor Mehrling and asked him about the American economy, forward guidance for the European Central Bank, and his upcoming book about economist Charles P. Kindleberger.

Julie Tauber: I’d like to pick your brain about the current state of the American economy. [Chairwoman of the Federal Reserve] Janet Yellen said a couple of weeks ago that “the labor market has yet to recover fully.” Other economists believe the U.S. is in a better state than she lets on. What’s your take?

Perry Mehrling: Well, I think it’s definitely true that there are signs of recovery. My wife is a headhunter, so she’s a leading indicator. When her phone is ringing off the hook, that’s an indicator I look at. But I think the recovery in the U.S. is also uneven. It is in different industries and different sections of the country, and that’s the sort of thing that Yellen is concerned about.

JT: On a similar note, do you think the Fed has waited too long to raise rates, or do you think raising rates in 2015 is appropriate timing?

PM: The interest rates I’m more concerned about are not so much money market rates but capital market rates. I have advocated, in terms of the so-called “exit strategy,” that the Fed exit from its positions in duration risk and credit risk before it starts to move money market rates. That doesn’t seem to be what they are doing.

JT: The Wall Street Journal reported recently that much of the interest rate payments made by the Federal Reserve go to foreign banks holding reserves at the Fed. From your perspective, does this matter at all to us?

PM: No, because these foreign banks could be holding Treasury Bills instead, and they would be getting interest on Treasury Bills that’s paid by the government. I don’t see that there’s any economic difference between those two.

JT: Interesting. When gauging the state of our economy, economists often look at real GDP growth and unemployment. Yellen always cites many different economic indicators at press conferences. Which indicators do you think are most important to look at?

PM: Well, you’re asking mostly about business cycle indicators. But I would draw your attention to indicators that are more about welfare and how people are doing. Indicators like long-term unemployment, for example, or labor force participation – people dropping out of the workforce after they are unable to find jobs. I would draw attention to measures of income distribution and distribution of wealth. Economists do pay attention to these things, typically. They’re not the headline news – maybe they should be.

JT: Alan Greenspan told WSJ MarketWatch in July that he “was always doubtful during his tenure about how much the Fed could effectively communicate to the market, because they were always second guessing the Fed.” How important do you think Fed communication and forward guidance is?

“The Fed is not the central bank of the world”

PM: Well, I am not in favor of the forward guidance that we have had so far. It seems that telling the market exactly what you’re doing gives the market information that allows it to game you and to make money at your expense. I understand the rationale behind it – it comes from particular models of economics in which future expectations cause behavior today. But in terms of how it’s actually playing out, this is about speculation, and that is not in most of those models.

JT: On September 18th the European Central Bank started “two special lending operations ‘targeted’ long-term loans” and the buying of covered bonds and asset-backed securities. Do you think these programs will have a profound impact on the European economies – will they improve from this?

PM: Well, this is QE – quantitative easing – which Europe is trying to do. The ECB is interested in trying to get credit going again, and they see getting securitization markets going again as the key. European regulators have been preventing securitization from getting going by creating capital requirements for people that want to hold these securities. So the ECB is trying to step in and push the other way by saying – We’ll buy them! But if you’re fighting against your own regulations, you’re just going to put a lot of stuff on the balance sheet of the ECB, and the ECB will have the kind of exit problem that the Fed is facing right now.

JT: You gave a talk titled “The Emergent New International Monetary System” at the Asian Economic Community Forum in Korea a few weeks ago. Can you tell us, in layman’s terms, some of the key points addressed in your talk?

PM: This talk came out of some research I was doing on the internationalization of the renminbi, the Chinese currency, which may get approval from the [International Monetary Fund] soon. The way I approached the question was to ask: for China, what is the international monetary system that you’re trying to internationalize into? What is this system that you’re trying to figure out how to engage? And here’s the main point: it’s a dollar system. But the Fed is not the central bank of the world. The central bank of the world is the consortium of the top six central banks: the Fed, the ECB, the Bank of Japan, the Bank of England, the Bank of Canada, and the Swiss National Bank. Just about a year ago, they extended the swap lines between these central banks that were created during the crisis. Now it is not just for crises: there are permanent swap lines, of unlimited size, between the six largest central banks. The C6, as I call them, are positioned at the top of the system. Now the BRICS Bank (Brazil, Russia, India, China, and South Africa) that’s going to be in Shanghai is also something China is keen on. But you need to think: where are the individual countries going to fit into this emerging structure? It is a framework for understanding that I’m urging.

JT: So when you say at the end of your presentation “Forget the G7 (or the G20), watch C6” – what exactly are you saying there?

PM: What I’m saying is that the G7 or the G20 – typically these are meetings of finance ministers, treasury ministers, and the fiscal authorities of the top seven or top 20 countries. I’m saying that it’s the central banks that are important. But that also means that it’s monetary policy, it’s the central bankers – not the treasury officials – who are backstopping the global system at the moment. I think that is a sign of trouble. You want to have an economy where central bankers are not that important.


They become important when the system is under tremendous stress – times of war, times of financial crisis. I think there’s a lot of stress in the world system, so it’s good that the central banks are noticing and taking responsibility for this. But it is because of the challenges that we’re facing, and because other institutions aren’t working that well.

JT: You’re also in the middle of writing a biography of the economist Charles P. Kindleberger (1910-2003). What in particular drew you to this economist?

PM: Well, I wrote a book during the financial crisis – The New Lombard Street. It was a kind of history of the development of the Fed from its birth in 1913 through the various challenges of depression and war, and it ended with the financial crisis. It was an attempt to create some institutional and historical framework for understanding the financial crisis. What’s missing in that book, by focusing on the Fed, is that this was a global financial crisis. I thought my next book should cover the same time span but be a sort of biography of the dollar – but the dollar doesn’t have any personality. Then I found Charlie Kindleberger. I thought: here’s a guy who was born in 1910 and lived through the whole century! So I could hang this whole story on the life and times of Charlie Kindleberger – maybe.

“it’s the central bankers – not the treasury officials – who are backstopping the global system at the moment”

Then I looked around and discovered that, in fact, this is a very interesting story. There’s an arc of a life there. It’s a biography of the international monetary system through the life of an international monetary economist. In that regard, it’s more like my book on Fischer Black, which is the story of modern finance. It is about the rise of modern international monetary theory. So that’s the idea. He’s a – well you’ll see, you have to wait for the book! n


COLUMBIA ECONOMICS REVIEW

Call for Submissions

Columbia Economics Review is interested in your article proposals, senior theses, seminar papers, editorials, art and photography.

GUIDELINES

CER is currently accepting pitches for its upcoming issue. You are encouraged to submit your article proposals, academic scholarship, senior seminar papers, editorials, art and photography broadly relating to the field of economics. You may submit multiple pitches. CER accepts pitches under 200 words, as well as complete articles. Your pitch or complete article should include the following information:

1. Name, school, year, and contact information (email and phone number).
2. If you are submitting a pitch, please state your argument clearly and expand upon it in one brief paragraph. List sources and professors/industry professionals you intend to contact to support your argument. Note that a full source list is not required.
3. If you are submitting a completed paper, please make sure the file is accessible through MS Word.

Pitches will be accepted throughout the fall and are due by February 1st, 2015. Send all pitches to economics.columbia@gmail.com with the subject “CER Pitch - Last Name, First Name.” If you have any questions regarding this process, please do not hesitate to e-mail us at economics.columbia@gmail.com. We look forward to reading your submissions!

Columbia Economics Review 1022 IAB, 420 West 118th St, New York NY 10027 | (609) 477-2902 | econmag.org | economics.columbia@gmail.com


Economicus & Journal
Online at EconMag.org
Read, share, and discuss. Keep up to date with web-exclusive content.


