Columbia Economics Review: Spring 2016


Columbia Economics Review

Moore Money, Moore Problems | Art of the Deal | Cash for the Cure | Federal Reservations | Outwit, Outplay, Outlast | One Pay or Another

Vol. VII No. II Spring 2016



COLUMBIA ECONOMICS REVIEW PUBLICATION INFORMATION Columbia Economics Review (CER) aims to promote discourse and research at the intersection of economics, business, politics, and society by publishing a rigorous selection of student essays, opinions, and research papers. CER also holds the Columbia Economics Forum, a speaker series established to promote dialogue and encourage deeper insights into economic issues.

2015-2016 EDITORIAL BOARD

EDITOR-IN-CHIEF
Eitan Neugut

MANAGING EDITOR
Carol Shou

PUBLISHER
Ben Titlebaum

JOURNAL

SENIOR EDITORS
Francis Afriyie, Michael Greenberg, Larry Xiao

LAYOUT EDITORS
Minzi Keem, Ambika Mookerjee

STAFF EDITORS
Sungmin An, Jakob Brounstein, James McCarthy, Manuel Perez, Shambhavi Tiwari, Sharleen Yu, Mitchell Zhang, Jessica Bai

CONTRIBUTING ARTISTS
Letty DiLeo (Cover), Kevin Jiang, Maham Karatela, Ching Wen Wang

ONLINE

EXECUTIVE EDITOR
Max Rosenberg

WEB DIRECTOR
Evangeline Heath

WEB EDITOR
Kevin Jiang

ONLINE CONTRIBUTORS
Guillermo Carranza Jordan, Arnold Lee, Catalina Piccato, Ojaswee Rajbhandary, Eli Olmstead, Natasha Przedborski, Vivianne Bai, Vivian Casillas, Tian Weinberg, Paul Nguyen, Lindsay Manocherian, Andres Rovira

OPERATIONS

EXECUTIVE DIRECTOR
Raymond de Oliveira

SENIOR TEAM
Daniel Morgan, Chris Sabatis

OPERATIONS MEMBERS
Alan Lin, Pranav Mohan Balan, Boya Wang, Zoey Chopra, Bryan Li, Jing Qu

A special thanks to the Columbia University Economics Department for their help in the publication of this issue.

Columbia Economics Review would like to thank its donors for their generous support of the publication.

We welcome your comments. To send a letter to the editor, please email econreview@columbia.edu. We reserve the right to edit and condense all letters.

Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License

Columbia Economics | Program for Economic Research

Printed with generous support from the Columbia University Program for Economic Research



TABLE OF CONTENTS

On Campus 6

Moore Money, Moore Problems

Best of the Online Content

Art Economics 8

Art of the Deal

Reevaluating Economic Conditions of Art Production

Healthcare 15

Cash for the Cure

An Analysis of What Impacts NCI Funding and the Significance of That Effect

Finance 24

Federal Reservations

Macroeconomic Cycles and the Stock Market’s Reaction to Monetary Policy from 2000-2015

Labor Economics 34

Outwit, Outplay, Outlast

Determinants of Employment Tenure

Immigration 40

One Pay or Another

The Wage Effects of Low-Skilled Immigration: A Panel Analysis

Competition 50

Environmental Policy Competition Winners

For a complete list of papers cited by our authors and a full version of all editorials, please visit our website at columbiaeconreview.com

Opinions expressed herein do not necessarily reflect the views of Columbia University or Columbia Economics Review, its staff, sponsors, or affiliates.




Call for Submissions

Columbia Economics Review is interested in your article proposals, senior theses, seminar papers, editorials, art and photography.

GUIDELINES

CER is currently accepting pitches for its upcoming issue. You are encouraged to submit your article proposals, academic scholarship, senior seminar papers, editorials, art and photography broadly relating to the field of economics. You may submit multiple pitches. Your pitch or complete article should include the following information:

1. Name, school, year, and contact information (email and phone number).

2. If you are submitting a pitch, please state your argument clearly and expand upon it in one brief paragraph. List sources and professors/industry professionals you intend to contact to support your argument. Note that a full source list is not required.

3. If you are submitting a completed paper, please make sure the file is accessible through MS Word.

Pitches will be accepted throughout the Fall and are due by September 15th, 2016. Send all pitches to econreview@columbia.edu with the subject “CER Pitch - Last Name, First Name.” If you have any questions regarding this process, please do not hesitate to e-mail us at econreview@columbia.edu. We look forward to reading your submissions!


A LETTER FROM THE EDITORS

Dear Readers,

As the academic year draws to a close, we are excited and honored to present our Spring 2016 issue. Given the wide breadth of content, methodology, and style in this year’s submissions, we tried to capture the essence of different modes of economic thinking and analysis. With the popularization of data science and pop economics, the need for economic frameworks and rigorous quantitative methods to interpret the new potential of big data has permeated every sector of academic and everyday life. With this in mind, we focused on submissions that sought to bridge two seemingly unrelated topics with economic reasoning, in pieces that were equally quantitatively rigorous and qualitatively perceptive. Through economic methods, academics can postulate and confirm the incentives that motivate individuals to act the way they do in economic settings. In a world where every action is incentivized, either implicitly or explicitly, the reach of economics is bounded only by the methodology we use to interpret those actions. Crossovers between computer science and economics have worked to harness the potential of high-powered computing and algorithmic modeling to simulate complex and dynamic marketplaces; behavioral economists mix economics, psychology, and neuroscience to theorize and test how people internalize the decision-making process; and economics has even made its way into literature, where the use of suspense and other rhetorical devices has been studied as a scarce resource to be allotted sparingly over the course of a book or series. While economic thinking is ever growing and becoming an increasingly popular mode of analysis, our hope is that we can still highlight the specific methods and instruments that uniquely place our pieces in the discipline of economics rather than general quantitative analysis.
The attention to individual incentives in terms of monetary maximization and self-rationality, market assumptions about firm behavior and profit maximization, and perceptual regularities in decision-makers all exist in unique economic frameworks that create a shared language and starting point for current and future economists to innovate. As our authors continue to embark on interesting and distinct disciplines, we applaud both descriptive industry insight and empirical analysis in proposing and subsequently defending their claims. Here at Columbia, our Core Curriculum has taught generations of Columbians to discuss seminal works in Western literature and civilization through the tireless effort of questioning, interpretation, and discussion. We hope to bring this passion for doubt, exploration, and realization from the Columbia community to the discipline of economics, and we would like to think that our authors, who are all Columbia alumni or undergraduates, have brought this same sense of critical analysis to their pieces. Perhaps the one frontier that the Columbia Economics Review has not addressed in this issue or its past issues is the field of experimental economics. While we have incredible empirical pieces that aggregate information which may have been experimentally deduced, we have not seen a piece that posits a question and conducts its analysis completely within the framework of an original experimental design. As we push our readership and authors toward increasingly diversified areas of economics, it is our hope that we can continue to seek new models and methods of testing that highlight the variety of experimental and empirical tools available to budding economists and academics. Our purpose is not only to educate our readership on the importance and diversity of economic topics and thought processes, but also to encourage our readers to discuss how economics can be applied to solve problems of both an academic and everyday nature.
Our hope is that this issue challenges our economics enthusiasts and inspires those with a less technical economic background. As we look forward to exploring the new stomping grounds of economics in the pages to follow, we hope to keep up the conversation with you through our online platform, CER Online. From everyone on the CER Editorial Board, it has been a pleasure. We hope that you enjoy this issue as much as we have enjoyed putting it together.

Cheers,

Carol Shou CC’17 | Managing Editor
Eitan Neugut CC’16 | Editor-in-Chief
Ben Titlebaum CC’19 | Publisher




Moore Money, Moore Problems
Guillermo Carranza Jordan
Columbia University

We are proud to showcase one of the most popular and perceptive pieces from our online platform. With Butler Library serving as one of the most iconic landmarks on Columbia’s campus, new plans to install Henry Moore’s “Reclining Figure” in front of the library have met their fair share of praise and protest. While Henry Moore’s sculptures regularly adorn prestigious museums such as the Tate and the Metropolitan Museum of Art, Columbia students have largely decried the positioning of the modernist sculpture immediately facing the neo-Classical facade of Butler Library. As a gift to the university, “Reclining Figure” reinvigorates a long-standing debate surrounding the architecture and aesthetics of Columbia’s campus, the respect that Columbia owes its alumni and donors, and the intrinsic and monetary value of art. -C.S.

In the past couple of weeks, a sizable part of the Columbia student body has erupted in anger -- this time over the public art that adorns our campus. After the administration began to install Henry Moore’s “Reclining Figure 1969-70” in front of Butler Library, in the dead center of the “postcard” view of campus, many students responded by writing harsh op-eds in the Spectator, and over 1,200 signed a petition against the statue. These students were upset that the statue was announced only through an obscure school blog, even though it is widely seen as a major addition to the main campus lawn, and that the statue is just plain ugly, or at least in harsh contrast to the neoclassical buildings of the surrounding vista. In turn, the student reaction garnered the attention of many major newspapers and national media outlets, from the New York Times to the BBC, and snowballed into a major issue on campus and beyond. Several commentators expressed astonishment at the disdain Columbia students hold for the work of one of the most important sculptors of the last century, especially given the exorbitant price tag of the sculpture, valued at around $5 million. Ironically, in 2005, thieves stole another copy of the same Moore sculpture and sold it for its scrap value of 1,500 pounds, unaware of and unbothered by the intricate valuation made by the Tate. At the heart of this discrepancy of value between thieves, Columbia students, and outraged critics is a fundamental question: What is the value of art, and how is it possible to encounter such different value assessments?

Let’s start with the price tag. Valuing art economically is not an easy feat. The price of an art commodity does not reside in the price of its materials, or in the number of hours of labor the artist put into it. Robert Hughes, an art critic for Time in the 1970s, once declared: “The price of a work of art is an index of pure, irrational desire.” The notion that the art market is irrational implies that the value of art is intrinsically subjective, and therefore difficult to capture with a price tag. Price irrationality sounds great, especially for the type of art lovers who would love to protect their craft and passion from the forces of the market. However, art pieces do have prices, sometimes astronomical ones, as seen in the auction rooms of Christie’s and Sotheby’s. The most obvious explanation is to consider art pieces as goods governed by the forces of supply and demand, just like any other. The high prices of renowned pieces can then be explained by their uniqueness and their high demand. Rich art investors and connoisseurs had to outspend each other to compete for the six copies of Reclining Figure that Moore made, resulting in the market-determined price. Can’t this price be considered an objective valuation?

However, the mechanics of the art market do not exist in a vacuum. Prices are not determined freely by consumers, leading to an objective value; rather, prices are undoubtedly shaped by experts and by the cultural context in which these transactions take place. For example, Van Gogh’s artwork was worthless during his lifetime, leaving him penniless at his death -- an experience shared by many now-renowned artists -- but nowadays his art is considered masterful and priceless. As subjective as art can be, artists cannot simply impose their own price on their work. In terms of how art markets operate, art experts are fundamental in establishing prices in the “primary market,” the market for pieces that have never been auctioned or sold before. The dealers of galleries are extremely careful in their valuations of new art, since these prices cannot simply be lowered later without damaging the reputation of both the artist and the gallery. The initial valuation also affects the price in the “secondary market,” where pieces are auctioned or traded. An overshoot in the initial valuation could hinder the prospects of sale in the future. Furthermore, art experts work in the market not only by establishing prices themselves, but also by determining the characteristics that ultimately lead a piece of art to be considered good, bad, or great. As Chicago professor David Galenson argues, “Studies of auction results also reveal that it is the greatest artists whose work commands the highest prices. And studies of auction prices clearly demonstrate that it is the most important periods of great artists’ careers that bring the highest prices.” The prices of art are more rational than art critics would like the public to believe. Not surprisingly, it is the most renowned, most famous works that bring the biggest prices. The sphere of art critics, art historians, and gallery valuation experts has a strong influence on the pricing of art pieces. They decide the prestige of each artist and each work, which in turn decides their market prices.

Finally, the price an artwork fetches at market does not necessarily equate to its true value for another reason: economists believe the art market may be in the midst of a bubble. Art is viewed as a stable commodity, and in recent years investors have been flocking to purchase artwork as a store of value. The prices of art have been driven up by speculation in the market. Therefore, the price of art reflects not just aesthetic value, but also the extra value investors are willing to pay for its financial properties and the bubble that this speculation has caused. A study in the Journal of Empirical Finance this year predicts that this bubble may be on the verge of bursting. The impressive price of the Moore sculpture might not be indicative of its aesthetic value because it also incorporates the temporary bubble that the art market is now experiencing.

Where does this leave us? The valuation of art commodities like Reclining Figure is very much restricted to spheres of critics, dealers, and connoisseurs. This unilateral form of valuation hinders a general understanding of the value of these art commodities in society. This is why the Columbia community outside of the Art History department has a hard time understanding the value of the piece of art beyond its own aesthetic valuation. The bottom line is very simple: the price tag of an art piece is a poor way to assess its value. Students do not necessarily agree with the valuations of well-meaning but detached art insiders. The value of the sculpture should not be unilaterally imposed on campus based on its price tag, but should be judged by the subjective opinion of the community. Students are angry about the lack of communication surrounding the statue because it prevented the student body from determining its own valuation. The main issue with the installation has been the lack of dialogue around it, which prevents us from giving our own shared value to art and to the limited space on campus. Town halls and surveys could help alleviate the tension by helping the Columbia community arrive at its own value assessment. On the other hand, and perhaps in a more open-minded way, it is entirely possible that our understanding of Moore’s Reclining Figure will change in the future. We might even grow to give more value to the sculpture once it has been installed, as a defining and quirky part of Columbia’s campus with an interesting story behind it. After all, perhaps that is the kind of subjective value its $5 million price tag captures.



Art of the Deal
Reevaluating Economic Conditions of Art Production
Mira Dayal
Columbia University

Recently, the rise of popular economics works like Freakonomics has brought a refreshing wave of democratization to the ivory-towered discipline of economics. In a similar, light-hearted manner, Dayal applies the analytical tools used by economists to examine the production decisions of artists who must choose between producing for cultural or financial gain. Though the traditional school deems the answer to be all-or-nothing, the commodification of artists and their “brands” has risen as a viable solution to this difficult question. This new trend in the art market, pioneered by the likes of Takashi Murakami, is not without its critics, but its potential to revolutionize the art world by bridging the gap between “high art” and “commodity art” should not be underestimated. While not particularly intensive in its use of equations and economics jargon, this article shows the field’s relevance to everyday life and is a pleasant read for all. -S.A.

Introduction

Both economists and art historians have written about the links between the rise of capitalist market economies and the production of art and culture. For economists, this link is interesting because artists must have a source of income in order to function as artists, but the supply of artists tends to be much larger than a reasonable estimate for the number of (financially) successful artists, and thus most artists cannot sustain themselves through art alone. How do artists negotiate the balance between producing art for cultural value and pleasure and producing it for economic value, and why do they choose to become artists at all? For art historians, the reason for investigating this link is twofold. First, artists clearly engage with mass culture and incorporate serial production techniques into their art; socioeconomic environments influence artists, and the work that artists produce is often incorporated back into mass culture. Second, movements such as pop art, land art, and minimalism explicitly challenge the commodification of art and the process of designating cultural and economic value. Underlying all of these investigations is the question of how artists produce art, namely, how much relative emphasis they place on the production of art for economic value versus cultural value. This paper aims to weave together narratives put forth by economists and art historians to provide an answer to this question, and to explain how art can endure in a capitalist society that rewards commodification and demands productivity of ideas.

The Artist’s Profession

If the artist’s profession does indeed have a flooded labor supply, why do so many artists continue to enter? As Menger puts it, “are artists irresistibly committed to a labor of love, or are they true risk-lovers, or perhaps ‘rational fools?’” These three possibilities lead to three simplified possible conclusions: artists are inescapably committed to their art regardless of market outcomes, artists overestimate their chances of success, and/or there are non-pecuniary rewards that compensate even the financially unsuccessful artists.1 David Throsby has elaborated upon this third conclusion by providing an alternative model of labor supply specific to artists. He suggests that artists often hold multiple jobs but prefer art-related work to leisure because of the non-pecuniary benefits art offers, so they will tend to substitute away from non-art work when wages rise in that sector in order to produce more art.2 3 If these models hold, artists are by necessity driven, at least in part, by non-pecuniary rewards rather than money, but “the most important constraint on the artist’s time allocation is likely to be a financial one.”4 Artists then hold multiple jobs as a necessity. Throsby cites one survey that found only 24 percent of 3,000 New England artists held no secondary non-arts job.5
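Throsby’s multiple-job-holding argument can be sketched as a simple time-allocation problem (a stylized illustration in our own notation, not Throsby’s model): the artist divides total time T between arts work t_a at wage w_a and non-arts work t_n at wage w_n, and derives utility both from income and from arts time itself.

```latex
% Stylized artist time-allocation problem (illustrative notation)
\max_{t_a,\,t_n}\; U(c,\,t_a)
\quad\text{s.t.}\quad
c = w_a t_a + w_n t_n, \qquad t_a + t_n = T
% Interior first-order condition: the non-pecuniary return to arts
% time exactly fills the gap between the two wages,
\frac{U_{t_a}}{U_c} = w_n - w_a
```

The condition shows why artists rationally accept a lower arts wage: arts time enters utility directly. And if a rise in the non-arts wage w_n lets the artist meet a fixed financial constraint with fewer non-arts hours (the income effect dominating), time shifts toward art, consistent with Throsby’s substitution claim.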

While for Throsby this data is an indication of the inadequacy of art as a profession to sustain its laborers, non-arts jobs often have the capacity to fuel artists’ work. Andy Warhol began his career as a commercial artist and hung his paintings in window displays; if not for his fluency in commercial advertising and commodity culture, his non-commercial work, if existent, would have been very different. Similarly, Edward Ruscha began his career as a commercial artist and layout designer; the influences of typography and graphic design are undeniable in his oeuvre. Many professional artists also teach art, which exposes them to the processes and learning curves of young artists and likely influences or strengthens their own art statements.

Production of Cultural and Economic Value

If an artist were a firm, “how to produce” would be a question of profit maximization. However, if it is assumed that artists are motivated by non-pecuniary rewards, artists in fact have two poles between which they must navigate. Cowen and Tabarrok have framed this situation as a decision between producing high and low art. While the terms of this frame are somewhat problematic from an art historical point of view, it is useful in beginning a dialogue about art production. In the model, it is assumed that “with increasing frequency, popular art is not critically acclaimed and critically acclaimed art is not popular.”6 If popularity is equated with economic value, artists must choose between producing low art for economic value or high art for cultural value. The relation of high art to low economic value is largely true of art at the moment of its production; take, for example, Leo Steinberg’s position on pop art in 1963: “The question ‘Is it art?’ is regularly asked of pop art, and that’s one of the best things about it... We get used to a certain look, and before long we say, ‘Sure it’s art; it looks like a De Kooning, doesn’t it?’ This is what we might have said five years ago, after growing accustomed to the New York School look. Whereas ten years earlier, an Abstract Expressionist painting, looking quite unlike anything that looked like art, provoked serious doubts as to what it was.”7 Art at its outset is not popular when it is progressive; it is only with time that critically acclaimed art becomes financially successful or popular. (Walter Benjamin also writes, “The conventional is uncritically enjoyed, while the truly new is criticized with aversion.”8) Artists essentially experience an economic penalty for producing what Cowen and Tabarrok call high art, and thus “artists who maximize utility will not maximize profits -- they will move beyond the point along the [art satisfaction] dimension where profits and artistic vision are in harmony.”9 Art satisfaction acts as the aforementioned reward that incentivizes artists to produce critically acclaimed art.
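The quoted result, that utility-maximizing artists move past the profit-maximizing point, can be written as a one-line first-order condition (a sketch in our own notation, not Cowen and Tabarrok’s): let q index how “high” the art is, with profit π(q) and art satisfaction s(q).

```latex
% Utility-maximizing artist (illustrative notation)
\max_q\; U\bigl(\pi(q),\,s(q)\bigr)
\quad\Longrightarrow\quad
U_\pi\,\pi'(q^*) + U_s\,s'(q^*) = 0
% If satisfaction is still rising at the optimum, s'(q^*) > 0,
% then \pi'(q^*) < 0: the chosen q^* lies past the profit peak.
```

The artist stops where the marginal loss of profit is just offset by the marginal gain in art satisfaction, which is precisely the “economic penalty” for high art described above.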

This model is particularly interesting in conversation with Throsby’s concentric circles model of cultural industries, which outlines how core creative industries (those that produce more heavily for cultural value than economic value) influence the ideas and productivity of the next levels of creative industries, and so on, until the least creative industries (producing solely for economic value) have been influenced by the core.10 In this model, the artist plays an essential role as the producer of cultural and economic value for the entire market economy. While an artist may only see cultural rewards for producing high art, this artist is more beneficial for the market economy in his production of ideas than the artist who sees economic rewards for producing low art. Low art here must be a euphemism for art devoid of constructive ideas: art that will appeal to a mass audience but is not critical or progressive, and thus cannot contribute insights to other sectors of the creative industries. Indeed, in any basic macroeconomic model of production with labor, capital, and ideas as inputs, diminishing returns to labor and capital (and the limited ability to increase labor indefinitely) leave ideas as the only viable source of long-run productivity growth. Incentivizing the production of ideas is difficult, however, as ideas are often non-excludable and thus the creator usually does not benefit financially as much as society does.
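The growth-accounting logic behind the claim that ideas are the only viable source of productivity growth can be illustrated with a textbook Cobb-Douglas production function (a standard sketch, not taken from the paper itself):

```latex
% Cobb-Douglas production with ideas A, capital K, labor L
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1
% Output per worker, with k = K/L:
y = A\,k^{\alpha}
% Diminishing returns (\alpha < 1) choke off growth from capital
% deepening alone; in a Solow-style steady state,
\frac{\dot{y}}{y} = \frac{1}{1-\alpha}\,\frac{\dot{A}}{A}
% so sustained productivity growth requires growth in ideas A.
```

Without growth in A, accumulating capital runs into diminishing returns and growth in output per worker peters out, which is why the non-excludability of ideas (and the resulting underincentive to produce them) matters for the whole economy.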

For Cowen and Tabarrok, the more reproducible the art form, the more likely it is that an artist will have to resort to low art; the reproducibility of the art means that each iteration of the work cannot hold much economic value, and thus the artist must appeal to a broader base in order to benefit.11 Essentially, the reproducible art form is too close to a commodity form to be able to function as high art. From these two models, it seems that artists should be producing individual works with limited reproducibility for cultural value rather than economic value in order to best serve their own interests and the interests of the economy. Production for the sake of economic value is neither rewarding for the artists nor productive for creative industries.



Determinants of Cultural and Economic Worth

Before qualifying the work of artists as entirely commercial or entirely cultural, it is important to understand how cultural and economic worth are assigned. What are the roles of the critic, spectator, art institution, and market? In Marcel Duchamp’s essay “The Creative Act,” the artist claims that “the creative act is not performed by the artist alone; the spectator brings the work in contact with the external world by deciphering and interpreting its inner qualification and thus adds his contribution to the creative act.”12 Thus some degree of public reception is required for a work to function as art. Furthermore, cultural value can only be attained if the work is received within the walls of an art institution; reception must occur in an art space where the work can be critically received. It is this second condition that Duchamp famously sought to prod with his readymades, which highlighted the fact that mass-produced utilitarian objects can be considered art in the walls of a gallery even if they are not made by an artist’s hand. In this world, the critic validates the cultural significance of a work, and the art institution inaugurates the artist into the tomes of historical relevancy. In other words, in order to be considered high art, art must either conform to the standards of high art or have enough of an impact to influence those standards.

A turn to low culture, then, is a rejection of the validating role of art institutions and critics. Do artists still desire both cultural and economic value from their work? Thomas Crow argues that “low culture” is incorporated into high art as an index of relevant culture, a feedback mechanism that allows high art to “displace and estrange the deadening givens of accepted practice.”13 Thus, an alternative source of cultural value can be garnered not from critics but from the masses, where art becomes a tool of political activism and communication. Walter Benjamin writes that reproducible art forms changed the relation of the masses to art, giving art significant cultural value by allowing “simultaneous collective reception” by mass audiences.14 The role of this art -- specifically film -- was to instruct the masses in how to deal with technological advances. Here the cultural value is connected to mass reception, and high art can be made for the masses.

But while escaping the art institution could mean embracing the art market or the masses, it could also mean changing the ways in which art is received and garners cultural value. Moving forward from Crow, Greenberg’s framing of minimalism is useful in understanding this shift: “the borderline between art and non-art had to be sought in the three-dimensional, where sculpture was, and where everything that was not art also was.”15 Minimalist work, like readymades, attempted to challenge what could be considered art, testing the critical limits of the art institution. But despite these attempts to challenge cultural valuation, there is still an underlying assumption that it is not the artist who assigns value. Following minimalism, Robert Smithson expressed his frustration with this condition of the artist: “The artist sits in his solitude, knocks out his paintings, assembles them, then waits for someone to confer the value, some external source. The artist isn’t in control of his value.”16 In his land art pieces, Smithson removed art from the gallery and institution entirely by creating art in nature, taking it out of the domain of art historical critique. Smithson is one of many postmodern artists who began to refuse traditional cultural value.

If artists refuse cultural value and traditional critique, how do they respond to economic value? Because “the value of art is socially determined,” art markets must rely on critical and popular opinion in order to determine the economic value of a work.17 For art dealers, there are many criteria that factor into the pricing of art:

“...for new artists the rule is that their work is compared to that of similar artists already introduced in the market, and based on this, it is then priced low at first; for an artist with a price history, trends are adopted and extrapolated. Moreover, size, medium, and the reputation have an impact. Price decreases are avoided. All this is combined with galleries committing to invest in their artists by organizing art shows and similar events. In this way galleries are able to influence prices, as price increases are anchored in artists’ reputation, sales and times.”18

But while galleries rely on art historical references and public sway, popular opinion of the value of art diverges considerably from primary market and critic opinions. Some producers of art are completely focused on the public opinion aspect of economic value. Peter Lik has made his fortune on photography by producing “mostly panoramic shots of trees, sky, lakes, deserts and blue water in supersaturated colors. Generally speaking, his buyers are not people who acquire the art of Andreas Gursky and Cindy Sherman...” but because they appeal to the masses, the works sell, and the photographer is entirely content to produce prints that -- he claims -- are financial investments.19 Documenting this divergence of popular and art world opinions of economic value, Vitaly Komar and Alexander Melamid polled Americans (and other citizens of democratic countries) on what they wanted to see in art in their People’s Choice series (1994-1997), and found that popular opinion (perhaps unsurprisingly) converged on landscape paintings with blue skies. Thus, the economic value of art is not firmly rooted and differs widely between critical and public audiences. It appears there are two forms of economic value: value in primary art markets and value in public commodity markets. One piece of artwork may have significantly different values across these two markets.

Craig Owens argues that the real consequence of these external valuations is that “the artist is estranged from his own production.”20 Cowen and Tabarrok were then asking the wrong question. If attaining cultural and economic value is not at all within the hands of the artist--if the artist is indeed outside of this determination, or if artists refuse the concept of cultural evaluation and must wait for economic value--then on what terms do artists produce?

Artist as Brand

Some artists have turned to mass production and commodity forms, echoing (and in some cases incorporating) mass culture. Here it is relevant again to discuss the career of Andy Warhol, who famously wrote in The Philosophy of Andy Warhol, “making money is art and working is art and good business is the best art.”21 Some artists have adopted capitalist practices of art production and are able to sell their reputations, which in turn allows them to use those reputations to sell their work. Their personas and work have become commercial items. Harold Rosenberg writes in his essay “The American Action Painters,” “What is a painting that is not... whatever else a painting has ever been... [but] the painter himself changed into a ghost inhabiting The Art World[?] Here the common phrase, ‘I have bought an O—’ (rather than a painting by O—) becomes literally true. The man who started to remake himself has made himself into a commodity with a trademark.”22

The commodification and branding of an artist and their work may not be an exclusively modern phenomenon, but the rise of mass culture and awareness of institutional power (to frame art and artists) have certainly led to an increased interest in self-branding and mass production. One of the most important figures in the artist-as-brand phenomenon is Takashi Murakami, whose 2008 Murakami exhibition explicitly denoted the merging of high art with commodity culture: “Successful artists have become recognizable celebrities and, arguably, trademarks and corporate entities,” wrote a critic of the show.23 Almost as an afterthought, in the last line of his review of another Takashi Murakami show, Artforum editor David Rimanelli writes, “Murakami happily turns out stuffed animals, watches, and so on. I bought two T-shirts at Bard; I really like them.” The production of inexpensive commodities--objects that aim to be nothing more than commodities--is a crucial aspect of Murakami’s process. It is a gesture of explicit economic desire, satisfying a mass appeal to own anything by Murakami. A further iteration of this art(ist) commodification is obvious in Murakami’s collaboration with Louis Vuitton to create handbags featuring his designs; the handbags became wildly popular, and Murakami in turn used the LV design on his own “high art” canvases.24 Through his commodity production and exchanges with mass culture, Murakami (as a brand) accrues economic interest, which in turn feeds both the cultural value of the work and his un-ironic mimicry of mass cultural production. The artist has taken control of his own cultural value by producing for economic value, albeit in a very different way than Smithson.25 Rosenberg states at the end of his essay on modern American art, “American vanguard art needs a genuine audience--not just a market. It needs understanding--not just publicity.”26 It seems that the brand of the artist and the commodification of his works (the market for art) are increasingly the only things driving the artist, rather than intellectual advances and public and private critical reception.

The Commodity Language

But while it is easy to read art commodification and artist branding as a submission of high art to mass culture and monetary gain, if what is driving artists is still non-pecuniary art satisfaction, there must be an alternative explanation. In an article on contemporary youth expression, Rob Walker writes, “Many of [these youth] clearly see what they are doing as not only noncorporate but also somehow anticorporate: making statements against the materialistic mainstream--but doing it with different forms of materialism. In other words, they see products and brands as viable forms of creative expression.”27 Contributing an alternative commodity to the market, in other words, is a productive way to subvert the market if the alternative commodity is distinguishable. It may be that, in the process of shifting creative responsibility from artist to spectator to institution, another level has been reached.

In the event of the death of the author, Barthes wrote, “it is language which speaks, not the author; to write is through a prerequisite impersonality... to reach that point where only language acts, ‘performs,’ and not ‘me.’”28 While language, representation, and abstraction were all at some point considered the language in which to speak, or the language that speaks for the actor, I suggest that the new language replacing the author is that of the commodity. The market structure is the new language through which artists must speak (produce).

Perhaps Andy Warhol was the first to explicitly embrace this framework by opening factories in which he relinquished authorship over his work, asking assistants to perform the tasks of silkscreening and production. But before Warhol, there were other indications of art moving in the direction of commodities. Minimalist artists’ desires for the future of painting to be devoid of medium-specific boundaries, for works to necessitate viewer interaction, and for “Specific Objects” to test the boundaries of what could be considered art were already moving towards a position of art as an art-object-turned-commodity. Hal Foster wrote, “in the moralistic charge that minimalism was reductive lay the critical perception that it pushed art toward the quotidian, the utilitarian, the non-artistic.” Critically, minimalism also pushed toward a public space and was “produced in a physical interface with the actual world.”29 Later, in 1967, Daniel Buren wrote that “the artist is hailed as art’s greatest glory; it is time for him to step down from this role he has been cast in or too willingly played, so that the ‘work’ itself may become visible, no longer blurred by the myth of the ‘creator,’ a man ‘above the run of the mill.’”30 Is not the ultimate relinquishing of the creator title found in the mass production of commodity goods with which the public can interact? While luxury products are a notable exception, most commodities do not have a single name attached to their production; in this economy, a true death of the author can be achieved. In creating a brand, artists may not be selfishly (or narcissistically) propagating their name, but instead allowing the product and work to take precedence over their identity; by speaking the commodity language, they are able to achieve what previous art forms could not. Creating commodities is a form of creative expression that simultaneously disavows the role of the creator and gives the creator control over value. Because commodities are outside the realm of art institutions, the artist no longer relies on external sources to evaluate their work (of course, the market now replaces the critic, but the market may be more easily swayed and certainly includes a wider range of consumer tastes). This, of course, contradicts the conclusion of Cowen and Tabarrok that the reproducible art form is too close to a commodity form to function as high art--Warhol’s silkscreens and Murakami’s sculptures are unquestionably vital pieces of art history, and they function as both commodity and high art.

Economics of Culture

If art is moving towards a more economic model of production, what will become of art’s position in developing culture? David Throsby seems to suggest that it will in fact give art a stronger position within culture: “Cultures may differ, but their evolution will be determined not by the ideas that they embody but by their success in dealing with the challenges of the material world in which they are situated. Such ‘cultural materialism’ has a clear counterpart in economics, especially in the ‘old’ school of institutional economics, where culture underpins all economic activity.”31

Thus, with the development of an economic system, culture (and therefore art) must by necessity involve itself with an economic framework to survive. While this could be read with a pessimistic edge, the integration of cultural and economic models may also yield benefits for both.

From an economic standpoint, viewing artworks as commodities with individual values, and artists as producers of commodities, is useful in analyzing how the art market interacts with other markets and how artists function within the economy (such as producers of A, ideas, within a concentric cultural industries model or macroeconomic model). From a cultural standpoint, “artists working alone are generally doing so in the expectation that their work will communicate with others; similarly, lone consumers of the arts are likely to be making some wider human connection.”32 The more works of art become commodities, the easier it will be for artists to reach other artists and mass audiences. If consumers are more able to share the experience of the artwork in collective reception or ownership, the mission of spreading cultural interest and ideas will be facilitated.

Does thinking about art in commodity terms undermine the aesthetic, political, and social power of art? There is evidence that mass-produced commodity art and technological reproduction may in fact facilitate political and social change. Walter Benjamin argued that technological reproduction eludes the concept of originality but can be useful in that it “can place the copy of the original in situations which the original itself cannot attain.”33 This process is clear in the works of successful (and critically acclaimed) artists across the twentieth century: John Heartfield’s work circulated on the covers of workers’ magazines (A.I.Z. (Workers’ Illustrated Magazine), 1929–34), Barbara Kruger’s work was placed on billboards and traditional advertising spaces (Untitled (We don’t need another hero), 1986), and Dan Graham’s work imitated a magazine article intended for print in a non-arts magazine (Homes for America, 1965). Commodity status gives art the opportunity to communicate with wider audiences, thus activating its revolutionary potential.

Commodification of art objects may also increase the ability of artists to work in art alone. If each artist essentially “becomes” their own brand, and if there is an equivalent of brand loyalty so that demand for each artist’s brand is relatively inelastic, each artist should be able to sell their own works as long as their work has market appeal. The problem of the winner-take-all market (gross excess supply) would be solved with additional demand for the commodities produced by previously non-commodity-focused artists. The aforementioned dual markets through which economic value is assigned (popular-appeal markets and primary art markets) would essentially become one, where every agent is a consumer. Funding for artists would no longer be so controversial--routed through subsidies, grants, and government sponsorships--because the public would become more responsible for artists’ economic support.

Is there a place for the avant-garde in this world of art commodities? If, as Henry Geldzahler suggested during the emergence of pop art, the avant-garde is defined in terms of audience (or rather the lack thereof), where the artist is a subversive and alienated figure, then “the new situation is different. People do buy art. In this sense too there is no longer, or at least not at the moment, such a thing as an avant-garde.”34 If the avant-garde is defined in terms of its relation to high and low art, where it acts as a “research and development arm of the culture industry” that is economically fueled and artistically obedient to the elite, this type of practice too will be overcome once artists gain control over their practices and economic valuations.35 The avant-garde may be relegated to the realm of subversive brand culture.

Conclusions

In framing the artist as a producer of work created to some degree for cultural or economic value, high art is equated with initially low economic value but high critical acclaim, and low art with low critical acclaim but potentially high economic value. Economic analysis is helpful in establishing that most artists do not in fact produce solely for the sake of monetary rewards. Upon further investigation of the routes to cultural and economic valuation, however, it is clear that the artist is removed from determining the value of his own work and increasingly rejects the notion that the art institution should determine the cultural value of art. The artist must therefore be producing for the sake of art satisfaction, removed from external valuation.

In response to commodity culture and the perceived death of the author, the birth of the artist as brand and commodity producer is apparent. Commodity culture becomes the new language through which an agent must speak. While the commodification of art may be lamentable to art critics, it is for the artist a way of regaining control over the value of their work and subverting the art institution. Mass production and distribution of commodities gives artwork the capability to effect sociopolitical change while putting forth the art object (rather than the artist) as the highest achievement. Further investigation should include an exploration of the extent to which it is true that popular, low art accrues greater economic value than high, critically acclaimed art, given the recent surge in contemporary art prices.

Citations

1 Menger, Pierre-Michel. “Artistic Labor Markets: Contingent Work, Excess Supply and
2 Throsby, David. “A Work-Preference Model of Artist Behaviour.” In Cultural Economics and Cultural Policies, 69–80. Springer, 1994.
3 Cowen, Tyler, and Alexander Tabarrok. “An Economic Theory of Avant-Garde and Popular Art, or High and Low Culture.” Southern Economic Journal 67, no. 2 (2000): 232–53.
4 Throsby, David. “Economic Analysis of Artists’ Behaviour: Some Current Issues / Le Comportement Économique des Artistes: De Nouvelles Questions.” Revue d’Économie Politique, January 2010, 47–56.
5 Throsby, “A Work-Preference Model of Artist Behaviour,” 73.
6 Cowen and Tabarrok, “An Economic Theory of Avant-Garde and Popular Art, or High and Low Culture,” 233.
7 Selz, Peter, et al. “A Symposium on Pop Art” (1963), in Pop Art: A Critical History, ed. Steven Henry Madoff (University of California Press, 1997), 103–117.
8 Benjamin, Walter. “The Work of Art in the Age of Its Technological Reproducibility: Second Version,” in Walter Benjamin: Selected Writings, Vol. 3, 1935–1938, ed. Michael Jennings (Harvard University Press, 2002), 110.
9 Cowen and Tabarrok, “An Economic Theory of Avant-Garde and Popular Art, or High and Low Culture,” 236.
10 Throsby, David. “The Concentric Circles Model of the Cultural Industries.” Cultural Trends 17, no. 3 (September 2008): 147–64.
11 Cowen and Tabarrok, “An Economic Theory of Avant-Garde and Popular Art, or High and Low Culture,” 240.
12 Duchamp, Marcel, Marc Dachy, Richard Hamilton, George Heard Hamilton, and Jean-Luc Fafchamps. The Creative Act. Sub Rosa, 1994, 2.
13 Crow, Thomas. “Modernism and Mass Culture in the Visual Arts,” in Modern Art in the Common Culture (Yale University Press, 1996), 3–36.
14 Benjamin, “The Work of Art in the Age of Its Technological Reproducibility: Second Version,” 103–105.
15 Fried, Michael. “Art and Objecthood” (1967), in Art and Objecthood: Essays and Reviews (University of Chicago Press, 1998), 152.
16 Owens, Craig. “From Work to Frame,” in Beyond Recognition (University of California Press, 1992), 122.
17 Schönfeld, Susanne, and Andreas Reinstaller. “The Effects of Gallery and Artist Reputation on Prices in the Primary Market for Art: A Note.” Journal of Cultural Economics 31, no. 2 (2007): 144.
18 Ibid.
19 Segal, David. “Peter Lik’s Recipe for Success: Sell Prints. Print Money.” The New York Times, February 21, 2015, sec. Business Day.
20 Owens, “From Work to Frame,” 122.
21 Warhol, Andy. The Philosophy of Andy Warhol: From A to B and Back Again. New York: Harcourt Brace Jovanovich, 1975.
22 Rosenberg, Harold. “The American Action Painters” (1952), in The Tradition of the New (McGraw-Hill, 1959), 37.
23 Siegel, K. “Takashi Murakami.” Artforum International, June 2003.
24 Ludlow, Arthur. “The Murakami Method.” New York Times Magazine, April 3, 2005.
25 Rimanelli, David. “Takashi Murakami.” Artforum International 38, no. 3 (1999): 135.
26 Rosenberg, “The American Action Painters,” 38–39.
27 Walker, Rob. “The Brand Underground.” New York Times Magazine, July 30, 2006.
28 Owens, “From Work to Frame,” 125.
29 Foster, Hal. The Return of the Real: The Avant-Garde at the End of the Century. Cambridge, MA: The MIT Press, 1996.
30 Owens, “From Work to Frame,” 130.
31 Throsby, David. Economics and Culture. Cambridge: Cambridge University Press, 2001, 10.
32 Throsby, Economics and Culture, 14.
33 Benjamin, “The Work of Art in the Age of Its Technological Reproducibility: Second Version,” 103.
34 Selz, “A Symposium on Pop Art,” 37.
35 Crow, “Modernism and Mass Culture in the Visual Arts,” 35.



Cash for the Cure
An Analysis of What Impacts NCI Funding and the Significance of That Effect

Zachary Neugut
Columbia University

The field of healthcare is constantly being taken to new horizons with the help of research funded by a range of organizations, but as this paper reveals, the value and attention that the everyday consumer with an Internet connection ascribes to a particular cancer is positively correlated with the funding that research on the disease may receive. The author points to some valid drawbacks in the research, such as the lack of data for rare cancers, the presence of certain omitted variables, and the potential reverse causation between funding and Google searches. Although the author is unable to establish true causation between Google searches and NCI or ACS funding, a correlation does exist, implying that worldwide Internet trends, or at least those related to the various types of cancer, may not be as short-lived as we think them to be. Understanding the impact of an everyday Google search through the lens of economics and healthcare makes this paper a particularly interesting read. - S.T.

Introduction

The U.S. National Cancer Institute (NCI) is the foremost institution for cancer research in the world. Not only does the NCI directly impact cancer research through the $5 billion it awards in grants each year, but it also impacts non-profit and pharmaceutical cancer research. Few studies have analyzed what factors impact how the NCI allocates its money. This paper suggests that the two main factors associated with NCI funding are the number of cases of the cancer site (type of cancer) and the volume of Google searches for the disease, which serves as a surrogate measure for public interest. However, while this paper investigates whether the NCI affects non-profit funding and finds that there is a direct relationship, it is uncertain whether the NCI is causing an increase in non-profit funding or whether the association between the two is the result of confounding factors (i.e., both are affected by the same outside factors).
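The paper's core claim is correlational, and the confounding caveat matters. As a hedged illustration of the kind of site-level check involved (the numbers below are entirely hypothetical, and the hand-rolled `pearson` helper is not from the paper), one could sketch:

```python
# Illustrative sketch: Pearson correlation between a hypothetical
# Google-search interest index and hypothetical NCI funding levels
# across cancer sites. All numbers are made up for demonstration.

import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical site-level data: search interest (0-100) and funding ($ millions)
search_index = [80, 55, 30, 65, 20, 40]
funding_mm = [600, 350, 120, 300, 90, 210]

r = pearson(search_index, funding_mm)
print(f"r = {r:.3f}")
```

A high r in a computation like this would be consistent with the paper's finding, but, as the author notes, it cannot by itself distinguish causation from a shared driver such as disease incidence.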


Background

NIH and NCI

The National Institutes of Health (NIH) now exists as the primary agency for medical research in the U.S., and indeed in the world. Yet the direction and mandate of the NIH has evolved over time through legislation. The Ransdell Act, which created the NIH in 1930, empowered it with the mandate to establish fellowships for the research of “basic biological and medical problems.” The 1944 Public Health Service Act gave the NIH the ability both to fund research through a research grants funding program and to perform clinical trials in-house at the NIH. Currently, approximately 80% of NIH funds are used to fund 50,000 competitive grants to over 2,500 research institutions, while only about 10% of its budget is used for research in intramural NIH laboratories. This is quite a significant dollar amount; whereas the NIH originally had only a $750,000 budget (which included building its headquarters), its budget has ballooned over the years to approximately $32 billion annually. The structure of the NIH has changed over time as well; the NIH now encompasses 27 different institutes and centers. Each of these divisions has its own budget and focuses on a different area of scientific research (NIH Website History Section, 4).

The National Cancer Institute (NCI), created by the National Cancer Act of 1937, became a division within the NIH in 1944. The NCI was mandated to provide “research and training needs for the cause, diagnosis, and treatment of cancer.” The NCI remained relatively unchanged until 1971, when President Nixon signed the National Cancer Act of 1971 in his campaign to win the “War on Cancer.” This act increased the NCI’s budget and also expanded the NCI director’s powers. For example, it gave the NCI director the ability to submit an annual budget, called the Professional Judgment Budget (colloquially, the Bypass Budget), directly to the President, bypassing the approval of the NIH director. The NCI has the largest budget of the 27 institutes under the NIH and is the only one that budgets in this way, making comparisons of funding for cancer and other diseases difficult at best (NCI Website About NCI section).
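The budget figures above imply a rough average award size; a back-of-the-envelope check (using the approximate numbers quoted in the text, not official NIH budget tables) can be sketched as:

```python
# Back-of-the-envelope check on the NIH figures cited above
# (approximate values from the text, not official budget data).

nih_budget = 32e9    # ~$32 billion annual budget
grant_share = 0.80   # ~80% devoted to competitive grants
num_grants = 50_000  # ~50,000 competitive grants

avg_grant = nih_budget * grant_share / num_grants
print(f"Implied average grant size: ${avg_grant:,.0f}")  # ~$512,000 per grant
```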

The NIH has been quite successful in its research over the years, although it is extremely difficult to quantify that success. Five Nobel Prizes have been awarded to researchers in the NIH intramural programs; more than 150 drugs, vaccines, or new uses of existing drugs owe varying degrees of their success to the NIH; and there have been countless other medical successes as well. Despite all these successes, when it comes to cancer the NCI has not been nearly as successful, as the death rate for cancer (adjusted for age) fell only 5% from 1950 to 2009 (NIH Website About NIH section).

Grant Proposal System

The NIH is a multi-faceted organization and does not have a single mission. The NIH lists itself as having several goals, one of which is to “foster fundamental creative discoveries, innovative research strategies, and their applications as a basis for ultimately protecting and improving health.” As mentioned above, the NIH primarily uses its budget to achieve these goals through grant applications. The NIH employs various types of grants in order to award money to scientists. The most common research grant (R series) is the R01, which is awarded for a “discrete, specified, circumscribed research project,” yet other types of grants also exist with different parameters for funding (NIH Website Grants section).

According to Harold Varmus, the former Director of the NIH (as well as the former director of the NCI), the NIH uses five criteria to assess how to allocate its resources. They are as follows (HHS Testimony):
1. Employ a peer review process.
2. Pursue “opportunities that offer the best prospects for new knowledge and for improving the prevention and treatment of disease.”
3. Maintain a diverse research portfolio across all types of research.
4. Address diseases based on the “burden of disease,” which includes incidence, prevalence, mortality, and morbidity, among other criteria.
5. Maintain infrastructure to conduct research.

These five points are all integral to determining how to allocate money on a grant-by-grant basis. However, in terms of deciding how much money to allocate towards research on different types of cancers, the first criterion, employing a peer review process, is not so relevant, since all cancer sites use the same grant process. Additionally, since infrastructure must be maintained for all cancers, the fifth point is also irrelevant in analyzing how the NCI allocates money across different cancers. Instead, based on Varmus’s statement, the main components should be funding diseases that have higher burdens of disease and more potential for research, as well as making sure to diversify the portfolio of disease research.

The NCI also uses specific criteria to assess its grant proposals. It analyzes grants for factors such as “scientific merit, potential impact, likelihood of success… public health significance, scientific novelty, and overall representation of the research topic within the NCI portfolio.” Thus, when determining how much money to allocate to each disease, the NCI seems to use the same approach as the overall NIH: investing in diseases with greater health significance and potential for research, with some thought also given to the overall representation of the NCI portfolio. Additionally, the NCI claims not to use “predetermined targets for a specific disease area or research category,” though it allocates very consistent amounts of money to research for different types of cancer every year (NCI Website About NCI section).

Other Cancer Research

Although the NCI is the main vehicle the government utilizes to finance cancer research, other government institutions also conduct cancer research. The Centers for Disease Control (CDC) receives money to “develop, implement, and promote effective cancer prevention and control practices.” The Department of Defense (DOD) also receives money to research specific types of cancer. For example, the DOD has allocated $120 million towards its Breast Cancer Research Program (BCRP) and $10.5 million towards its Lung Cancer Research Program (IASLC website). This money can be quite significant, as the BCRP budget is approximately 20% of the NCI’s research funding for breast cancer. Other government institutions fund cancer research as well, though typically in much smaller amounts; for example, the state of California spends approximately $10 million annually on cancer research (“How Much Money Is Spent on Cancer Research”).

Charitable donations also account for a large share of the money spent on cancer research, as over $1 billion raised from private donations is spent on cancer research each year. For certain cancers this can be extremely substantial: there are approximately $3 billion of private donations to breast cancer charities per year (though not all of this money goes towards research), and the largest three breast cancer charities spent $125 million on research in 2012, more than the DOD BCRP budget (Breast Cancer Consortium).

Pharmaceutical companies also spend large sums on cancer research. Although it is hard to find a recent estimate, pharmaceutical companies spent an estimated $1.4 billion on cancer research in 1997. They are still investing significant sums, as there are currently nearly 1,000 cancer drugs in development (Pharma Times article entitled “Nearly 1000 Cancer Drugs In Development in U.S.”).

NIH and NCI Controversy

Virtually every governmental funding program receives some criticism that its funding is not allocated efficiently, and the NIH is no exception to this rule. While a complete history of complaints about funding might necessitate an entire paper, perhaps one of the most notable controversies was the 1980s AIDS epidemic, when claims were made that the NIH did not fund AIDS research fast enough. While the details of this controversy are also beyond the scope of this paper, what is interesting is that one of its effects was that AIDS activist groups tried to increase funding for AIDS research by being extremely vocal in demanding more money. This strategy has subsequently been emulated by many other disease advocacy groups, most notably with regard to breast cancer research. Overall, this caused funding allocations to become more influenced by advocacy groups. Professor Sarah Fox of UCLA critiqued this, saying that, “Since we are a crisis-oriented society, the people who make the most noise get the most publicity. Interest groups do count as opposed to data and rationality” (NIH website History section). A case in point was the allocation by Congress of $7 million in 1994, and in subsequent years, to study elevated breast cancer rates in the northeastern United States. This was precipitated by a perceived high rate of breast cancer on Long Island, NY, and the formation of vociferous advocacy groups that obtained the attention of Sen. Alfonse D’Amato, then the senator from New York and a resident of Long Island. He sponsored the special funding for this research and became a champion of the breast cancer advocacy community.

Over the years, there has also been criticism of how the NCI allocates its budget. A 2013 article in The Atlantic criticized the NCI for not spending enough money on research regarding pediatric cancer (“Our Disproportionate Focus on Adult Over Pediatric Cancer Research”). Joe McDonough, the founder of the B+ Foundation, was quoted in that article as saying that the government does not focus enough on pediatric cancers because they are rare, and should instead factor in the number of years of potential life lost. Additionally, there has also been a critique that “Research that does get funding is less risky… because the NCI doesn’t want to spend money on projects that don’t have a sure outcome… which means sometimes investigators are less likely to take high-risk projects… but high-risk projects could have good results.”

Policy Question

It is quite apparent that a significant amount of money is spent on cancer research outside of NCI funding. Yet there is arguably no organization more important to cancer research than the NCI. First, the NCI’s budget is much larger than that of any other individual American institution funding cancer research. Additionally, as discussed above, the NIH itself focuses on research dealing with “basic biological and medical problems” (NIH website History section). Because of this focus on basic research, NCI research is often the backbone that spurs further research by other institutions, oftentimes private pharmaceutical companies. Thus, “whether an idea originates in a university laboratory or starts with basic product research carried out in the private sector, important findings percolate through the entire scientific community” (NIH website History section). Therefore, the distribution of NCI funding clearly has a significant impact on overall cancer research allocations.

The fact that the NCI has such a fundamental role in shaping the direction of future cancer research is of key interest to various stakeholders. While the general public should want what benefits society the most, and therefore an allocation that maximizes the expected health of society, other stakeholders, such as patient advocacy groups, researchers, doctors, patients, pharmaceutical companies, and NCI employees, might have motivations other than general public health (e.g., a relative of someone who had a specific cancer may want more funding for that disease for sentimental reasons, or a pharmaceutical company may want more funding to flow towards whatever research would lead to the most expected profit for the company) that could cause them to steer the budget away from the best interests of the general public. Determining the ideal allocation, according to the interests of the general public, is beyond the scope of this paper. Instead, this paper analyzes what factors contribute towards NCI allocations to different cancer sites, as well as the magnitude of these various factors. Furthermore, it also analyzes whether the NCI allocation has an effect on non-profit cancer research.

Data

NIH and Non-Profit Funding

In order to assess the impact of factors upon NCI funding, the first data that needed to be collected was the level of NCI funding for various diseases. Fortunately, the NIH website has a section of Funding for Various Research, Condition, and Disease Categories (RCDC). This report details the levels of funding for various diseases (recorded in millions of dollars) across the years 2011 to 2016.
Because other variables could not be calculated for 2016, data from that year were excluded from the analysis. Additionally, data for 2015 are an estimate, though they should be a very accurate one. The cancers included in our dataset were brain cancer, breast cancer, cervical cancer, colorectal cancer, Hodgkin's disease, liver cancer, lung cancer, lymphoma, ovarian cancer, pancreatic cancer, prostate cancer, and uterine cancer. The only cancer in the NIH funding dataset not included in our analysis is neuroblastoma, which was excluded because it is extremely rare and therefore lacks accurate data for certain measures of its burden of disease.

A few caveats are important to note. First, none of the cancer sites being analyzed is rare, with the minimum for any cancer being approximately 9,000 cases per year, so no conclusions that arise can be extrapolated to orphan diseases. Second, the NIH does not expressly budget by category, so these numbers are neither mutually exclusive nor exact. Third, Hodgkin's disease is a subset of lymphoma; Hodgkin's disease might therefore unduly influence the dataset, as it is effectively double counted.

In order to obtain an estimate for non-profit research, we analyzed the budget of the American Cancer Society (ACS) from 2011 to 2014. The ACS is the single largest private foundation funder of cancer research in the U.S. and the largest funder after the NCI itself. This information was recorded in dollars, and data were available for all cancers that the NIH accounted for.

Determining Which Covariates to Include
After choosing these dependent variables, it was important to determine which potentially relevant factors influencing how the NCI allocates money (the methodology for obtaining these data is outlined below) would be worth examining. As noted above, Harold Varmus, the former director of the NIH, said that the burden of disease was a factor in determining budget allocations; we therefore included data on cases and deaths per disease. Additionally, The Atlantic claimed that NIH funding was unduly affected by prevalence and research potential rather than potential years lost, so we included data on 5-year survival rates, median ages, and the presence of childhood cancer in order to test this claim. Last, in order to test whether overall public interest in a disease affects funding, we obtained data on Google searches to serve as a surrogate measure of public interest.

Burden of Disease
The data measuring the burden of disease were acquired from Cancer.org's "Cancer Facts & Figures" annual reports, which provide data on the estimated numbers of new cases and deaths. The reports also indicate which cancers affect children. Only brain cancer, lymphoma, and Hodgkin's disease have a significant number of childhood cases in the dataset, so we assigned a dummy variable for whether the cancer affects children. The reports also provide 5-year survival rates by cancer site, measured on a five-year time lag (i.e., data from the 2013 report reflect the five-year survival rate from 2002-2008). However, there were no data for brain cancer, Hodgkin's disease, and lymphoma. We obtained data for those from the SEER (Surveillance, Epidemiology, and End Results Program) Cancer Statistics Review, which is where the "Cancer Facts & Figures" annual reports obtain their data. The SEER Cancer Statistics Review only had data for 2015, so we assumed that the 5-year survival rate was constant over 2011-2015 for brain cancer, Hodgkin's disease, and lymphoma. While this is obviously not ideal, 5-year survival rates rarely changed over time for the other cancer sites, and the rate reported each year reflects survival from several years beforehand, so this should have a negligible effect on the validity of the analysis.
We also obtained data on the median age of cancer patients at diagnosis by primary cancer site from the SEER Cancer Statistics Review. However, these data were recorded over the years 2007-2011, so we kept the median age constant across the dataset. In brief, none of the data changes substantially over time. This paper therefore does not seek to determine how a change in a factor over time influences a change in NIH funding, as there are few non-negligible changes; instead, it mainly analyzes how these factors affect NIH funding in a given year. Because of this, while the median ages and 5-year survival rates might be slightly off, this should not affect the analyses contained in this paper.

Public Interest
Last, in order to gauge public interest in each cancer, we acquired the number of Google searches for different types of cancer. Google Trends provides data on "how often a term is searched for relative to the total number of searches, globally" (Phonofile website). Breast cancer searches were always the highest number of searches in a year, so each Google Trends score is calculated as the average number of searches for a given disease as a percentage of the highest day of breast cancer searches in that year. While this number might be hard to interpret directly, it still gives a good measure of the relative number of searches. For example, Google Trends scores of 20 and 5 for two cancers in a given year indicate that the former received 4 times as many Google searches as the latter.
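As an illustration of this normalization, here is a minimal Python sketch. The search counts are hypothetical stand-ins, not actual Google Trends data; the point is only the arithmetic of expressing each series relative to the peak breast-cancer search day.

```python
# Sketch (hypothetical numbers): a Google Trends-style relative score,
# where each disease's average searches are expressed as a percentage of
# the highest single-day breast cancer search count in that year.

def trend_score(avg_searches, peak_breast_cancer_searches):
    """Average searches for a disease as a % of the peak breast cancer day."""
    return 100.0 * avg_searches / peak_breast_cancer_searches

peak = 50000                        # hypothetical peak-day count
lung = trend_score(10000, peak)     # score of 20
ovarian = trend_score(2500, peak)   # score of 5

# A score of 20 vs. 5 means 4 times as many searches:
print(lung / ovarian)  # 4.0
```

Because both scores share the same denominator, their ratio recovers the relative search volume even though the scores themselves are not raw counts.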

In brief, we compared searches for each type of cancer followed by the word "Cancer". However, we used "Colon Cancer" for colorectal cancer, "Lymphoma" for lymphomas (as opposed to "Lymphoma Cancer"), "Hodgkin's Disease" for Hodgkin's disease (as opposed to "Hodgkin's Lymphoma" or "Hodgkin's Cancer"), and "Endometrial Cancer" for uterine cancer.

Methods
Overview of Model Selection Process
In order to test different factors to answer our policy question, three models are necessary. The first is a model that establishes which factors affect NCI funding. Second, we need a model that establishes which factors affect ACS funding, which will allow us to see whether the same factors affect NCI and ACS funding. Third, we need a model that tests whether NCI funding influences ACS funding. Note that because the ACS data do not include 2015, any model with ACS funding uses data only from 2011-2014.

Ordinarily, a potential way to select a model would be to create a full model using all potential covariates and then eliminate any non-significant covariates (potentially iterating this elimination process) until all remaining covariates are significant. However, our data pose a significant problem for this process. The data contain 60 observations, comprising 12 different cancers over a five-year span. Because there is little variance within each cancer over the five years, the model should to a certain extent behave like a model with only 12 data points (each recorded 5 times) rather than 60. With 60 truly independent observations, fitting a full model with all 6 covariates would not pose an overfitting problem; but since the dataset functions more like one with only 12 observations, overfitting becomes a concern. Overfitting occurs when a model has too many parameters relative to the number of observations and fits white noise that will be non-predictive, rather than producing a model with any predictive quality.
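A minimal demonstration of this failure mode, using synthetic noise rather than the paper's data: with as many free parameters as observations, ordinary least squares fits even pure noise exactly.

```python
# Sketch: 3 observations of pure noise are fit *exactly* by a
# 3-parameter quadratic, illustrating how parameter count alone can
# manufacture a perfect fit. Stdlib only; numbers are illustrative.
import random

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 system (no pivot safeguards)."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for j in range(n):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[i][n] for i in range(n)]

random.seed(0)
xs = [1.0, 2.0, 3.0]
ys = [random.gauss(0, 1) for _ in xs]    # pure noise "outcome"
A = [[1.0, x, x * x] for x in xs]        # intercept, x, x^2 columns
coef = solve3(A, ys)

fitted = [coef[0] + coef[1] * x + coef[2] * x * x for x in xs]
residuals = [y - f for y, f in zip(ys, fitted)]
print(all(abs(r) < 1e-9 for r in residuals))  # True: R^2 = 1 on noise
```

The "model" has zero residuals and an R-squared of 1 despite there being no relationship at all, which is why significance found in a saturated model should be treated with suspicion.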
In an extreme case, a model with as many parameters as observations can artificially create a perfect fit, even if there is no relationship between any of the parameters and the dependent variable. In fact, we created a model with all 6 covariates and NCI funding (see Appendix: Model 1) and found statistical significance (defined in this paper as any relationship with a p value less than 0.05) for every covariate besides median age (which, at a p value of 0.07, is quite close to the cutoff). This model therefore seems to suffer from overfitting due to the high number of parameters relative to the effective number of observations.

Because of these problems, we developed the model with a different approach. We first analyzed which non-dummy covariates correlate with NCI funding in a simple linear regression (dummy variables were excluded at this stage, since they are unlikely to show a correlation without controlling for other factors). We then built a full model using only the covariates with statistically significant relationships to NCI funding in a simple linear regression, eliminated any non-significant variables from that model, and tested whether to include the childhood-cancer dummy variable. Once we had this final model, we repeated the process, this time factoring in time effects. We then repeated the entire process for ACS funding. Last, we developed a model testing whether NCI funding significantly affects ACS funding when controlling for the other factors that affect ACS funding.

NCI Model Selection
The first step was to analyze each of the 5 non-binary covariates for statistically significant relationships with NCI funding. The results of that analysis are below:

Variable           P Value
Cases              0.0016
Deaths             0.0528
5-year Survival    0.0493
Age                0.0418
Google             0.0000

We then created a model using the covariates that had a significant relationship with NCI funding (see Appendix: Model 2). Only cases and Google searches were significant, so we created a new model (see Appendix: Model 3) using those covariates while also checking whether the dummy variable for cancers that affect children was significant.
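The screening step above can be sketched as follows. This is not the paper's actual code or data: the covariates are synthetic stand-ins, and the p-value uses a normal approximation to the t distribution so the sketch stays stdlib-only.

```python
# Sketch: regress the outcome on each candidate covariate alone and keep
# those significant at p < 0.05, as in the screening step described in
# the text. Synthetic data; normal approximation for the p-value.
import math
import random

def simple_ols_pvalue(x, y):
    """Two-sided p-value for the slope of a simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    alpha = my - beta * mx
    resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)      # residual variance
    se = math.sqrt(s2 / sxx)                      # SE of the slope
    t = abs(beta / se)
    # normal approximation: p = 2 * (1 - Phi(|t|))
    return 2 * (1 - 0.5 * (1 + math.erf(t / math.sqrt(2))))

random.seed(1)
n = 60
signal = [random.gauss(0, 1) for _ in range(n)]   # truly related covariate
noise = [random.gauss(0, 1) for _ in range(n)]    # unrelated covariate
y = [2 * s + random.gauss(0, 1) for s in signal]

candidates = {"cases": signal, "age": noise}
kept = [name for name, x in candidates.items()
        if simple_ols_pvalue(x, y) < 0.05]
print(kept)  # the informative covariate survives the screen
```

In practice one would use a proper t distribution (e.g. via a statistics package), but the keep-if-significant logic is the same.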
The dummy variable for child cancer did not prove significant, so we were left with a model (see Appendix: Model 4) with cases and Google searches as its two covariates. This model seems like a good fit, with an R-squared of .76 and an F test p value of 0.0000.

We then repeated our analysis, this time factoring in time effects because the data are panel data. The p value of every non-dummy covariate, factoring in time effects, is displayed below:

Variable           P Value
Cases              0.0000
Deaths             0.0616
5-year Survival    0.0578
Age                0.0496
Google             0.0000

We then analyzed the significant variables in a regression model (see Appendix: Model 5), and again found only cases and Google searches statistically significant. We tested whether the childhood-cancer dummy variable was significant (see Appendix: Model 6), but it proved not to be, leaving us with a model using cases and Google searches to predict NCI funding while factoring in fixed time effects (see Appendix: Model 7).
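One standard way to implement fixed time effects is to demean every variable within each year, which is equivalent (for the slope coefficients) to adding a dummy variable per year. A stdlib-only sketch with illustrative numbers, not the paper's data:

```python
# Sketch: the "within" transformation for time fixed effects. Demeaning
# each variable by its year mean before running OLS absorbs anything
# common to all cancers in a given year (e.g. overall budget growth).
from collections import defaultdict

def within_year_demean(values, years):
    """Subtract each year's mean from that year's observations."""
    totals, counts = defaultdict(float), defaultdict(int)
    for v, yr in zip(values, years):
        totals[yr] += v
        counts[yr] += 1
    means = {yr: totals[yr] / counts[yr] for yr in totals}
    return [v - means[yr] for v, yr in zip(values, years)]

years = [2011, 2011, 2012, 2012]
funding = [100.0, 200.0, 110.0, 230.0]   # illustrative funding levels
demeaned = within_year_demean(funding, years)
print(demeaned)  # [-50.0, 50.0, -60.0, 60.0]
```

Applying the same transformation to the dependent variable and every covariate, then running ordinary OLS on the demeaned series, yields the time-effects slope estimates.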

A regular regression model with time effects operates under the assumption that the errors are homoskedastic, meaning that the error variance is constant across observations. However, this might not be the case. We therefore tested whether the fixed time effects model remained significant after allowing for heteroskedastic errors (see Appendix: Model 8), and both variables remained significant under these more robust errors.

ACS Model Selection
We then sought to determine which factors affect ACS funding. We analyzed each of the 5 non-binary covariates for statistically significant relationships. The results of that analysis are below:

Variable           P Value
Cases              0.0000
Deaths             0.0141
5-year Survival    0.1307
Age                0.0502
Google             0.0000
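The robust-errors check described above can be sketched for the simple-regression case: the usual homoskedastic standard error of the slope alongside the White/HC0 heteroskedasticity-robust version. The data below are synthetic, not the paper's.

```python
# Sketch: classical vs. White/HC0 robust standard error for a simple
# regression slope. Robust SEs weight each squared residual by its own
# regressor deviation instead of assuming one common error variance.
import math

def slope_ses(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [xi - mx for xi in x]
    sxx = sum(d * d for d in dx)
    beta = sum(d * (yi - my) for d, yi in zip(dx, y)) / sxx
    alpha = my - beta * mx
    e = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
    classic = math.sqrt(sum(r * r for r in e) / (n - 2) / sxx)
    robust = math.sqrt(sum((d * r) ** 2 for d, r in zip(dx, e)) / sxx ** 2)
    return classic, robust

x = [1, 2, 3, 4, 5, 6]
y = [1.1, 1.9, 3.2, 3.8, 5.4, 5.6]       # illustrative values
classic, robust = slope_ses(x, y)
print(classic > 0 and robust > 0)         # both are well-defined here
```

If a coefficient stays significant when the classical SE is replaced by the robust one, the finding is not an artifact of the constant-variance assumption, which is the check the paper runs in Models 8 and 15.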

We then created a model using the covariates that had a significant relationship with ACS funding (see Appendix: Model 9). Only cases and Google searches were significant in the new model, so we created a model (see Appendix: Model 10) using those covariates while also checking whether the dummy variable for cancers that affect children was significant. The dummy variable did not prove significant, so we were left with a model (see Appendix: Model 11) with cases and Google searches as its two covariates. This model seems like a good fit, with an R-squared of .76 and an F test p value of approximately 0.

We then repeated our analysis of ACS funding, this time factoring in time effects because the data are panel data. The p value of every non-dummy covariate, factoring in time effects, is displayed below:

Variable           P Value
Cases              0.0000
Deaths             0.0174
5-year Survival    0.1430
Age                0.0579
Google             0.0000

We then analyzed the significant variables in a regression model (see Appendix: Model 12), and again found only cases and Google searches statistically significant. We tested whether the childhood-cancer dummy variable was significant (see Appendix: Model 13), but it proved not to be, leaving us with a model using cases and Google searches to predict ACS funding while factoring in fixed time effects (see Appendix: Model 14). We again tested whether this fixed time effects model remained significant once factoring in heteroskedastic errors (see Appendix: Model 15), and both variables remained significant.

Since we also want to analyze how NCI funding affects ACS funding, we first built a model using NCI funding as the independent variable and ACS funding as the dependent variable (see Appendix: Model 16). We then took the best model for predicting ACS funding (see Appendix: Model 15), with cases and Google searches factoring in time effects, and added NCI funding to gauge its significance with respect to ACS funding (see Appendix: Model 17).

Results
Overall, cases and Google searches seem to be good predictors of NCI funding (see Appendix: Model 8). The R-squared value is .77, which is very high for only 2 factors predicting funding. Additionally, according to an ANOVA analysis (see Appendix: ANOVA of Model 8), cases predicted about twice the variance that Google searches did, yet both remained integral predictors of NCI funding. The accompanying graph plots cases, Google searches, and NCI funding against the regression plane of the time-effects model (see Appendix: Model 8); red lines connote positive residuals, meaning that observed NCI funding exceeds expected NCI funding, and blue lines connote negative residuals. Overall, these two variables are very good predictors of NCI funding.

When analyzing ACS funding, cases and Google searches were likewise good predictors (see Appendix: Model 15). The R-squared value is .76, which is quite high for only 2 factors predicting funding. According to an ANOVA analysis (see Appendix: ANOVA of Model 15), cases predicted about three times the variance that Google searches did, yet both remained integral predictors of ACS funding. The accompanying graph plots cases, Google searches, and ACS funding against the regression plane of the time-effects model (see Appendix: Model 15), with residuals colored as before. Overall, these two variables are very good predictors of ACS funding.

Third, NCI funding was an excellent sole predictor of ACS funding (see Appendix: Model 16), with a p value of approximately 0.0000 and an R-squared of .85. This model by itself does not show whether there is a direct relationship between NCI funding and ACS funding, or whether the same factors affect both. However, when NCI funding is added to the model we used to predict ACS funding (see Appendix: Model 17), NCI funding and cases remain statistically significant while Google searches no longer does.
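The ANOVA-style comparison can be sketched by fitting nested models and measuring how much residual sum of squares each added covariate removes. This is an illustration of the method on synthetic data, not a reproduction of the paper's ANOVA; the `cases` and `google` series below are stand-ins.

```python
# Sketch: sequential variance decomposition. Fit intercept-only, then
# + cases, then + google, and compare the drop in residual SS at each
# step. OLS via normal equations, stdlib only; data are synthetic.
import random

def ols_rss(X, y):
    """Residual sum of squares from OLS via normal equations."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)]
           for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    M = [XtX[i][:] + [Xty[i]] for i in range(k)]      # Gauss-Jordan
    for i in range(k):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for j in range(k):
            if j != i:
                f = M[j][i]
                M[j] = [a - f * b for a, b in zip(M[j], M[i])]
    beta = [M[i][k] for i in range(k)]
    fit = [sum(b * v for b, v in zip(beta, row)) for row in X]
    return sum((yi - fi) ** 2 for yi, fi in zip(y, fit))

random.seed(2)
n = 60
cases = [random.gauss(0, 1) for _ in range(n)]
google = [random.gauss(0, 1) for _ in range(n)]
y = [2 * c + g + random.gauss(0, 0.5) for c, g in zip(cases, google)]

rss0 = ols_rss([[1.0] for _ in range(n)], y)                     # intercept
rss1 = ols_rss([[1.0, c] for c in cases], y)                     # + cases
rss2 = ols_rss([[1.0, c, g] for c, g in zip(cases, google)], y)  # + google
ss_cases, ss_google = rss0 - rss1, rss1 - rss2
print(ss_cases > ss_google)   # cases explains the larger share here
```

The two incremental sums of squares play the role of the ANOVA rows the paper reports: the covariate with the larger drop "predicts more of the variance," exactly the sense in which cases outpredicted Google searches.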


Discussion
Factors Impacting NCI and ACS Funding
The results of our analysis show an association between cases and Google searches and both NCI and ACS funding. There are, however, a few potential problems with this model. Several of the variables have multicollinearity issues, meaning that the covariates are correlated with each other (for example, deaths are correlated with both cases and 5-year survival rates), and problems might have arisen because of this. Additionally, it is quite possible that funding itself affects some or all of the covariates to a certain extent. For example, diseases with more funding might have more research done, which produces more publications on the disease and thereby more Google searches. This problem exists for all the covariates to various extents, so it is unclear whether these relationships are causal or confounded.

There are also problems with the dataset that was used. The dataset only includes cancers for which the NIH has a line item, which means that no rare cancers are included (the smallest has 8,000+ cases). Thus, while this analysis can speak to the effect of cases for cancers that affect a large number of people, it cannot be extrapolated to rare cancers. Additionally, very few cancers in our dataset affect children, and the ones included might be outliers; we might see a relationship if more childhood cancers were included in the dataset.

There are also other potential omitted variables. There have not been many studies about which factors influence NCI budget allocation, nor any major policy change, because the NCI claims not to use "predetermined targets for a specific disease area or research category." Therefore, there was no past information on which to base our choice of covariates, and it is possible that we excluded a relevant one. We were also unable to obtain data for some potential variables. For example, the NIH and NCI both claim to factor in the potential for research when allocating funding, yet we have no proxy for which diseases have more research potential, such as the number of grant proposals or patent applications for each type of cancer. More research would therefore be needed to ensure that no relevant variables are omitted.

There may also be factors that are impossible to quantify and model. As the chart (which uses 2015 numbers) shows, lung cancer has the second lowest funding per case and the lowest funding per death. The reason for this is probably at least partially that lung cancer incidence would drop 85% if people were to stop smoking ("Where Do the Millions of Cancer Research Dollars Go Every Year"). Whether lung cancer receives less funding because doctors see less scientific potential for new discovery, because the NCI considers it less worthy to fund research on a cancer that people largely bring upon themselves, or because of some other factor is unclear. Yet the fact that lung cancer deaths would drop substantially if people stopped smoking clearly has some unquantifiable effect on its funding. Other factors would likewise be hard to quantify and, by extension, to model.
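The funding-per-case and funding-per-death comparison behind that chart is simple arithmetic. A sketch with hypothetical round numbers, not the chart's actual values:

```python
# Sketch (hypothetical figures): normalize a cancer site's research
# budget by its burden-of-disease counts, the comparison behind the
# lung cancer observation above.

def funding_ratios(funding_millions, cases, deaths):
    """Dollars of research funding per new case and per death."""
    dollars = funding_millions * 1e6
    return dollars / cases, dollars / deaths

# Illustrative inputs only: $250M budget, 220,000 cases, 158,000 deaths.
per_case, per_death = funding_ratios(250.0, 220000, 158000)
print(round(per_case), round(per_death))  # 1136 1582
```

A disease can rank low on one ratio and high on the other depending on its fatality rate, which is why the paper reports both.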
Despite these potential problems, this analysis may provide interesting details about what drives NCI funding for different cancer sites. The only two significant factors we discovered, after controlling for other factors, are cases and Google searches. Deaths, 5-year survival rates, median age, and childhood cancer, on the other hand, seem not to affect funding. While we do not seek to evaluate what criteria should drive funding, this supports the Atlantic author's complaint that the NCI gives additional funding to cancers that are more common but does not factor in the age of the affected population.

How NCI Funding Impacts ACS Funding
We also established that the same factors that impact NCI funding impact ACS funding. Additionally, there is some correlation between NCI funding and ACS funding, and this relationship persists even after controlling for cases and Google searches. However, it is quite unclear whether NCI funding has any causal effect on ACS funding; NCI funding might instead be correlated with ACS funding because of omitted variable bias. Furthermore, this relationship is only a correlation, and ACS funding might have an effect on NCI funding. Another problem is that we are using ACS funding as a proxy for overall non-profit funding of research. This is probably not a perfect proxy, as the ACS is one of the few charities devoted to a wide variety of cancer sites and must therefore decide how to allocate its funding across them. Most people instead donate to charities that focus on one type of cancer, so more research should be done on how much money is donated to each type of cancer and whether NCI funding affects that.

Future Research
While we have evaluated to some extent how NCI funding affects non-profit funding, we have not analyzed how NCI funding affects pharmaceutical spending. We would assume that pharmaceutical companies typically build off the basic research that the NCI conducts, pursuing the research that seems most likely to lead to viable drug development. If that were the case, we would expect more research on the cancer sites in which the NCI invests heavily, after controlling for other factors such as the potential markets for drugs. However, this might not be the case, as pharmaceutical and biotechnology companies do some basic research in the private sector as well, so more work would be needed to determine whether this holds.

Conclusion
We have established that there is an association between cases and Google searches and NCI funding. This could mean that the main factors determining NCI allocations to various diseases are how many people get the cancer and the public's interest in the disease. We also established that cases and Google searches potentially impact ACS funding as well. Furthermore, ACS funding has a highly significant relationship to NCI funding. Nonetheless, it is unclear whether ACS funding is impacted by NCI funding, or whether the same factors that impact ACS funding also impact NCI funding.

Citations
"ACS Cancer Type Funding." Message to the author. N.d. E-mail.
"Comprehensive Cancer Information." National Cancer Institute. Web. 22 Dec. 2015. <http://www.cancer.gov/>.
"Google Trends." Phonofile. Web. 22 Dec. 2015. <http://phonofile.com/tools/online-tools/google-trends/>.
"Government Funding for Cancer Research." LoveToKnow. Web. 22 Dec. 2015. <http://charity.lovetoknow.com/Government_Funding_for_Cancer_Research>.
"HHS.gov." HHS.gov. Web. 22 Dec. 2015. <http://www.hhs.gov/>.
"How Much Money Is Spent on Cancer Research per Year?" Quora. Web. 22 Dec. 2015. <https://www.quora.com/How-much-money-is-spent-on-Cancer-research-per-year>.
"How Much Money Is Spent on Cancer Research." Nanomedicine. Web. 18 Dec. 2015. <http://www.nanomedicinecenter.com/article/how-much-money-is-spent-on-cancer-research/>.
"International Association for the Study of Lung Cancer." Department of Defense Lung Cancer Research Program Funding Opportunities for Fiscal Year 2015. Web. 22 Dec. 2015. <https://www.iaslc.org/research-education/funding-announcements/department-defense-lung-cancer-research-program-funding>.
"Nearly 1,000 Cancer Drugs in Development in USA." Pharma Times. Web. 22 Dec. 2015. <http://www.pharmatimes.com/article/12-06-01/Nearly_1_000_cancer_drugs_in_development_in_USA.aspx>.
"Research Dollars - Breast Cancer Consortium." Breast Cancer Consortium. Web. 22 Dec. 2015. <http://breastcancerconsortium.net/resources/beyond-awareness-workbook/background/research-dollars/>.
"Surveillance, Epidemiology, and End Results Program." Web. 22 Dec. 2015. <http://seer.cancer.gov/>.
"Vision." Breast Cancer Research Program, Congressionally Directed Medical Research Programs. Web. 22 Dec. 2015. <http://cdmrp.army.mil/bcrp/>.
American Cancer Society. Web. 22 Dec. 2015. <http://www.cancer.org/>.
Contributor, Quora. "Where Do the Millions of Cancer Research Dollars Go Every Year?" Slate. Web. 22 Dec. 2015. <http://www.slate.com/blogs/quora/2013/02/07/where_do_the_millions_of_cancer_research_dollars_go_every_year.html>.
Lee, Bruce Y. "How the $2 Billion NIH Budget Increase Benefits You." Forbes. Forbes Magazine. Web. 22 Dec. 2015. <http://www.forbes.com/sites/brucelee/2015/12/21/how-the-2-billion-nih-budget-increase-benefits-you/>.
McGeary, Michael, and Michael Burstein. "Sources of Cancer Research Funding in the United States." JNCI Journal of the National Cancer Institute 91.14 (1999). Web. <https://iom.nationalacademies.org/~/media/Files/Activity%20Files/Disease/NCPF/Fund.pdf>.
National Institutes of Health. U.S. National Library of Medicine. Web. 22 Dec. 2015. <http://www.nih.gov/>.
Read, Zoe. "Our Disproportionate Focus on Adult Over Pediatric Cancer Research." The Atlantic. Atlantic Media Company, 02 Jan. 2013. Web. 22 Dec. 2015. <http://www.theatlantic.com/health/archive/2013/01/our-disproportionate-focus-on-adult-over-pediatric-cancer-research/266684/>.

Appendices

Models with NCI as dependent variable: Model 1, Model 2, Model 3, Model 4, Model 5, Model 6, Model 7, Model 8, and ANOVA of Model 8.

Models with ACS as dependent variable: Model 9, Model 10, Model 11, Model 12, Model 13, Model 14, Model 15, ANOVA of Model 15, Model 16, and Model 17.

[Regression output tables for the appendix models are not reproduced here.]


Federal Reservations
Macroeconomic Cycles and the Stock Market's Reaction to Monetary Policy from 2000-2015
Gelila Bekele
Columbia University

We are very excited to include Gelila Bekele's work because of its elegant synthesis of economic insight, mathematical rigor, and simple intuition in presenting an oftentimes complex and opaque subject. Here, Bekele analyzes the impact of federal monetary policy on stock and asset prices during recessions. By following the evolution of the Chicago Fed National Activity Index and different industry portfolios, she demonstrates how tight credit market conditions during recessions can amplify and accelerate the effects of monetary policy. She notes that the 2008 financial crisis serves as an important exception to this trend, when drastic use of Federal open market operations might have signaled financial turmoil to market participants. Whereas many authors explore federal monetary policy from an almost technocratic and inaccessible perspective, simply due to the complex nature of the subject, we found that Bekele approaches the topic with clarity and concision. We hope that this piece inspires further research on monetary economics and market behavior under recession-like conditions. -J.A.B.

Introduction
Monetary policy decisions have immediate and direct impacts on the financial markets. By changing economic variables such as the federal funds target rate or the discount rate, the Federal Reserve aims to modify economic activity and consumption in order to achieve its ultimate objectives of maximum employment, stable prices, and moderate long-term interest rates. Since monetary policy has a direct and significant impact on economic activity, it is crucial for investors and policymakers alike to understand how asset prices react to monetary policy.

The stock market is an important channel through which the Federal Reserve's monetary policy influences real economic activity. When interest rates increase, firms' cost of borrowing for investment rises, resulting in a slowdown of economic activity. This increased cost of borrowing also raises a firm's cost of capital, thereby depressing the stock's valuation and price. A rise in interest rates would also promote future over current consumption.

Past studies have examined the stock market's reaction to economic news in different economic states. Basistha et al. (2008) argue that there is significant cyclical variation in the impact of monetary policy on stock prices. They find that the response of stock returns to monetary shocks is more than twice as large in recessions and tight credit conditions as in good economic times. Previous studies of the US stock market have examined its reaction to monetary policy before the financial crisis. This paper analyzes the impact of federal funds rate surprises on stock returns from 2000 to 2015, including the period of the 2008 financial crisis. It studies the average reaction of the stock market in the aggregate S&P 500 index, at the level of Fama and French industry portfolios, and in a panel of stocks.

We find that prior to the crisis, stock prices increased in response to unexpected federal funds rate cuts. This paper argues that stocks posted larger increases when interest rate changes coincided with recessions and tightening credit market conditions. However, the 2008 financial crisis is an exception to this trend: stock market participants did not react positively to unexpected FFR cuts. This result is in line with the findings of Kontonikas et al. (2012), who emphasize that the deteriorating macro-financial conditions of the financial crisis produced contradictory reactions of stock market prices to federal funds rate changes. Their research shows the severity of the recent financial turmoil and questions the effectiveness of monetary policy when the federal funds rate reaches the zero lower bound.

Background and Related Literature

Federal Reserve

Setting the federal funds target rate (FFR) is the most important tool the Federal Reserve has for accomplishing its objectives of stable prices and moderate long-term interest rates. The FFR refers to the interest rate at which depository institutions lend funds held at the Federal Reserve to other banks overnight. Changes in the FFR have a significant impact on other interest rates in the financial system and affect real economic activity.

The Federal Open Market Committee (FOMC), the main policymaking body of the Federal Reserve, convenes eight times a year, with six to eight weeks between meetings, to assess current market conditions. Although the FOMC does not directly set actual interest rates, it establishes a target interest rate and performs open market operations, such as purchasing and selling U.S. Treasury and federal agency securities, in order to achieve that target. If the FOMC is concerned about low economic growth, the Fed will reduce the FFR, which makes borrowing less expensive and thus generates more funds for banks to lend to firms. This stimulates economic activity and impedes the prospects of a recession. On the other hand, to reduce inflation, the Fed would increase the federal funds target rate, making borrowing more expensive and cooling down economic activity.

State dependence of stock price reaction to monetary policy

Past studies predict that tight credit market conditions accelerate the effect of monetary shocks on the economy. During tight credit market conditions, there is a significant decrease in the supply of bank credit, reducing the economic activity of firms. The lack of credit weakens firms’ balance sheets, which in turn deteriorates their creditworthiness, making it more difficult to issue bonds on the market. During such adverse market conditions, a surprise monetary easing would loosen restrictions on the availability of credit and result in a large increase in the level of economic activity. It is historically observed that a similar easing implemented during a period of growing market conditions would have limited effects on the level of economic activity; this is the state dependence we aim to test in this study.

Federal Funds Futures

There are two important concerns that make it challenging to identify the reaction of stocks to monetary news. First, equity prices tend to incorporate the market’s anticipation of policy changes and macroeconomic conditions. News and newly published research about the economic outlook have an impact on both



short-term interest rates and asset prices. Theoretically, stocks are claims to shares of a company, and hence the valuation of stocks should be independent of monetary policy and should depend only on the returns of the company in the long run. However, in the short run, economic announcements affect daily share prices if the new information revealed by the announcement affects either the expectation of future dividends or discount rates. Therefore, the price of a stock (Pt) today is the expected value of the future stream of dividends (Dt+r) discounted to the present using the prevailing market rate (rt), conditional on information available at the time (Ωt):

Pt = E[ Σ Dt+r / (1 + rt)^r | Ωt ]   (1)

Hence, when the Fed changes the rate, stock returns may not react much if the market had anticipated the change before the announcement. Therefore, we must measure how the stock market reacts to unanticipated “surprise” changes that have not already been priced into the stocks.
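Announcement-day returns of this kind are typically computed as log differences of closing prices; the paper later adopts Rt = 100 × (ln St − ln St−1), following Kontonikas et al. (2012). A minimal sketch, with a function name of our choosing:

```python
import math

def daily_return(close_today, close_prev):
    """Log return in percent: R_t = 100 * (ln S_t - ln S_{t-1})."""
    return 100.0 * (math.log(close_today) - math.log(close_prev))

# A close of 2040 after a prior close of 2000 is roughly a 1.98% return
print(round(daily_return(2040.0, 2000.0), 2))  # 1.98
```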

The second concern that makes it difficult to identify the reaction of stocks to federal funds rate changes is that short-term interest rates may be concurrently affected by movements in asset prices, which causes endogeneity problems. Therefore, in order to discern the effect of announcements, it is important to distinguish unexpected monetary policy changes from the expected actions already captured by market prices. A methodology proposed by Kuttner (2001) is an event study approach that uses federal funds futures contract data to construct a measure of monetary policy shocks, or “surprises.” The 30-Day Federal Funds Futures contracts are traded on the Chicago Board of Trade and provide real-time information about investors’ expectations of future interest rates. The contract is based on the monthly average of the effective federal funds rate, which is a close approximation of the average target rate. This makes the FFR futures contract a good gauge of the surprise change in the target federal funds rate.

It is important to note that monetary policy does not move stock prices only when the Fed surprises the markets. Asset prices also react to revisions in expectations about future policy, which may be caused by news about changing economic conditions. However, the focus of this research is on unexpected policy actions, because they allow us to clearly discern the stock market’s reaction to monetary policy.

Sample selection and key variables

Sample selection

Percentage changes in the closing prices of the S&P 500 Index between the day of an FOMC meeting and one business day prior to the meeting are used to estimate the response of stock prices to unexpected FFR surprises. Although there are over 5,700 days in our sample period, we estimate how the S&P 500 index responds to news only for the 199 days on which a federal funds rate announcement was made. We use an event study approach, examining a sample period that extends over 15 years, from 2000 to 2015, and includes 199 announcements made by the FOMC regarding the federal funds target rate.

Figure 1: The S&P 500 daily return and effective federal funds rate from 2000-2015

From 2000 to 2015, there have



Figure 2: The CFNAI 3-Month Moving Average from 2000 to 2015

been 37 changes in the federal funds rate, 20 of which were contractionary (change in interest rate > 0) and 17 expansionary. The average interest rate change (as measured from the federal funds futures rate) was -0.047%, ranging from a minimum of -0.75% to a maximum of 0.5%.

The Financial Crisis of 2008

The sample period for this research includes the financial crisis of 2008. From June 30, 2004 until June 29, 2006, the FOMC raised the target federal funds rate by 0.25% at each of 17 meetings, until it reached 5.25%. However, during the summer of 2007, the fall in housing prices and worsening conditions in financial markets, caused by difficulties in refinancing subprime mortgages, created significant unrest in the financial markets, marking the beginning of the financial recession. The start of the financial crisis is dated at September 2007, when the bank run at Northern Rock occurred and the FOMC reduced the FFR by 0.5% for the first time since 2003. This was followed by a series of additional cuts from 2007 until 2008, which reduced the FFR to 4%. The S&P 500 had declined by more than 50% since September 2007, as seen in Figure 1. The financial crisis began to subside in March 2009, when the Fed expanded its quantitative easing policy. During this time, the Fed issued press release statements signaling that the FFR would be kept at the zero bound for a significant period of time, guaranteeing the market a long period of low interest rates. As such, in this study, we date the start of the financial crisis at September 2007 and the end of its most severe phase at March 2009. In the aftermath of the Lehman Brothers collapse in the fall of 2008, stock market investors saw falling stock prices together with sharp cuts in interest rates. Seeing that interest rate cuts by the FOMC had no positive impact on stock prices during the financial crisis may seem surprising, since before 2007 there was a strong, statistically significant negative relationship between interest rates and equity market performance. This inverse relationship is documented in studies by Kuttner (2001), Bernanke and Kuttner (2005), and Basistha and Kurov (2008). However, between 2007 and 2009, the market faced an unprecedented deterioration in financial assets, and we find that during this period the effect of FFR cuts on stock performance was positive and statistically insignificant. This suggests that the monetary policy strategy of changing federal funds rates to alter economic activity may not have worked as intended during the financial crisis, which further emphasizes its severity.

Measuring the surprise element of the federal funds target rate

The studies conducted by Kuttner (2001) and by Bernanke and Kuttner (2005) utilize data from FFR futures contracts to derive the unexpected component of the FFR change. The settlement price of the federal funds futures contract is based on the average federal funds rate during the contract’s month. The surprise component of the target rate is computed using the change in the rate of the federal funds futures on the day of the Fed policy decision and is described in equation 2.

Figure 3: S&P 500 Returns and BofA Merrill Lynch US High Yield option-adjusted spread (2000-2015)





“∆iut” is the unexpected federal funds target rate change:

∆iut = [D / (D − d)] × (ft − ft−1)   (2)

“ft” is the federal funds rate implied by the settlement price of the current-month federal funds futures contract on the day of the FOMC meeting, and “ft−1” is the rate implied one business day prior to the meeting; both represent 100 minus the contract price. “D” is the number of days in the month and “d” is the day of the month of the Fed policy decision. Since the contract’s settlement price is based on the monthly average federal funds rate, the change in the futures rate is scaled up by the factor D/(D − d), which reflects the number of days in the month affected by the change.

Business cycle measures

In order to examine our hypothesis about the performance of stocks in different economic conditions, we use a proxy for the economic state. The Chicago Fed National Activity Index (CFNAI) utilizes 85 economic indicators to map the business cycle. According to the Chicago Fed (2000), a drop of the 3-month moving average of the CFNAI below -0.7 indicates a significant probability that a recession has begun, while an increase above 0.2 indicates a significant probability that a recession has ended. The 3-month moving average of the CFNAI dropped below -0.7 in January 2001 and then crossed 0.2 in September 2003; it dropped below -0.7 again in March 2008 and crossed 0.2 in March 2010. We therefore set the CFNAI recession dummy, Dtstate, equal to one during the periods between January 2001 and September 2003 and between March 2008 and March 2010.

Measures of Credit Market Conditions

In addition to using the CFNAI business cycle measure as a proxy for the economic state, we use a measure of aggregate credit market conditions to test the hypothesis of state dependence in the stock market’s reaction to monetary news. The spread between high-yield bonds and AAA-rated bonds is used as a proxy for credit market conditions. This data is obtained from the St. Louis Fed website, in the dataset labeled BofA Merrill Lynch US High Yield option-adjusted spread. The BofA Merrill Lynch US High Yield Master II Index tracks the performance of US dollar-denominated, below-investment-grade corporate debt publicly issued in the US domestic market. We set a dummy variable, Dtstate, equal to one when the spread exceeds its full-sample historical average, which signifies high credit risk periods, and zero otherwise. Figure 3 graphs the S&P 500 return against the spread. When the S&P 500 index declines during the financial crisis, the spread increases, signifying that the recession was marked by tight credit constraints.

Empirical Results

Baseline Results

In order to gain a thorough understanding of the panel data regressions with individual stocks, Basistha and Kurov (2008) begin their empirical analysis with aggregate-level regressions on the S&P 500 index. In line with this research, we begin the empirical study with regressions on the S&P 500 index, follow with regressions on industry-level indices, and finally proceed to panel data regressions on an array of stocks. A scatter plot of the federal funds surprises and daily returns on the S&P 500 index is shown in Figure 4.

Figure 4: Scatterplot of daily stock returns and federal funds target rate surprises, 2000-2015 (x-axis: funds rate surprises, %; y-axis: S&P 500 index return, %)

We estimate the following regression of the S&P 500 index return on the unexpected component of the change in the federal funds target rate:

Rt = α + β∆iut + εt
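The futures-implied surprise in equation 2 is straightforward to compute. A sketch (variable names are ours), assuming futures rates are quoted in percent:

```python
def ffr_surprise(f_t, f_prev, days_in_month, decision_day):
    """Kuttner (2001) surprise: the one-day change in the current-month
    fed funds futures rate, scaled by D/(D - d) because the contract
    settles on the monthly average of the effective funds rate."""
    D, d = days_in_month, decision_day
    return (D / (D - d)) * (f_t - f_prev)

# Futures rate falls from 5.25% to 5.10% on day 10 of a 30-day month:
print(round(ffr_surprise(5.10, 5.25, 30, 10), 3))  # -0.225
```

Near month-end (d close to D) the scale factor blows up; Kuttner handles those days with the unscaled change in the next-month contract, a refinement omitted in this sketch.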




Table 3: Response of daily stock returns to target rate changes (baseline regressions)

                     Reg. 1        Reg. 2        Reg. 3        Reg. 4
                     2000-2015     2000-2007     2007-2009     2009-2015
Unexpected change    0.199         -4.686*       0.301***      0.425***
                     (0.119)       (1.957)       (0.0609)      (0.0949)
Constant             0.257*        0.0784        0.380**       0.974*
                     (0.0998)      (0.111)       (0.141)       (0.386)
R-squared            0.002         0.006         0.018         0.090
F                    2.808         24.48         20.09         5.734
Observations         199           127           38            72

Standard errors in parentheses; * p<0.05, ** p<0.01, *** p<0.001

Table 4: Response of daily stock returns with variable for the financial crisis period

Kontonikas et al. (2012) define the return, Rt, as the first difference of the natural log of the S&P 500 index (St) at the close of the day of the FOMC meeting and that of the business day prior to the meeting:

Rt = 100 × (ln St − ln St−1)

This equation is estimated using OLS with the White heteroskedasticity-consistent covariance matrix, which maintains robustness in the presence of a large number of outliers. The OLS estimation results with heteroskedasticity-consistent standard errors are presented in Table 3. The full sample results from 2000 to 2015 indicate that the stock market reacts in tandem with the federal funds rate, which contradicts the reaction of the stock markets observed historically. The full-sample results are inconsistent with the regressions in previous literature by Bernanke and Kuttner (2005) and Basistha and Kurov (2008), who found the coefficient on the target rate surprise to be negative. However, more recent research by Kontonikas et al. (2012), for a sample period between 1989 and 2009 (including the financial crisis), also finds that the surprise component of FFR changes was statistically insignificant. By utilizing sample periods that exclude the financial crisis, Kontonikas et al. (2012) obtain statistically significant estimates of the effect of surprise federal funds target rate changes on returns. Similarly, in our research, the coefficient for the sub-sample before the financial crisis (2000-2007) is negative and statistically significant at the 5% level. This value is consistent with past studies and signifies that an unexpected federal funds rate hike of 1% would result

in a 4.68% decline in S&P 500 returns. The coefficients on the monetary policy surprise for the sub-samples during the financial crisis (2007-2009) and after the financial crisis (2009-2015) are statistically significant and positive, suggesting a positive correlation between the federal funds rate and stock returns. This is contrary to the purpose of the monetary policy strategy, as an increase in interest rates is intended to diminish stock market performance. Hence, it appears that including the financial crisis period in the sample leads to results that are inconsistent with the intentions of monetary policy.

Structural change during the financial crisis

From the first baseline regression, we observed that during the financial crisis, monetary policy did not bring about the expected reactions in the stock market. Kontonikas et al. formally examine whether the contradictory coefficients on the surprise changes seen in the full-sample regressions can be explained by structural instability in the stock market responses. To do so, we interact the federal funds futures surprises with a dummy variable that captures changes in the relationship between stock returns and FFR shocks during the financial crisis (Dtcrisis):

Rt = α + [β1 Dtcrisis + β2 (1 − Dtcrisis)] ∆iut + εt

Dtcrisis is a dummy variable equal to one during the financial crisis, between September 2007 and March 2009. In Table 4, the stock market response to unexpected FFR changes during the crisis period, as indicated by β1, is positive. Consistent with the findings of Kontonikas et al., we find that accounting for the financial crisis period in the impact of FFR surprises raises the R2 from 0.2% in the full-sample regression of Table 3 to 1.92% in Table 4. This result indicates that, since September 2007, unexpected FFR cuts were perceived as bad news by stock market investors. During the period excluding the crisis, however, β2 is negative, implying that an unexpected 1% cut in the interest rate would be associated with a 4.45% increase in the S&P 500 index.

Table 5: Regressions with Macroeconomic State and control for Financial Crisis Period

Kontonikas et al. suggest one possible reason for the result found in the financial crisis period. Historically low interest



rates may be seen as a sign of desperation on the part of central bankers and a signal that future profits will be lower, thereby signaling bad news for equities. This may be a possible reason for stocks to decrease when interest rates decrease.
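The state dummies and the crisis interaction terms used in these regressions can be sketched as follows (function names and toy inputs are ours):

```python
def cfnai_recession_dummy(cfnai_ma3):
    """CFNAI state dummy: switches on when the 3-month moving average
    drops below -0.7 (recession likely begun) and off when it rises
    above 0.2 (recession likely ended), per the Chicago Fed rule."""
    state, out = 0, []
    for x in cfnai_ma3:
        if state == 0 and x < -0.7:
            state = 1
        elif state == 1 and x > 0.2:
            state = 0
        out.append(state)
    return out

def credit_state_dummy(spread):
    """Credit-state dummy: 1 when the high-yield spread exceeds its
    full-sample historical average (tight credit), else 0."""
    avg = sum(spread) / len(spread)
    return [1 if s > avg else 0 for s in spread]

def crisis_interactions(surprises, crisis):
    """Interacted regressors for
    R_t = a + b1*(D_crisis * surprise) + b2*((1 - D_crisis) * surprise) + e,
    so b1 and b2 estimate the crisis and non-crisis responses separately."""
    x1 = [d * s for d, s in zip(crisis, surprises)]
    x2 = [(1 - d) * s for d, s in zip(crisis, surprises)]
    return x1, x2

print(cfnai_recession_dummy([0.1, -0.8, -0.3, 0.25]))  # [0, 1, 1, 0]
print(credit_state_dummy([3.0, 4.0, 9.0, 5.0]))        # [0, 0, 1, 0]
```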

State dependence in the stock market response

After having examined the S&P 500 returns in different sample periods, we now examine the economic state dependence of stock market performance. FFR changes during recessions should have more pronounced effects on the stock market than during periods of economic growth. The following regression is estimated:

Rt = α + [β1 (1 − Dtstate)(1 − Dtcrisis) + β2 Dtstate (1 − Dtcrisis) + β3 Dtcrisis] ∆iut + εt

The Dtstate recession dummy is used initially as a proxy for the CFNAI business cycle measure. This dummy variable is set equal to one during the periods between January 2001 and September 2003 and between March 2008 and March 2010, which are periods of economic recession. In a second regression, Dtstate serves as a proxy for the credit spread: it equals one during periods of tight credit, when the spread exceeds its full-sample historical average (signifying high credit risk), and zero otherwise. The regression with Dtstate as a proxy for the CFNAI is shown in Table 5, column 1, and with Dtstate as a proxy for the high-yield spread in column 2. In addition, Dtcrisis equals one between September 2007 and March 2009.

The results in Table 5 reveal that during periods excluding the financial crisis (crisis = 0), the stock market exhibits economic state dependence. The response to FFR surprises during recessions, β2 (Macrostate = 1 and Crisis = 0), is greater than the stock market response in “good” economic times, β1 (Macrostate = 0 and Crisis = 0). During periods of economic growth, the stock markets reacted by 1.668% to a 100 basis point cut in the FFR, while in recessions, the stock

Table 6: Response of six Fama and French industry portfolios to unexpected FFR changes using the CFNAI as a proxy for economic state

Table 7: Response of six Fama and French industry portfolios to unexpected FFR changes using the high-yield-to-Treasury-bond credit spread as a proxy for economic state

               (1) food          (2) oil            (3) trans           (4) finan          (5) machn          (6) cnsum
cred0crisis0   -1.471 (1.311)    -0.652 (3.290)     -1.013 (1.882)      -1.375 (2.084)     -1.609 (2.971)     -1.972 (1.495)
cred1crisis0   -1.797 (3.229)    -4.351 (3.625)     -12.49*** (3.212)   -8.951 (4.998)     -15.34** (5.031)   -8.629*** (1.964)
cred1crisis1   0.157** (0.0496)  -0.431*** (0.0444) -0.176*** (0.0468)  0.580*** (0.0856)  0.535*** (0.0310)  0.253*** (0.0358)
Constant       0.0783 (0.0786)   0.361** (0.124)    0.309** (0.108)     0.396* (0.156)     0.321* (0.123)     0.209** (0.0759)
R-squared      0.009             0.010              0.039               0.017              0.044              0.063
F              17.04             31.97              9.496               16.44              7.027              110.2
Observations   195               195                195                 195                195                195

Standard errors in parentheses; * p<0.05, ** p<0.01, *** p<0.001


returns surged to a 10.12% increase for the same 100 basis point cut. The finding of state dependence is confirmed by the Wald test of H0: β1 = β2. The strongly negative β2 estimates reinforce the idea that, prior to the crisis period, expansionary interest rate surprises had more intense effects during difficult economic times than during periods of economic growth. Kontonikas et al. write that an important structural shift occurred during 2007-2009 concerning the impact of FFR shocks during bad times. The Wald test shows that the null hypothesis β2 = β3 is strongly rejected. Therefore, the reaction of stocks during periods of economic recession (as measured by the CFNAI and the credit spread) before the financial crisis differed from that during the crisis period. Before the financial crisis, there were periods of recession and low economic activity, but none compared in severity to the 2008 crisis. Hence, the hypothesis that stocks react much more intensely during recessions holds only for the period before the financial crisis.

Regression with the Fama and French Industry Portfolios

After regressing the S&P 500 index while controlling for the financial crisis period and examining the economic state dependence of stock returns, we now use the Fama and French industry portfolios to examine differences in stock reactions across sectors. Bernanke and Kuttner (2005) conducted similar industry-level regressions to examine how different sectors react to unexpected federal funds rate changes, using a model that incorporates the CAPM. We use regressions similar to those in Section 4.3 for the industry portfolios. Table 6 reports estimates for six Fama and French industry portfolios constructed from CRSP returns.

In Table 6, before the financial crisis (crisis dummy equal to 0) and during periods marked as growth (CFNAI recession dummy equal to 0), all industries have negative but relatively small coefficients, meaning an interest rate cut would increase the return of the industry portfolios. However, during periods marked as recessions, the effect of the funds rate on stock returns is significantly larger than during periods of economic growth. The transportation and machinery industries have the largest coefficients: the transportation portfolio changes by -1.127 during good economic times and by -12.68 during a recession for a 1% increase in the federal funds rate, while the machinery portfolio reacts by -2.169 during good economic times and by -14.73 during recessions. From Table 6, we observe that the most responsive industries are machinery, transportation, and financials. On the other end of the spectrum, oil and food are not as responsive to unexpected federal funds rate changes. During good economic times, the food portfolio reacts by -0.967 to a 1% increase in the federal funds rate and by -2.812 during a recession. The reaction of the food industry in good and bad economic states is not as intense as that of the transportation or machinery portfolios. This may be because oil and food are necessities, and demand for these goods is highly inelastic. The low R2 values indicate that very little of the industries’ variance is associated with unexpected federal funds rate changes. However, the precision of the coefficients is not sufficient to reject the hypothesis of an equal reaction across all six industries. Table 7 reports the estimates of the six industry portfolios using the credit spread as the measure of economic state. Hence, in Table 7, cred0crisis0 denotes a period excluding the financial crisis (crisis is 0) with a low credit spread (cred is 0). The two measures of economic state yield very similar results for the different industry portfolios.
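The Wald test of H0: β1 = β2 reported above has a simple one-restriction form. A sketch with hypothetical standard errors (the coefficient covariance is set to zero purely for illustration):

```python
def wald_stat(b1, b2, se1, se2, cov12=0.0):
    """Wald statistic for H0: b1 = b2, distributed chi-square with
    one degree of freedom under the null."""
    var_diff = se1 ** 2 + se2 ** 2 - 2.0 * cov12
    return (b1 - b2) ** 2 / var_diff

# Responses of -1.668 and -10.12 from the text, with made-up standard errors:
print(round(wald_stat(-1.668, -10.12, 0.9, 3.1), 2))  # 6.86
```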

In both Tables 6 and 7, the crisis-period variables (cfnai1crisis1 and cred1crisis1), which focus on the financial crisis period, show that all coefficients have absolute values less than 1. Most of the values are positive, while the coefficients for the oil and transportation industry portfolios are negative. We were not able to make realistic inferences about the financial crisis period for the S&P 500 index; similarly, with the Fama and French industry portfolios, the changes in stock returns during the crisis are significantly smaller than those in the periods before the financial crisis. These results serve as further evidence of the severity of the financial crisis period.

Regression with panel data

Regressions on the S&P 500 index and the Fama and French industry portfolios exhibited economic state dependence during the sample periods before the financial crisis. We have also observed that during the financial crisis period, the responses of stock returns to interest rate hikes were inconsistent with the monetary policy strategy. The intensity of the returns also differed from one sector to another, with capital-intensive sectors reacting significantly more than industries such as food and oil. Having laid the basis for understanding the factors involved in the regressions, we proceed to panel data regressions on a panel of 25 firms. The returns of these firms were obtained from the Bloomberg database, with 199 observations per company from 2000 to 2015. In line with Basistha and Kurov’s (2008) panel data study, we use pooled OLS regressions, with robust standard errors clustered by company. The firm-specific characteristics added to this regression are the industry sector of each firm and a dummy variable measuring the individual firm’s credit constraint.
The regression used for Table 8 is modeled in equation 6:

Rit = α + β1∆iut + β2∆iut Xit + (Σ βsectors Dsectors) ∆iut + εi,t   (6)

Rit = daily return of stock i
∆iut = unexpected change in the federal funds rate
Dsectors = dummy variables for industrial sectors
Xit = firm-specific dummy variable for credit-constrained firms

The variable Xit takes the value of one if a firm is credit constrained based on Bloomberg financial data from the previous year. It is important to note that the firms included in the Dow Jones index are relatively large and financially lucrative, so in this context being financially constrained is a relative rather than an absolute concept: the firms classified as financially constrained are simply more constrained than the other firms in the sample. Sectoral heterogeneity is modeled using 7 sectoral dummies. The base sector is the industrial sector, and its response to monetary news is given by β1 in the absence of any other factors. The coefficient β2 gives the additional response of a financially constrained firm to unexpected federal funds rate changes. In equation 6, we have not yet allowed for macro-cycle effects in the response to monetary news, and we have also omitted the dummy variable separating out the effects of the financial crisis period.

Table 8: Panel data regression without accounting for economic state and the financial crisis (dependent variable: daily return)

Unexpected change (changeinun~d)                         -0.169 (0.230)
Unexpected change x credit constraint (chngXDcred~g)     0.0274 (0.347)
x Consumer Staples (chngXcnsst~s)                        -0.00121 (0.260)
x Consumer Discretionary (chngXcnsdisc)                  0.604* (0.288)
x Energy (chngXenergy)                                   0.0352 (0.268)
x Health Care (chngXhealt~e)                             0.937*** (0.238)
x Info Tech (chngXinfot~h)                               0.499 (0.352)
x Financials (chngXfinan~s)                              0.277 (0.612)
x Materials (chngXmater~s)                               -0.412 (0.230)
x Telecom (chngXtelecom)                                 0.879* (0.417)
Constant (_cons)                                         0.254*** (0.0197)
N                                                        4975

Standard errors in parentheses; * p<0.05, ** p<0.01, *** p<0.001

Table 9: Panel data regression after accounting for the financial crisis period and the economic state

In Table 8, we observe that the manufacturing sector (the base regression) declines by only 0.169% for an unexpected 100 basis point increase in the federal funds rate. In addition, health care reacted with an increase of 0.768 (-0.169 + 0.937) to a 1% increase in the FFR. A positive correlation between stock market returns and interest rates is in line with neither what we have observed historically nor the monetary policy strategy. From the baseline regressions, we have observed that regressions on the S&P 500 index and the Fama and French industry portfolios yield revealing results when the financial crisis and the economic state are accounted for. Therefore, Table 9 presents three panel data regressions, all including variables for the economic state and the financial crisis period. The first column of Table 9 uses the CFNAI business cycle as a proxy for economic state, while the second column uses the high-yield spread to measure the economic state. Hence the first three variables titled “Macrostate” represent


the CFNAI measures for the first regression and the credit spread for the second regression. The following equation captures the regressions presented in Table 9:

Rit = α + β1∆iut (1 − St)(1 − Ft) + β2∆iut St (1 − Ft) + β3∆iut Ft + β4∆iut Xit + (Σ βsectors Dsectors) ∆iut + εi,t   (9)

Rit = daily return of stock i
∆iut = unexpected change in the federal funds rate
Dsectors = dummy variables for industrial sectors
St = CFNAI business cycle measure dummy variable
Xit = firm-specific dummy variable for credit-constrained firms
Ft = dummy variable for the financial crisis period

In Table 9, the state dependence of stocks that we observed with the indices in the previous regressions emerges. The results found


using the pooled OLS and the random effects models for the panel data regressions are very similar. For the “Macrostate=0 x crisis=0” variable listed in the first row, which captures non-recession periods before the financial crisis, we observe a negative but modest return of -1.169 in response to an increase in the FFR. In the next row, marked “Macrostate=1 x crisis=0”, we observe that during recessions before the crisis, stocks reacted much more intensely, by -10.56, to the same 1% increase in interest rates. Ehrmann and Fratzscher show that the reaction of stock returns to monetary news is sensitive to the financial credit characteristics of firms. However, the estimates of the interaction between monetary news and the firm-specific financial constraint dummy in Table 9 show different evidence of such sensitivity. One reason for this difference is that we additionally allow for sectoral heterogeneity and the macro cycle in the regression equation. The credit constraint variable’s low and statistically insignificant value of 0.027 can be explained by the fact that fewer than 15% of the 25 firms were marked as credit constrained. Therefore, the credit constraint variable may not reveal insightful information, given how few firms in the regression have this characteristic.
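The interaction structure of equation (9) can be illustrated with a small simulation. This is a sketch on synthetic data, not the paper's dataset; the coefficient values, sample size, and distributions below are illustrative assumptions.

```python
import numpy as np

# Sketch of the state/crisis interaction structure in equation (9),
# fit on synthetic data. All numbers are illustrative assumptions,
# not estimates from the paper's data.
rng = np.random.default_rng(0)
n = 5000
di = rng.normal(0.0, 0.1, n)       # unexpected FFR change, Delta i_u
S = rng.integers(0, 2, n)          # recession-state dummy (CFNAI-based)
F = rng.integers(0, 2, n)          # financial-crisis-period dummy
X = rng.integers(0, 2, n)          # firm-level credit-constraint dummy

beta_true = np.array([-1.2, -10.6, 5.0, 0.03])   # illustrative values
Z = np.column_stack([
    di * (1 - S) * (1 - F),   # beta1: non-recession, outside the crisis
    di * S * (1 - F),         # beta2: recession, outside the crisis
    di * F,                   # beta3: crisis period
    di * X,                   # beta4: credit-constrained interaction
])
R = 0.5 + Z @ beta_true + rng.normal(0.0, 0.2, n)   # daily returns

# OLS with an intercept recovers the interaction coefficients
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, R, rcond=None)
print(coef[1:])   # close to beta_true
```

Because the dummies partition the sample, each β is simply the response of returns to the rate surprise within that state, which is how the "Macrostate x crisis" rows of Table 9 are read.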

The sectoral dummies exhibit a fair amount of heterogeneity across industries. During recessions, the materials sector declines by 11.0885% (-10.56% - 0.532%), the largest decline in performance. The consumer staples and energy sectors follow closely behind the industrials sector. The sectors that react the least are telecom and healthcare. These results are consistent with the findings of Ehrmann and Fratzscher (2004) on sectoral heterogeneity. The stronger response of cyclical

and capital-intensive industries can be explained by the sensitivity of the demand for their products to interest rate fluctuations. Sectors such as telecom and healthcare may react less intensely because the services offered by these industries are needed during bad and good times alike, and hence face relatively inelastic demand. For example, healthcare returns may react little because, unlike industrial goods, medical services are needed in any time period.

Summary and Conclusion

This study examines how stock returns react to federal funds rate changes in different economic states and across varying sectors. Initially, we analyzed the impact of unexpected federal funds rate changes on the returns of the S&P500 stock market index in different time periods. In the second regression, we included a dummy variable, Dcrisis, to separately examine stock reactions during the financial crisis and the stock market reactions before and after the financial crisis (excluding the crisis period). In the third set of regressions, we added a dummy variable (Dstate) that served as a proxy for the economic state, using the business cycle as measured by the CFNAI and the aggregate credit spread as measured by the BofA Merrill Lynch US High Yield spread. The regression results show that in the sample data before the financial crisis, there was a strong negative correlation between unexpected federal funds rate surprises and the stock market reaction. We have also found that this reaction depends on the economic state of the market: during recessions, the stock market reacts significantly more to federal funds rate surprises than during periods of economic stability.
Although firms react intensely to monetary policy news during recessions, they display the exact opposite of their expected reaction when the recession is as disruptive as the 2008 financial crisis. This may be because, during a severe financial crisis, interest rate cuts are viewed by market participants as a signal of market turmoil and merely an attempt by the Federal Reserve to ameliorate the crisis. Hence, from the regression results, it can be concluded that interest rate changes will be most effective during periods of minor recessions. In addition, firms in different industries react with different intensities to unexpected federal funds rate changes. Transportation and machinery show the strongest reactions to interest rate changes during recessions. On the other hand, sectors such as healthcare and food react very little to monetary policy in both good and bad economic periods. Hence, how firms react to unexpected federal funds rate changes depends on the state of the economy and the industry sector in which they are grouped.

References

Basistha, A., Kurov, A., 2008. Macroeconomic cycles and the stock market’s reaction to monetary policy. Journal of Banking and Finance 32(12), 2606-2616.
Bernanke, B.S., Gertler, M., Gilchrist, S., 1996. The financial accelerator and the flight to quality. Review of Economics and Statistics 78, 1-15.
Bernanke, B.S., Kuttner, K.N., 2005. What explains the stock market’s reaction to Federal Reserve policy? Journal of Finance 60, 1221-1257.
Ehrmann, M., Fratzscher, M., 2004. Taking stock: Monetary policy transmission to equity markets. Journal of Money, Credit, and Banking 36, 719-737.
Ekanayake, E.M., Rance, R., 2008. Effects of federal funds target rate changes on stock prices. The International Journal of Business and Finance Research 2(1), 13-29.
Gambacorta, L., Hofmann, B., Peersman, G., 2012. The effectiveness of unconventional monetary policy at the zero lower bound: A cross-country analysis. Working Paper No. 384, Bank for International Settlements.
Kuttner, K.N., 2001. Monetary policy surprises and interest rates: Evidence from the federal funds futures market. Journal of Monetary Economics 47, 523-544.
Labonte, M., Makinen, G.E., 2008. Federal Reserve interest rate changes: 2000-2008. CRS Report for Congress, Congressional Research Service.
McQueen, G., Roley, V.V., 1993. Stock prices, news, and business conditions. Review of Financial Studies 6(3), 683-707.
The Federal Reserve Bank of Chicago, 2000. CFNAI background release. Available from: <http://www.chicagofed.org/economic_research_and_data/files/cfnai_background.pdf>.
White, H., 1980. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48, 817-838.


Spring 2016


Outwit, Outplay, Outlast
Determinants of Employment Tenure
Kayoung Lee, Tiancheng Liu, Yaxuan Wen, Ziyi Yan, Chenlei Zhuang
Columbia University

As the authors explain in their research, the factors that determine employment tenure are of interest to employers, employees, consumers and policymakers at large. Variables such as labor productivity, employment insecurity and unemployment are directly affected by labor market mobility. Lee et al.’s study helps bridge the gap between the existing data and the possible policies that governments and companies can follow to improve the conditions of the labor market. Additionally, by analyzing the dynamics between employment tenure and variables such as gender, race and sexual orientation, this paper is a step in the right direction towards a better comprehension and reduction of discrimination in the workplace. While the present study is limited to certain populations and labor sectors within a relatively distant time frame, it is our hope that its rigor and thoroughness will inspire future studies to develop and evaluate policy tools that will lead the modern labor market to be more competitive, unbiased and fair. - M.F.P.

Research Question

Our project examines the determinants of employment tenure, the amount of time that an employee has spent working in the same position. Employment tenure provides information about labor market mobility, an important economic parameter used by managers and policymakers to study labor and wage trends. On the microeconomic level, information about labor market mobility helps organizations budget appropriately for labor costs; on the macroeconomic level, labor market mobility affects workers’ employment insecurity and labor productivity, which in turn affect consumption, the unemployment rate and long-run output growth. Policymakers often use employment protection legislation to adjust an economy’s labor market mobility to an optimal level that balances workers’ insecurity and productivity (Auer et al., 2005). Our research focuses on employment tenure, one dimension of labor market mobility, and seeks to assist managers and employers in predicting or

controlling employee turnover rate. We use panel data (1970-2002) from a sample of college graduates from three liberal arts colleges to identify the determinants of employment tenure for full-time employees. We use months at one


job as the measurement for employment tenure. In addition, as previous empirical evidence has shown discrimination in the labor market on the basis of race, gender and sexual orientation, we also investigate how the determinants of employment tenure differ for members of different groups. For the non-heterosexual group, we also analyze the effects of same-sex benefits and of whether or not the employer is aware of the employee’s sexual orientation.

Literature Review

The most relevant article for our research is a study by Mumford and Smith (2003) that identifies determinants of employment tenure from samples of individuals collected in Britain and Australia. Their study includes explanatory variables such as demographics, education, job characteristics, occupations, and workplace environment (Mumford and Smith, 2003). The study shows that in both Britain and Australia, female and nonwhite workers have significantly


shorter employment tenures than male and white workers respectively, and age

has a significant positive effect on tenure in both countries, even when the study controls for types of workplace as fixed effects. While we also use employment tenure as the dependent variable, we apply fixed effects at the individual level. Moreover, concepts related to employment tenure, such as job stability, retention rate, and occupational and job mobility, may have determinants similar to those of employment tenure, so we draw on this related literature to explore additional explanatory variables. Ahituv and Lerman (2011) suggest that marriage significantly increases job stability because it requires greater commitment. They also find that frequent job changes can strain long-term relationships, which suggests two-way causality. In Marcotte’s (1999) research on the job stability trend from 1976 to 1992 in the United States, he relates retention rate to age, race and education. Following this study’s categorization of four levels of education (high school dropouts, high school graduates, some college and college graduates), we group education by

level as well (Marcotte, 1999). Kronenberg and Carree (2012) find that having more children in the household has negative effects on job mobility. Finally, Kambourov and Manovskii (2008) show evidence from 1968 to 1997 that government workers tend to be more stable than non-government workers. Based on the previously mentioned literature, we chose marital status, salary, age, education, children, and employment sector as the independent variables for our model. We include the different advanced degrees that appear in our data as education dummy variables. We also add a new variable, fired, given that we are interested in understanding how the reason behind a job change might affect employment tenure.

Existing literature does not explicitly discuss the effect of sexual orientation on job stability. Most studies of employment discrimination against the LGBT (Lesbian, Gay, Bisexual, and Transgender) population, however, focus on wage discrimination. Leppel (2007) further investigates the effects of sexual orientation on employment status: whether a person is employed, unemployed, or out of the labor force. Her findings suggest that the unemployment rate of same-sex partners is higher than that of heterosexual couples (Leppel, 2007). She also concludes that homosexual men are less likely to be employed and more likely to be out of the labor force than heterosexual men. In our research, we explore the impact of sexual orientation in the labor market by testing whether LGBT individuals differ significantly from non-LGBT individuals in the determinants of employment tenure. Since there is evidence that race and gender are correlated with employment tenure, we also test whether females differ from males, and whether non-whites differ from whites, in employment tenure determinants in our regressions (Mumford and Smith, 2003).

Methods and Theory

Based on our analysis above, we have identified several potential determinants of employment tenure, which can be broken down into more specific variables. When selecting variables for observation, we assume that an employee decides to resign not long before the actual resignation. Since the ending period observations are closer to the time when the decision is made than the beginning period observations, it makes more sense to use the ending period observations of our independent variables. There is one exception, however: the final salary is received after the decision to resign is made, so it is more accurate to look at the beginning salary and the salary growth instead of the ending salary. The monthly salary growth variable is constructed after adjustment with the Consumer Price Index.
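The CPI adjustment behind the salary growth variable can be sketched as follows. The function names, CPI values, and salaries here are hypothetical illustrations; the paper uses the BLS CPI inflation calculator for the actual adjustment.

```python
# Sketch of the CPI adjustment behind the monthly salary growth
# variable. All names and numbers are hypothetical illustrations.
def real_salary(nominal, cpi, base_cpi=100.0):
    """Deflate a nominal salary into base-period dollars."""
    return nominal * base_cpi / cpi

def monthly_salary_growth(beg_salary, end_salary, beg_cpi, end_cpi, months):
    """Average monthly growth rate of the CPI-deflated salary."""
    beg_real = real_salary(beg_salary, beg_cpi)
    end_real = real_salary(end_salary, end_cpi)
    return (end_real / beg_real) ** (1.0 / months) - 1.0

# Example: a nominal salary rises from 3000 to 3600 over 24 months
# while the CPI rises from 100 to 110, so real growth is much slower
# than the 20% nominal increase suggests.
g = monthly_salary_growth(3000.0, 3600.0, 100.0, 110.0, 24)
print(round(g, 5))   # about 0.36% per month in real terms
```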

The original dataset contains information over a 70-year period, so we limit the use of the data with two constraints. First, data before 1970 are dropped. This allows us to look at a sample representing current characteristics of the labor market, at the cost of losing only around 200 of the more than 20,000 total observations.




Second, since full-time employees are the major component of the labor force, our regressions only include full-time observations. Furthermore, ending period observations without corresponding beginning period observations are excluded, because we use the beginning salary as a regressor. Before constructing the regression models, we test whether interaction terms are needed for the sexual orientation and gender factors. Two diagnostic models (A1 and A3) are run to test the null hypotheses that the coefficients on all the interaction terms are jointly zero. Both tests have p-values greater than 5%, so no interaction terms are used for sexual orientation and gender. Another diagnostic model (A2) tests the null hypothesis that all the non-white races have the same coefficients. Dummy variables are set up for each non-white race and an F-test of this hypothesis is run. The null hypothesis is not rejected, so only the nwhite dummy variable is used to account for racial differentials in our models. The last diagnostic model (A4) tests the effects of same-sex benefits, awareness of the employee's sexual orientation, and the boss's awareness of it on the duration of employment for the LGBT subsample. With fixed effects controlled for, this diagnostic test provides no evidence that the three variables are jointly significant, so none of them is included in the LGBT subsample analysis. We apply entity fixed effects to this panel dataset. To test whether homoscedasticity can be assumed, the main regression (C1) is run and residual analysis is conducted in test C1T0. The residual plot shows a clear pattern: the variance of the error increases as age increases.
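This residual diagnostic can be reproduced in miniature. The sketch below simulates a tenure equation whose error variance grows with age, fits OLS, and checks that the residual spread fans out with age; all numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Sketch of a C1T0-style residual diagnostic on synthetic data:
# the error standard deviation grows with age, producing the
# fan-shaped residual plot described in the text.
rng = np.random.default_rng(1)
n = 4000
age = rng.uniform(22, 65, n)
noise_sd = 0.5 * (age - 20)            # error spread rises with age
monsjob = -50 + 4.0 * age + rng.normal(0.0, noise_sd)

# Fit OLS of tenure on age and inspect the residuals
A = np.column_stack([np.ones(n), age])
coef, *_ = np.linalg.lstsq(A, monsjob, rcond=None)
resid = monsjob - A @ coef

spread_young = resid[age < 40].std()
spread_old = resid[age >= 40].std()
print(spread_young < spread_old)   # the tell-tale fan shape: True
```

Because the error variance depends on age, classical standard errors would be invalid here, which is why robust (heteroscedasticity-consistent) standard errors are the appropriate remedy.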
This relationship arises from the fact that the older the person is, the longer he or she is able to stay on each job. The result of this test affirms that heteroscedasticity should be assumed in our

main regressions, and that robust standard errors should be used in hypothesis testing. Because gender, sexual orientation, and race are assumed not to change over time, their effects on employment tenure cannot be estimated in a fixed effects regression. Therefore, pooled regressions are used for these three factors, for the aggregate sample (B1), the LGBT subsample (B2), and the heterosexual subsample (B3), respectively. All other factors are analyzed in the fixed effects models, for the aggregate sample (C1) as well as the two subsamples (C2 and C3).

Results

Our analysis focuses on six models: three pooled regressions (B1, B2, B3) and three fixed effects regressions (C1, C2, C3). The R-squared values of all six models are between 40% and 45%. The numbers of observations are 3444, 722, and 2722 for the aggregate sample, the LGBT subsample, and the heterosexual subsample, respectively. We use the pooled regression on the aggregate sample (B1) to evaluate the effects of sex, race and sexual orientation on employment tenure. The significant and negative coefficient on nwhite indicates that non-white people are expected to have shorter employment tenure. This negative relationship between race and employment tenure is supported by Mumford and Smith (2003), who find that non-white workers tend to have shorter employment tenure. As for sex and sexual orientation, since neither of the coefficients is significant, we conclude that sex and sexual orientation have no effect on employment tenure. We interpret the rest of the coefficients through the aggregate fixed effects regression model (C1). The effect of salary on employment tenure is evaluated by the coefficients on lagsal and salinc in the main regression. The coefficient on lagsal is negative, with a p-value smaller than

0.001. Thus, holding everything else the same, an individual with a higher beginning salary is expected to have shorter employment tenure. Meanwhile, since the coefficient on salinc is not significant, we conclude that monthly salary growth has no effect on employment tenure. However, a partial F-test (C1T4) is used to evaluate the overall effect of salary on employment tenure, testing the null hypothesis that the coefficients on lagsal and salinc are jointly equal to zero. The p-value smaller than 0.001 indicates that the overall effect of salary on an individual’s employment tenure is significant. The effect of education on employment tenure is evaluated by the coefficient on each of the five dummy variables: tier3, tier4, mba, lawdeg, and healthdeg. The significant negative coefficient estimates on tier3 and tier4 demonstrate that obtaining a Master’s or doctorate degree decreases employment tenure. Moreover, the significant positive coefficient on lawdeg shows that a law degree is expected to increase employment tenure. On the other hand, MBA degrees and health degrees do not exhibit significant effects on employment tenure. Furthermore, we use a partial F-test on the five education dummy variables (C1T2) to test the null hypothesis that their coefficients are jointly equal to zero. We reject the null and conclude that education has a significant effect on employment tenure. Similarly, the effect of professional degrees (MBA, law degree, health degree) on employment tenure is significant (C1T3).
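A partial F-test of the kind used in C1T2 can be sketched as follows on synthetic data. The dummy names mirror the paper's variables, but the sample size, effect sizes, and noise level are made-up illustrations.

```python
import numpy as np

# Sketch of a partial F-test for the joint significance of the five
# education dummies, on synthetic data with illustrative effect sizes.
rng = np.random.default_rng(2)
n = 2000
ageact = rng.uniform(22, 65, n)
tier3 = rng.integers(0, 2, n)      # Master's degree dummy
tier4 = rng.integers(0, 2, n)      # doctorate dummy
mba = rng.integers(0, 2, n)
lawdeg = rng.integers(0, 2, n)
healthdeg = rng.integers(0, 2, n)
monsjob = (-50 + 4.0 * ageact - 9.0 * tier3 - 14.0 * tier4
           + 6.0 * lawdeg + rng.normal(0.0, 20.0, n))

def ssr(columns, y):
    """Sum of squared residuals from an OLS fit with an intercept."""
    A = np.column_stack([np.ones(len(y))] + list(columns))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(((y - A @ coef) ** 2).sum())

full = [ageact, tier3, tier4, mba, lawdeg, healthdeg]
restricted = [ageact]              # H0: all five education coefficients = 0
q, k = 5, len(full)
F = ((ssr(restricted, monsjob) - ssr(full, monsjob)) / q) \
    / (ssr(full, monsjob) / (n - k - 1))
print(F > 2.21)   # exceeds the 5% critical value of F(5, 1994): True
```

The statistic compares how much the residual sum of squares rises when the education dummies are dropped; a large F rejects the null that they are jointly zero.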

The effect of sector on employment tenure is evaluated by testing whether or not the coefficients on the three sector dummy variables secpub, secnp, and secself are



jointly equal to zero (C1T5). The p-value of 0.62 shows that the overall effect of sector on an individual’s employment tenure is not significant. Because neither the coefficient on child nor the one on numchil is significant, we evaluate these two coefficients jointly with an F-test (C1T4) and find that children do not have a significant effect on employment tenure. The effects of being fired, the individual’s age, and the individual’s relationship status on employment tenure are measured by the coefficients on fired, ageact and ms2, respectively. The significant negative coefficient on fired shows that an individual who is fired from a job is expected to work roughly six months less in one position. Since the coefficient on ageact is significant and equal to 4.233, we conclude that the expected job length is approximately four months longer for each additional year of age. This result agrees with a previous study that found a positive relationship between age and employment tenure (Mumford and Smith, 2003).
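The entity fixed effects (within) estimator behind the C models can be sketched as follows: demean the outcome and regressor within each person, then run OLS on the demeaned data. The panel below is synthetic, and the person effects and the slope of 4.0 are illustrative assumptions.

```python
import numpy as np

# Sketch of the within (entity fixed effects) estimator on a
# synthetic panel: 200 people, 5 job spells each, with unobserved
# person effects that the demeaning removes.
rng = np.random.default_rng(3)
ids = np.repeat(np.arange(200), 5)        # person id for each spell
alpha = rng.normal(0.0, 30.0, 200)[ids]   # unobserved person effects
ageact = rng.uniform(22, 65, ids.size)
monsjob = alpha + 4.0 * ageact + rng.normal(0.0, 10.0, ids.size)

def demean(v, groups):
    """Subtract each group's mean: the within transformation."""
    sums = np.zeros(groups.max() + 1)
    np.add.at(sums, groups, v)
    counts = np.bincount(groups)
    return v - (sums / counts)[groups]

y_w = demean(monsjob, ids)
x_w = demean(ageact, ids)
beta_fe = float(x_w @ y_w / (x_w @ x_w))  # within slope estimate
print(beta_fe)   # close to the true slope of 4.0
```

Because variables such as sex, nwhite, and lgbt are constant within a person, their demeaned values are identically zero, which is exactly why those effects must be estimated with the pooled B models instead.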

However, the significant negative coefficient on ms2 shows that being in a committed relationship lowers an individual’s employment tenure. This contradicts a previous finding that married individuals have greater job stability because marriage increases commitment that permeates into other aspects of life, including work (Ahituv & Lerman, 2011). Further research should be conducted to explain this contradiction. To further investigate differences for

heterosexual and non-heterosexual individuals, we run fixed effects regressions on the LGBT subsample (C2) and the heterosexual subsample (C3). The coefficients on lagsal and ms2 are significant for heterosexual individuals, but both are insignificant for LGBT individuals. Based on partial F-tests of the coefficients on the beginning salary and the salary growth, salary does not affect LGBT individuals’ employment tenure, but does affect that of heterosexual individuals (C2T1, C3T1). This result suggests that the LGBT and heterosexual groups may have different variables that determine their employment tenure. LGBT individuals may not be as concerned about salary once they find a comfortable working environment. We also perform a joint hypothesis test on the coefficients of professional degrees and find that the impact of professional degrees is significant only for the LGBT group (C2T3, C3T3). These degrees allow

individuals to hold specialized positions, and discrimination due to sexual orientation is less likely when one has irreplaceable skills. To conclude, the subsample analyses reveal variation in the determination of employment tenure between the two groups.

Conclusion

The goal of our study is to identify determinants of employment tenure. Based on our results, we conclude that the following variables are significant in determining employment tenure: salary, education, age at the time of activity, marital status, whether or not an individual is fired, and whether an individual is white. We recognize two limitations to our regression models. First, we only look at full-time employees, so our results are not generalizable to the entire labor force. The determinants of employment tenure for part-time employees are probably very different, which is why we focus on full-time employees. Also, marital status and employment tenure have a two-way causal relationship that we are


not able to account for due to the lack of an appropriate instrumental variable. We could not identify an instrumental variable within the given data that explains marital status without having an independent effect on employment tenure. This unresolved endogeneity issue may explain why we see an unexpected negative coefficient that contradicts previous academic literature. A final caveat is that we do not differentiate between observations of the current job and past jobs. The length of the current job underestimates employment tenure in this position, as we assume that the individual’s total tenure on the current job is the number of months on the job at the time of the most recent survey. We attempted to include a dummy variable distinguishing current activity from past jobs, but it produces perfect multicollinearity in the fixed effects model and is not significant in the pooled regression, so we decided not to include the factor in our regressions.

Based on the magnitude and direction of the effects of the significant variables on employment tenure in the main regression (C1), we would like to offer some suggestions for business employment practices as well as several policy recommendations. First, we found that higher initial salaries reduce employment tenure, perhaps because they encourage employees to move to different jobs that offer even more enticing salaries. As a result, we recommend that, given the same budget constraint, businesses can generally improve employee retention by initially offering lower salaries with a commitment to salary growth. Offering lower initial salaries is not economically intuitive, because businesses want to attract workers to their companies with higher initial salaries. However, if businesses find that employees who obtain higher initial wages do not stay at the job for a long period of time, a better way to retain workers and reduce turnover may be to offer salaries that increase over the span of the employee’s career at the company. By undertaking such an initiative, businesses would not necessarily lose extraordinary job applicants, because those applicants would recognize and appreciate the high salary growth the company offers. Second, older individuals stay in their jobs longer. This makes sense because older individuals in the workforce are probably more focused on finding a stable job, whereas younger individuals are still exploring various career opportunities. Businesses hiring new employees should be aware that while younger applicants may seem more attractive than older applicants, older employees tend to be more stable and committed. We also find that people have shorter employment tenure when they have advanced degrees (Master’s or Ph.D.). The shorter length of their job tenure may have more to do with the nature of the jobs they are able to obtain (e.g., adjunct professor positions) than with personal decisions to stay in a job for a shorter period of time. This suggests that policy makers should make efforts to increase employment tenure for individuals who invest money and time into graduate degrees, because pursuing advanced degrees yields positive externalities.

An interesting direction for future research may be to look at changes in employment tenure over time for LGBT and non-LGBT individuals. A study by Marcotte (1999) looks at whether or not job stability (as measured by the employee retention rate) has declined over time. The conclusion is that the overall retention rate has declined over time, particularly for black males, high school dropouts, and those with minimal college education. It is worth applying this model to LGBT individuals in order to examine whether recent movements for LGBT rights have had a tangible impact on employment tenure for LGBT individuals, relative to changes over time for the heterosexual group. We hypothesize that, relative to today, LGBT individuals had shorter employment tenure in the past, when they faced greater discrimination.

References

Ahituv, A., & Lerman, R. I. (2011). Job turnover, wage rates, and marital stability: How are they related? Review of Economics of the Household, 9(2), 221-249. doi:10.1007/s11150-010-9101-6
Auer, P., Berg, J., & Coulibaly, I. (2005). Is a stable workforce good for productivity? International Labour Review, 144(3), 319-343.
Kambourov, G., & Manovskii, I. (2008). Rising occupational and industry mobility in the United States: 1968-97. International Economic Review, 49(1), 41-79. Retrieved from http://www.jstor.org/stable/20486788
Kronenberg, K., & Carree, M. (2012). On the move: Determinants of job and residential mobility in different sectors. Urban Studies, 49(16), 3679-3698. doi:10.1177/0042098012448553
Leppel, K. (2009). Labor force status and sexual orientation. Economica, 76, 197-207. doi:10.1111/j.1468-0335.2007.00676.x
Marcotte, D. E. (1999). Has job stability declined? Evidence from the Panel Study of Income Dynamics. American Journal of Economics and Sociology, 58(2), 197-216.
Mumford, K., & Smith, P. (2003, September). Determinants of current job tenure: A cross country comparison. Australian Journal of Labour Economics, 6(3), 435-451.
United States Department of Labor, Bureau of Labor Statistics. (n.d.). CPI Inflation Calculator. Retrieved from http://www.bls.gov/data/inflation_calculator.htm



Test C1T0: clear pattern in the residual plot. Conclusion: keep using robust standard errors in all the C models.

MODEL B1: Pooled Regression, Aggregate Sample:
regress monsjob lagsal salinc sex tier3 tier4 mba lawdeg healthdeg ageact ms2 child numchil fired nwhite secpub secnp secself lgbt if even==1 & empstat==3 & nobeg==0, robust

Linear regression    Number of obs = 3444    F(18, 3425) = 42.74    Prob > F = 0.0000    R-squared = 0.4253    Root MSE = 23.401

------------------------------------------------------------------------------
             |               Robust
     monsjob |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      lagsal |  -2.206487   .8406808    -2.62   0.009    -3.854774   -.5582006
      salinc |   63.90569   32.22673     1.98   0.047     .7201309    127.0912
         sex |  -1.476625    .828531    -1.78   0.075     -3.10109      .14784
       tier3 |  -9.467071   1.597154    -5.93   0.000    -12.59854   -6.335599
       tier4 |  -14.53425   2.399576    -6.06   0.000      -19.239   -9.829508
         mba |   7.423142   3.771837     1.97   0.049     .0278642    14.81842
      lawdeg |   6.064428   2.648349     2.29   0.022     .8719245    11.25693
   healthdeg |   1.063849   2.400124     0.44   0.658     -3.64197    5.769668
      ageact |   4.049613   .2320362    17.45   0.000      3.59467    4.504557
         ms2 |  -1.333407   .9263713    -1.44   0.150    -3.149703    .4828894
       child |  -2.976277   4.268561    -0.70   0.486    -11.34546    5.392908
     numchil |   3.108673    2.88556     1.08   0.281     -2.54892    8.766267
       fired |  -7.543428   1.772794    -4.26   0.000    -11.01927   -4.067588
      nwhite |  -2.324034   1.158038    -2.01   0.045    -4.594549   -.0535193
      secpub |   2.401385    1.31335     1.83   0.068    -.1736437    4.976413
       secnp |   .7220079   .8624362     0.84   0.403    -.9689336    2.412949
     secself |  -1.456732   3.230008    -0.45   0.652    -7.789669    4.876206
        lgbt |   .2974137   1.033824     0.29   0.774    -1.729561    2.324388
       _cons |  -55.46921    7.99803    -6.94   0.000    -71.15061   -39.78782
------------------------------------------------------------------------------

MODEL B2: Pooled Regression, LGBT Subsample:
regress monsjob lagsal salinc sex tier3 tier4 mba lawdeg

MODEL B3: Pooled Regression, Heterosexual Subsample:

Prob > F = 0.0000    R-squared = 0.4210    Root MSE = 22.55

------------------------------------------------------------------------------
             |               Robust
     monsjob |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      lagsal |  -1.988412   .9065205    -2.19   0.028    -3.765955   -.2108687
      salinc |   63.69688   37.11249     1.72   0.086    -9.074826    136.4686
         sex |  -1.580402   .9311747    -1.70   0.090    -3.406289    .2454838
       tier3 |  -8.603257   1.685805    -5.10   0.000    -11.90885   -5.297661
       tier4 |  -14.17408   2.543518    -5.57   0.000    -19.16152   -9.186642
         mba |   7.634148   3.893637     1.96   0.050    -.0006582    15.26895
      lawdeg |    5.14087   2.866667     1.79   0.073    -.4802104    10.76195
   healthdeg |   1.772943   2.583262     0.69   0.493    -3.292425    6.838311
      ageact |   3.938519   .2442041    16.13   0.000     3.459673    4.417364
         ms2 |  -1.459209   .9845275    -1.48   0.138    -3.389712    .4712934
       child |  -3.090023   4.425083    -0.70   0.485    -11.76691    5.586863
     numchil |   3.222372   2.967982     1.09   0.278    -2.597371    9.042115
       fired |  -7.201677   1.976899    -3.64   0.000    -11.07806   -3.325291
      nwhite |  -2.263592   1.251505    -1.81   0.071    -4.717596    .1904111
      secpub |   .9950308   1.362976     0.73   0.465    -1.677549    3.667611
       secnp |  -.0867119   .9630242    -0.09   0.928     -1.97505    1.801626
     secself |  -2.010758   3.446445    -0.58   0.560    -8.768691    4.747175
       _cons |  -54.26873   8.897133    -6.10   0.000     -71.7146   -36.82286
------------------------------------------------------------------------------

MODEL C1: Fixed Effects Regression, Aggregate Sample:
xtreg monsjob lagsal salinc sex tier3 tier4 mba lawdeg healthdeg ageact ms2 child numchil fired nwhite secpub secnp secself lgbt if even==1 & empstat==3 & nobeg==0, fe cl(id)

Fixed-effects (within) regression    Number of obs = 3444    Group variable: id    Number of groups = 1656

MODEL C2: Fixed Effects Regression, LGBT Subsample:
xtreg monsjob lagsal salinc sex tier3 tier4 mba lawdeg healthdeg ageact ms2 child numchil fired nwhite secpub secnp secself if even==1 & empstat==3 & nobeg==0 & lgbt==1, fe cl(id)

Fixed-effects (within) regression    Number of obs = 722
Group variable: id                   Number of groups = 344
R-sq: within  = 0.4083               Obs per group: min = 1
      between = 0.3655                              avg = 2.1
      overall = 0.3968                              max = 6
corr(u_i, Xb) = -0.1407              F(15,343) = 31.57
                                     Prob > F  = 0.0000
(Std. Err. adjusted for 344 clusters in id)
------------------------------------------------------------------------------
             |               Robust
     monsjob |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      lagsal |  -9.204841   4.836357    -1.90   0.058    -18.71749    .3078094
      salinc |     51.251   81.42637     0.63   0.529    -108.9069    211.4089
         sex |          0  (omitted)
       tier3 |  -12.36788   5.068451    -2.44   0.015    -22.33704   -2.398725
       tier4 |  -28.90012   9.721531    -2.97   0.003    -48.02144   -9.778799
         mba |  -19.28447   11.43783    -1.69   0.093    -41.78158     3.21264
      lawdeg |   20.06778    10.5718     1.90   0.059    -.7259351     40.8615
   healthdeg |   6.229736   9.641171     0.65   0.519    -12.73353      25.193
      ageact |   4.813409   .5971094     8.06   0.000     3.638952    5.987865
         ms2 |  -7.893003   4.118833    -1.92   0.056    -15.99435    .2083474
       child |  -21.77589    12.9722    -1.68   0.094    -47.29096    3.739175
     numchil |   22.18168    10.7854     2.06   0.040     .9678385    43.39553
       fired |  -12.21367   3.206802    -3.81   0.000    -18.52114   -5.906197
      nwhite |          0  (omitted)
      secpub |   10.27664   5.844737     1.76   0.080      -1.2194    21.77267
       secnp |   6.239015   3.281971     1.90   0.058    -.2163068    12.69434
     secself |  -8.706594   17.81746    -0.49   0.625    -43.75184    26.33865
       _cons |  -2.708834   44.98707    -0.06   0.952     -91.1941    85.77643
-------------+----------------------------------------------------------------
     sigma_u |  25.782047
     sigma_e |  24.693609
         rho |   .5215536  (fraction of variance due to u_i)
------------------------------------------------------------------------------

MODEL C3: Fixed Effects Regression, Heterosexual Subsample:
xtreg monsjob lagsal salinc sex tier3 tier4 mba lawdeg healthdeg ageact ms2 child numchil fired nwhite secpub secnp secself if
healtheven==1 & empstat==3 & nobeg==0 & lgbt==0, fe cl(id) R-sq: within = 0.3550 Obs per group: min = 1 deg ageact ms2 child numchil fired nwhite secpub secnp secself if between = 0.4514 avg = 2.1 even==1 & empstat==3 & nobeg==0 & lgbt==1, robust Fixed-effects (within) regression Number of obs = 2722 overall = 0.4040 max = 8 Group variable: id Number of groups = 1312 F(15,1655) = 24.71 Linear regression Number of obs = 722 R-sq: within = 0.3486 Obs per group: min = 1 corr(u_i, Xb) = 0.0316 Prob > F = 0.0000 F( 17, 704) = 10.90 between = 0.4649 avg = 2.1 (Std. Err. adjusted for 1656 clusters in id) Prob > F = 0.0000 overall = 0.4014 max = 8 -----------------------------------------------------------------------------R-squared = 0.4482 F(15,1311) = 18.66 | Robust Root MSE = 26.396 corr(u_i, Xb) = 0.0640 Prob > F = 0.0000 monsjob | Coef. Std. Err. t P>|t| [95% Conf. Interval] (Std. Err. adjusted for 1312 clusters in id) -------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------lagsal | -6.859323 1.379983 -4.97 0.000 -9.566019 -4.152627 | Robust | Robust salinc | 35.15092 34.05616 1.03 0.302 -31.64677 101.9486 monsjob | Coef. Std. Err. t P>|t| [95% Conf. Interval] monsjob | Coef. Std. Err. t P>|t| [95% Conf. 
Interval] sex | 0 (omitted) -------------+----------------------------------------------------------------------------+---------------------------------------------------------------tier3 | -9.677065 2.417932 -4.00 0.000 -14.41959 -4.934536 lagsal | -3.21118 2.187064 -1.47 0.142 -7.505129 1.082769 lagsal | -6.554984 1.394476 -4.70 0.000 -9.290633 -3.819335 tier4 | -19.03403 4.299564 -4.43 0.000 -27.46719 -10.60087 salinc | 62.89552 63.51409 0.99 0.322 -61.80418 187.5952 salinc | 33.61127 37.54484 0.90 0.371 -40.04327 107.2658 mba | -.434767 4.770797 -0.09 0.927 -9.7922 8.922666 sex | -1.445963 1.81856 -0.80 0.427 -5.016413 2.124487 sex | 0 (omitted) lawdeg | 12.15338 4.255359 2.86 0.004 3.806931 20.49984 tier3 | -11.2995 3.957351 -2.86 0.004 -19.06913 -3.529881 tier3 | -8.610229 2.774259 -3.10 0.002 -14.0527 -3.167757 healthdeg | 8.731377 7.241873 1.21 0.228 -5.472821 22.93558 tier4 | -14.98684 6.292073 -2.38 0.017 -27.34031 -2.633362 tier4 | -16.78409 4.704708 -3.57 0.000 -26.01367 -7.554512 ageact | 4.233281 .3027462 13.98 0.000 3.639475 4.827087 mba | 4.071951 11.88617 0.34 0.732 -19.26464 27.40855 mba | 2.331671 5.147508 0.45 0.651 -7.766582 12.42992 ms2 | -5.042734 1.787851 -2.82 0.005 -8.549422 -1.536046 lawdeg | 11.70919 7.160169 1.64 0.102 -2.348649 25.76704 lawdeg | 10.42031 4.672523 2.23 0.026 1.253876 19.58675 child | -5.982225 4.561646 -1.31 0.190 -14.92943 2.96498 healthdeg | -3.274209 6.254764 -0.52 0.601 -15.55443 healthdeg | 8.421219 7.642039 1.10 0.271 -6.570742 23.41318 numchil | 3.713487 3.283149 1.13 0.258 -2.726077 10.15305 9.006016 ageact | 4.027553 .3346011 12.04 0.000 3.371141 4.683966 fired | -5.874296 2.048261 -2.87 0.004 -9.891752 -1.85684 ageact | 4.292487 .5200704 8.25 0.000 3.271412 5.313561 ms2 | -4.154558 1.928619 -2.15 0.031 -7.938076 -.3710408 nwhite | 0 (omitted) ms2 | -.4460464 2.21541 -0.20 0.840 -4.795647 3.903554 child | -5.556663 4.789184 -1.16 0.246 -14.95196 3.83864 secpub | 1.703309 1.94487 0.88 0.381 -2.111355 5.517973 
child | .1201675 16.0181 0.01 0.994 -31.32881 31.56914 numchil | 2.953889 3.366616 0.88 0.380 -3.650655 9.558433 secnp | .681191 1.393953 0.49 0.625 -2.052906 3.415288 numchil | 4.196179 13.01676 0.32 0.747 -21.36014 29.75249 fired | -3.852041 2.399244 -1.61 0.109 -8.558819 .8547368 secself | -4.940851 4.852161 -1.02 0.309 -14.45787 4.576171 fired | -8.812837 4.003448 -2.20 0.028 -16.67297 -.9527091 nwhite | 0 (omitted) lgbt | 0 (omitted) nwhite | -3.299072 3.204621 -1.03 0.304 -9.59083 2.992686 secpub | -.326897 1.912353 -0.17 0.864 -4.078504 3.42471 _cons | -10.99313 13.49936 -0.81 0.416 -37.47076 15.4845 secpub | 8.013913 3.755298 2.13 0.033 .6409896 15.38684 secnp | -.6217477 1.522694 -0.41 0.683 -3.608931 2.365436 -------------+---------------------------------------------------------------secnp | 3.762101 1.967413 1.91 0.056 -.1005988 7.6248 secself | -4.337083 4.72451 -0.92 0.359 -13.60551 4.931343 sigma_u | 21.008953 secself | .93146 9.922837 0.09 0.925 -18.55044 20.41336 _cons | -8.731279 13.91629 -0.63 0.530 -36.0319 18.56934 sigma_e | 21.928148 _cons | -53.63507 19.24313 -2.79 0.005 -91.41587 -15.85427 -------------+---------------------------------------------------------------rho | .47860187 (fraction of variance due to u_i) sigma_u | 20.089917 -----------------------------------------------------------------------------MODEL B3: Pooled Regression, Heterosexual Subsample: sigma_e | 21.035606 rho | .47701698 (fraction of variance due to u_i) MODEL C1: Fixed Effects Regression, Aggregate Sample: regress monsjob lagsal salinc sex tier3 tier4 mba lawdeg health-----------------------------------------------------------------------------TEST C1T0: Residual Analysis deg ageact ms2 child numchil fired nwhite secpub secnp secself if even==1 & empstat==3 & nobeg==0 & lgbt==0, robust predict fitted generate resid=monsjob-fitted Linear regression Number of obs = 2722 graph twoway (scatter resid ageact) F( 17, 2704) = 35.47

Columbia Economics Review



Spring 2016

One Pay or Another
The Wage Effects of Low-Skilled Immigration: A Panel Analysis

Jonathan Kroah
Columbia University

The United States presidential election in 2016 has reignited a heated discussion about the effects of immigration on the incomes of American citizens. This fear of immigration's role in reducing U.S. wages has accompanied the influx of low-skilled workers over the past several decades. To stem this assumed wage decline, presidential candidates have offered an array of policy proposals, from building a wall along the U.S.-Mexico border to immediately deporting all undocumented workers. There is a notable lack of consensus among economists on this issue, opening the door for demagogues and pundits to put their own theories before the public. In this paper, Jonathan Kroah uses regression analysis to explore the effects of immigration on U.S. wages with his own adaptation of the "area approach" (where labor markets are defined geographically). He ultimately asserts that low-skilled immigrants have an insignificant effect on U.S. wages. Kroah's conclusion could have wide-ranging implications for public policy; however, additional research is needed to identify the scale and nature of alternative factors (such as labor arbitrage) that may have contributed to the insignificant correlation observed. – J.M.

Introduction

Whether, and by how much, immigration tends to reduce wages and employment opportunities is a critical question for policymakers. In the United States, the question gains added significance in the context of the ongoing public policy debate over income inequality: are less-skilled workers disproportionately and negatively impacted by increased labor market competition from less-skilled immigrants? (Borjas, Freeman, and Katz, 1997) Moreover, if policymakers wish to ensure relatively higher welfare for immigrants seeking better labor market opportunities in the U.S., then one must ascertain whether the very entry of immigrants dissolves the relatively higher wages they seek in the first place. Economic theory does not give a firm answer to these questions. On the one hand, immigration shifts the labor supply curve outward, pushing equilibrium wages downward; on the other, immigrants add to the demand for goods and

services wherever they locate, and thus may also boost the demand for labor (Altonji and Card, 1991). Moreover, as many researchers have pointed out, low-skilled immigrants are likely not perfect substitutes for low-skilled native workers due

Are less-skilled workers disproportionately and negatively impacted by increased labor market competition from less-skilled immigrants?


to their weak English language skills and the fact that their work experience from their home countries may differ qualitatively from experience acquired in the U.S. Consequently, an inflow of immigrants may not move the labor supply curve by as much as a comparable inflow of natives. Existing research provides a wide range of estimates of the net effects of immigration. Many researchers have found negative effects, some of which are relatively small: for instance, Altonji and Card (1991) find a 1.2 percent decline in the wages of less-skilled natives for each percentage point increase in the share of immigrants in the local population. Other estimates are larger: Borjas (2003), for instance, estimates that the 11 percent increase in the labor supply due to immigration during 1980-2000 reduced the wages of native high school dropouts and graduates by 8.9 percent and 2.6 percent, respectively. Still others find some positive effects: Ottaviano and Peri (2006), for instance, estimate that immigration inflows to the U.S. between 1990 and 2004 increased the wages of native-born high school graduates by 1.3 to 2.4 percent.

Investigating the labor market effects of immigration begs the question of how one should define the labor markets to be analyzed.

However, investigating the labor market effects of immigration begs the question of how one should define the labor markets to be analyzed. Existing studies disagree on whether to define labor markets geographically or by skill and experience levels: that is, should one assume that immigrants affect wages only in the vicinity of where they locate, or that immigrants affect the wages of workers with similar skills and experience nationally? Those who take the latter approach typically argue that if immigration lowers local wages, natives will "arbitrage" the resulting inter-area wage differences

by migrating to high-wage areas (or by avoiding low-wage areas) until the wage gaps disappear. By erasing the negative wage effects, this migratory response attenuates the estimated effects of immigration (Borjas, 2005). Those who take the former approach counter by hypothesizing that arbitrage—either by trade or migration between regions—is a long-run mechanism, and that immigration will likely generate short-run, inter-area wage differences. They also point to some evidence that the correlation between immigration and a host country's internal migratory patterns is in fact quite low. In this paper, I build on the literature that defines labor markets geographically—that is, I assume that the entry of immigrants affects wages and employment in the vicinity of where they arrive and seek work (at least in the short run). With this assumption, I construct two ten-year panels of local labor markets, with one recording wages and immigration at the state level and the other at the level of smaller statistical areas. The panel structure allows me to control for unobserved heterogeneity between areas and over time, and the two geographic levels enable me to investigate the likelihood that migratory patterns bias the estimated effect of immigration: if migratory patterns attenuate the estimate, then the estimate should be larger in magnitude in larger geographic areas, since transportation costs likely increase with the geographic scale of the labor market definition, and thus the migration response is likely smaller. I then estimate fixed effects models for each geographic level, relating the wages of low-skilled workers to the fraction of low-skilled immigrants in the local labor force. Finally, to control for the possibility that immigrants locate where wages are higher, I instrument the share of low-skilled immigrants with its lag, exploiting network effects in immigration patterns.

If migratory patterns attenuate the estimate, then the estimate should be larger in magnitude in larger geographic areas.

At the level of statistical areas, I find a relatively small but significant negative relationship between low-skilled immigration and both wages overall and low-skilled immigrants' wages, and a relatively small but significant positive relationship with the wages of low-skilled natives. After instrumenting, these relationships all lose their significance, but the coefficient on immigration remains negative and somewhat small: for each percentage point increase in the labor force share of low-skilled immigrants, wages overall fall by about 0.16 percent. These results are consistent with much of the previous "area approach" literature. At the state level, however, the results reverse: higher concentrations of low-skilled immigrants are significantly associated with higher wages for both natives and immigrants, and this effect increases after instrumenting (but retains significance only for native wages). However, while the instrument seems to be strongly relevant in the statistical area-level regressions, it seems weakly relevant in the state-level regressions, casting doubt on the state-level IV estimates. At face value, however, the switch from negative effects in the smaller market definition to positive effects in the larger market definition seems to contradict the hypothesis that migration responses attenuate the effects estimated for smaller, local labor markets. Altogether, the evidence presented here sides with the hypothesis that, in the short run, higher concentrations of low-skilled immigrants in the local labor market have no significant effects on the wages of low-skilled workers.

Review of the Literature

Since the 1980s, a large body of literature aiming to quantify the wage effects of low-skilled immigration (or of immigration in general) has developed, with most researchers employing one of two approaches. In the first, the analyst treats immigrants and natives as separate factors in a production function, estimates their substitution elasticities, and then simulates the effect of some shock to the supply of immigration. For instance, Grossman (1982) simulates a 10 percent increase in the number of foreign-born workers in the U.S., finding that the wages of immigrants and natives decline by 2.3 and 1 percent, respectively. Borjas (1987) performs a similar analysis, disaggregating foreign-born workers by race

and ethnicity, and finds much larger effects. For most immigrant groups, he finds own-wage elasticities close to unity, implying that a 10 percent increase in the size of the group lowers its own wage by 10 percent, while native groups tend to experience wage reductions of less than one percent in response to the same shocks. Greenwood et al. (1997) produce estimates that are small even in comparison to Grossman's (1982): they disaggregate workers by predicted earnings brackets, estimating that a 20 percent increase in the population of "unskilled" foreign-born workers lowers their own wage by about 0.7 percent and raises the wages of "unskilled" natives by about 0.3 percent. In the second major approach, the analyst exploits variation in wages and levels of immigration between labor markets to estimate the effect of immigration. However, a key dispute in the literature is whether one should define a labor market as a group of workers in a particular geographic area—say, a statistical area, or a state—who compete locally, or as a group of workers with similar levels of skill and experience who compete nationally. The latter camp argues that immigration measured at the local level is likely to be endogenous: if immigration lowers the wages in a particular local labor market, other workers may respond by migrating to higher-wage areas, thus

attenuating the estimated wage effects of immigration. Moreover, if immigrants gravitate towards areas with relatively high wages, then simultaneity bias will generate a spurious positive relationship between immigration and wages (Friedberg and Hunt, 1995). The former camp argues that such effects are likely to occur in the long run, and that inter-area comparisons are thus useful so long as one analyzes a relatively short time frame. Additionally, one may address simultaneity by analyzing a "natural experiment"—that is, a sudden change in immigration patterns due to some exogenous event (e.g., a policy change, a natural disaster)—or by instrumenting one's measure of immigration.
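The logic of the instrumenting strategy can be made concrete with a textbook two-stage least squares run by hand. The sketch below uses entirely synthetic data (all variable names, sample sizes, and the data-generating process are invented for illustration and are not taken from this paper): an unobserved demand shock moves both the current immigrant share and wages, biasing OLS, while the lagged share serves as an instrument.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # invented sample size

# Synthetic setup: the lagged share m_lag drives the current share m
# (network effects), while an unobserved demand shock e moves both m
# and wages w, creating simultaneity bias in OLS.
m_lag = rng.uniform(0, 0.3, n)
e = rng.normal(scale=0.05, size=n)
m = 0.8 * m_lag + 0.5 * e + rng.normal(scale=0.02, size=n)
beta_true = -0.4
w = beta_true * m + e + rng.normal(scale=0.05, size=n)

def ols(y, X):
    """Least-squares coefficients of y on X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), m])      # endogenous regressor
Z = np.column_stack([np.ones(n), m_lag])  # instrument

beta_ols = ols(w, X)[1]                   # biased upward by the shock e
m_hat = Z @ ols(m, Z)                     # first stage: project m on the instrument
beta_iv = ols(w, np.column_stack([np.ones(n), m_hat]))[1]  # second stage

print(round(beta_ols, 3), round(beta_iv, 3))
```

On this synthetic data the OLS slope is pulled toward zero relative to the true negative effect, while the 2SLS estimate recovers it, mirroring the spurious positive relationship described in the text.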

A key dispute in the literature is whether one should define a labor market as a group of workers in a particular geographic area—say, a statistical area, or a state—who compete locally, or as a group of workers with similar levels of skill and experience who compete nationally.

Which approach one takes seems to matter a great deal for the size of one's estimated effect. For instance, Card (1990)'s influential difference-in-differences analysis of the Mariel boatlift—a large influx of Cubans into Miami in 1980 due to a sudden policy change in Cuba—finds no significant effects on the wages of either Cuban or non-Cuban less-skilled workers in the city. However, Borjas (2015) modifies Card's analysis, focusing on the effects on high school dropouts' wages (Card (1990) defines "less-skilled" workers according to predicted wages), and finds a 10 to 30 percent decline in wages due to the boatlift. Kugler and Yuksel (2008) also select a natural experiment to mitigate the risk of simultaneity bias: they focus on Hurricane Mitch, which drove a large number of Central Americans northward into U.S. border states. Even after controlling for potential inter-area migration, they find small and insignificant wage effects on both natives and Latin Americans. Where natural experiments are not available, other studies simply exploit variation in wages and immigration densities over time and between local labor markets; these yield similarly small estimated effects. For instance, Butcher and Card (1991) compare a sample of high-immigration cities and control cities, finding no correlation between the changes in the lowest decile of wages from 1979 to 1989 and the fractions of immigrants in each city's population. Altonji and Card (1991) find significant, but small, negative wage effects on less-skilled natives: they construct a two-year panel from the 1970 and 1980 Censuses, and their preferred first-differences estimates suggest that for each percentage point increase in the share of immigrants in the local labor market, the wages of less-skilled natives (i.e., high school dropouts) fall by about 1.2 percent. LaLonde and Topel (1991) limit their analysis to the effects of the local share of immigrants on the individual wages of other immigrants, reasoning that, since immigrants are likely to be most substitutable with other immigrants (and least substitutable with natives), one may interpret any significant effects as "upper bounds" for the effects on natives. After accounting for fixed effects at the level of statistical areas, they find negative and significant—but rather small—effects on wages, which decrease with the time immigrants spend in the U.S. Overall, the "area approach" studies report relatively small effects of immigration on wages.

If migration rapidly equalizes wages between areas, how does one explain persistent inter-area wage variation?

Those who do not find the area approach compelling instead assume that workers compete in a national labor market partitioned by levels of skill and experience. For instance, Borjas (2003) constructs a panel of skill-experience groups (as opposed to geographic areas), and estimates that the inflow of immigrants between 1980 and 2000—which boosted the male labor supply by 11 percent—reduced the wages of native high school dropouts and graduates by 8.9 percent and 2.6 percent, respectively. However, there are several problems with the assumption that inter-area arbitrage occurs so rapidly as to render inter-area analyses useless. The first is a puzzle pointed out by Borjas (1994): if migration rapidly equalizes wages between areas, how does one explain persistent inter-area wage variation? If migration equilibrates wages between areas—especially those that are relatively close by (i.e., within the same state)—then one might expect to see series with means near zero, or perhaps downward trends following upward spikes as firms and workers move in response to greater


cross-area variation in wages. However, for most states, the series seems to hover around 0.1. Thus, for whatever reason, it seems that wage variation between areas that are relatively close together persisted from 2005 to 2014. Second, Friedberg and Hunt (1995) point to Blanchard and Katz's (1992) study of wage adjustments between states—which finds that, following employment shocks, wages may equilibrate across states as slowly as over a decade—as evidence that wage differentials brought about by shifts in immigration could persist for several years. Finally, some studies have attempted to gather evidence on the impact of immigration on native migration rates, with mixed results: for instance, Borjas (2005) presents evidence that immigration greatly increases out-migration in small areas and more



weakly affects larger areas, implying that attenuation due to migration should be greater when analyzing smaller local labor markets (e.g., metropolitan statistical areas), and smaller when analyzing relatively larger markets (e.g., states, Census regions). However, Card and DiNardo (2000) and Card (2001) also examine the relationship between native and immigrant population growth, and find relatively small (or even positive) relationships.

Immigrants may also be attracted to areas with particular industry compositions.

Methodology and Data

Given the mixed evidence for the hypothesis that migratory responses render inter-area analyses uninformative, I follow those studies that assume that immigrants shift the local supply and demand for labor, thus affecting wages in the vicinity of where they settle and work (at least in the short run). As in many of the aforementioned "area approach" studies, I attempt to identify the effect of low-skilled immigration by using variation between areas in the fraction of low-skilled immigrants in the labor force. Specifically, I estimate a fixed effects model:

(1)    w_it = α_i + δ_t + m_it′β + z_it′γ + u_it

where, in area i and year t, w_it is the mean log hourly wage, taken over one of three groups: (1) all low-skilled workers, (2) low-skilled natives, or (3) low-skilled immigrants. The regressor of interest is m_it, the fraction of low-skilled immigrants in the labor force. I control for various characteristics of the low-skilled labor force with z_it, a vector of four control variables: (1) the mean age and (2) the mean years of education attained (both calculated for the low-skilled population in the labor force), and the fractions of (3) low-skilled blacks and (4) low-skilled Asians in the labor force. u_it is an error term.
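A model of the form of equation (1) can be estimated with the standard within transformation. The sketch below is a minimal illustration on synthetic data (the sizes, data-generating process, and the single regressor with no controls are all invented for the example, not taken from the paper): for a balanced panel, demeaning each variable by area and by year and adding back the grand mean removes the area and year effects, and OLS on the transformed data recovers β.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_areas, n_years = 50, 10  # invented balanced-panel dimensions

# Synthetic panel: wage w depends on the immigrant share m plus
# area effects (alpha) and year effects (delta).
idx = pd.MultiIndex.from_product([range(n_areas), range(n_years)],
                                 names=["area", "year"])
df = pd.DataFrame(index=idx).reset_index()
alpha = rng.normal(size=n_areas)[df["area"]]          # area fixed effects
delta = rng.normal(size=n_years)[df["year"]]          # year fixed effects
df["m"] = rng.uniform(0, 0.3, len(df)) + 0.1 * alpha  # share correlated with alpha
beta_true = -0.5
df["w"] = beta_true * df["m"] + alpha + delta + rng.normal(scale=0.05, size=len(df))

def within(col):
    """Two-way within transformation (exact for a balanced panel):
    x_it - mean_i(x) - mean_t(x) + grand mean."""
    return (col - col.groupby(df["area"]).transform("mean")
                - col.groupby(df["year"]).transform("mean") + col.mean())

mw, ww = within(df["m"]), within(df["w"])
beta_hat = (mw * ww).sum() / (mw ** 2).sum()  # OLS slope on demeaned data
print(round(beta_hat, 3))
```

Note that the component of the share that is constant within an area (here, the piece correlated with alpha) is absorbed by the transformation, which is exactly why the fixed effects design protects against time-invariant area confounders.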
α_i represents area fixed effects, or factors that vary by area but not over time: for instance, different areas may be governed by different local labor and immigration laws or may have different attitudes or biases about immigrants—

either of which could be correlated with the relative number of immigrants in the area. Moreover, some areas or states may attract more immigrants simply because they are close to the immigrants' home countries (e.g., U.S. border states, which have large numbers of Latin American immigrants); but clearly, geographic distance does not change over time. Immigrants may also be attracted to areas with particular industry compositions: Card (1990), for instance, reports that Miami's economy had a larger share of industries employing low-skilled labor than his control cities. He conjectures that if Miami had an unusually high demand for low-skilled labor, then it could have more easily "absorbed" the large shock to the supply of low-skilled labor brought on by the Mariel boatlift, thus explaining why his estimated wage and employment effects were almost nil (Card, 1990). One can easily extend this logic to the larger areas to


be analyzed in this paper: if the industry makeup in the average area is stable in the short run, and if areas with larger "low-skilled" industries tend to attract more immigrants (or vice versa), then failing to control for this local fixed effect will bias the estimated wage effect of immigration. Similarly, δ_t represents year fixed effects, or factors that change over time (but not across local labor markets) and that may be correlated with immigration. For instance, macroeconomic conditions and federal labor and immigration laws may influence immigration patterns. These conjectures all motivate the hypothesis that fixed effects is an appropriate model for the question at hand. To implement this analysis, I generate ten-year panels of local labor markets using the 1-Year Public Use Microdata Samples (PUMS) from the 2005-2014 American Community Surveys. The PUMS cover about 1 percent of the U.S.



population, or about 3 million observations for each raw yearly sample (US Census Bureau, 2009). I restricted the data to persons in the labor force aged

16 to 64 who reported positive income for the last 12 months and who were not self-employed or working without pay for a family business, enrolled in school,

or living in group quarters. The resulting sample had about 1 million observations for each year. Next, I classified workers as "immigrants" if they were marked as "foreign-born" and either a naturalized U.S. citizen or not a citizen. To avoid overlap in my calculations of certain racial and ethnic populations, I assigned mutually exclusive race and ethnicity categories to workers; in particular, since the PUMS have data on both Hispanic origin and race, I was able to classify workers as non-Latino whites, non-Latino blacks, non-Latino Asians, or some other race. Following Card (1990) and Greenwood et al. (1997), I define workers as "low-skilled" by estimating an earnings equation from the microdata. Here, the goal is to remove some of the noise from the actual reported earnings data by predicting a person's wage based on the average wage effects of their demographic and labor market characteristics, thus enhancing the accuracy of the "low-skilled" classification. Following Card (1990), I regressed the logarithm of an individual's wage on a female dummy, a set of race and ethnicity dummies, a "married" dummy, years of education attained, potential labor market experience and its square, and interaction terms between gender and experience (and experience squared), and between gender and education. All persons with predicted wages in the lowest quartile were then labeled as "low-skilled." As a slight adjustment, persons coded as "low-skilled" were recoded as not low-skilled if their actual reported hourly wage fell in the 95th percentile of actual hourly wages, and persons with hourly wages less than $2.00 were dropped from the sample as extreme outliers. Finally, I generated two panel datasets, each with a different geographic definition of the local labor market. The first treats Public Use Microdata Areas (PUMAs)—the smallest identifiable level of geography in the PUMS—as labor markets.
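The predicted-wage classification just described can be sketched in a few lines. This is a simplified, hypothetical version on synthetic records with a reduced regressor set and invented column names, not the paper's actual code: regress log wages on demographics, predict, label the lowest quartile of predicted wages "low-skilled", then apply the top-5-percent recode.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5000  # invented sample size

# Synthetic worker records (stand-ins for the ACS PUMS variables)
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "married": rng.integers(0, 2, n),
    "educ": rng.integers(8, 18, n),   # years of education
    "exper": rng.uniform(0, 40, n),   # potential labor market experience
})
df["log_wage"] = (1.5 + 0.08 * df["educ"] + 0.03 * df["exper"]
                  - 0.0005 * df["exper"] ** 2 - 0.1 * df["female"]
                  + rng.normal(scale=0.4, size=n))

# Earnings equation in the spirit of Card (1990): log wage on demographics,
# experience and its square, plus gender interactions
X = np.column_stack([
    np.ones(n), df["female"], df["married"], df["educ"],
    df["exper"], df["exper"] ** 2,
    df["female"] * df["exper"], df["female"] * df["educ"],
])
coef = np.linalg.lstsq(X, df["log_wage"], rcond=None)[0]
df["pred_wage"] = X @ coef

# Label the lowest quartile of predicted wages as low-skilled, then recode
# workers whose actual wage lands in the top 5 percent of actual wages
df["low_skilled"] = df["pred_wage"] <= df["pred_wage"].quantile(0.25)
df.loc[df["log_wage"] >= df["log_wage"].quantile(0.95), "low_skilled"] = False

print(df["low_skilled"].mean())
```

By construction roughly a quarter of the sample is labeled low-skilled, minus the handful of workers removed by the high-actual-wage recode.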
The panel tracks wages, immigration levels, and various controls for all 2,069 PUMAs in the U.S. over ten years, from 2005 to 2014; Table 1 presents summary statistics from this panel. For both panels, I computed mean log hourly wages for all low-skilled workers, and for native and immigrant low-skilled workers separately. As Friedberg and Hunt (1995) point out, using only wages averaged over all low-skilled workers as the dependent variable would suffer from a "composition problem": as we can see in Tables 1 and 2, immigrants may tend to earn less than natives, which means


46

that average wages in PUMAs (or states) with large immigrant populations will be lower than low- immigrant PUMAs (or states), thus creating a spurious negative correlation between immigration and our measure of wages. For the sake of thor-

Spring 2016

oughness, I estimate equation (1) using all three measures of wages. The second panel treats states (and Washington, D.C.) as labor markets, with the same variables tracked over the same period. Table 2 presents summary sta-

Columbia Economics Review

tistics from the latter panel. These two definitions of the labor market follow Borjas, Freeman and Katz (1996), Borjas (2005), and Kugler and Yuksel (2008), who estimate similar models at multiple geographic levels to control for the possibility that migration in response to immigration dampens the “first-order” wage-effects of immigration.40 These authors reason that if the cost of migrating increases with the size of the labor market—e.g., if it is less costly to move one town over than to a different state— then the attenuation due to migration should be lower for larger labor market definitions—in this case, the state definition—since immigration will not induce as much out- migration as it would in a small area—in this case, the PUMA definition. Thus, by estimating equation (1) with both the PUMA-level and state-level panels and examining how the coefficients of interest change, we can control for out-migration in response to immigration. Finally, it should be noted that for both panels, each value for each variable was computed using some number of microdata observations. The average number of observations used differs greatly across each variable, which raises the possibility that some variables contain large amounts of noise; Tables A1 and A2 report detailed summary information on this point. For example, at the PUMA level, the fraction of low-skilled Asians in a PUMA was, on average, calculated using a little over four observations. A final issue is the selection of an appropriate instrumental variable to mitigate simultaneity bias. One candidate is past population shares of low-skilled immigrants. Table 3 reports the top five states ranked by what share of the total U.S. immigrant population their immigrant populations represent. In 2005 and 2014, whether one looks at workers of all skill levels or low-skilled workers only, just five states account for about 60 percent of the entire U.S. 
immigrant population, with just two (California and Texas) accounting for about 40 percent. Moreover, the immigrant populations in all five states grew between 2005 and 2014, while the low-skilled immigrant populations grew in all but two (California and Illinois). For this same reason, Altonji and Card (1991) instrument the change in the local share of immigrants with the lag of the share of immigrants: if immigrants gravitate to high-immigrant areas, then past levels of immigration are plausibly relevant instruments.41 Further, if last year's fraction of immigrants is uncorrelated with the error term u_it, then the instrument is exogenous. I follow this approach by instrumenting the fraction of low-skilled immigrants with its first lag.

Results

First, we examine the results from using the PUMA-level panel, which are reported in Tables A3 through A5. Pooled OLS results are reported in Table A3. Regardless of which dependent variable we use—low-skilled native wages, low-skilled immigrant wages, or low-skilled wages for both types of workers pooled together—the results indicate positive and statistically significant effects on wages. For this regression and the ones following, we interpret the coefficients on the low-skilled immigrant share as the percent change in wages due to a 1 percentage point increase in the low-skilled immigrant share. Thus, for example, when the full set of controls is included, we find that a 1 percentage point increase in the fraction of low-skilled immigrants

tends to raise wages by about 0.6 percent. However, if the fraction of immigrants in a PUMA's labor force is correlated with time-invariant characteristics of that PUMA, then the OLS estimates are biased. This motivates the estimates presented in Table A4, where equation (1) is implemented in full. First, we notice that the choice of a fixed effects model seems to be a good one: the 2,069 PUMA-level fixed effects—as well as the year effects—are all jointly significant at the 5% level for every specification listed, and a Hausman test comparing fixed effects with random effects prefers the former (at the 5% level) for all three measures of wages. As the leftmost set of regressions shows, the overall effect of low-skilled immigration on low-skilled wages seems to be negative and significant, but relatively small in magnitude: a 1 percentage point increase in the share of low-skilled immigrants is associated with a 0.09 percent decrease in low-skilled wages. However, when we disaggregate low-skilled wages by nativity status, this conclusion changes somewhat: if the share of low-skilled immigrants increases by 1 percentage point, low-skilled immigrants' wages fall by about 0.15 percent, while low-skilled natives' wages increase by about 0.23 percent. Finally, if immigrants are drawn to areas with relatively high wages, then it is necessary to instrument the low-skilled immigrant share in order to remove the spurious positive correlation between wages and immigration. Table A5 reports estimates for model (1), instrumenting the fraction of low-skilled immigrants in the area with its first lag, while Table A9 reports first-stage regression results. First, the instrument seems relevant across the board: in every specification, the first-stage F-statistic is above 20—well above the benchmark value of 10. Second, our conclusions change quite a bit from Table A4: all three measures of low-skilled wages are insignificantly affected by low-skilled immigration, and the sign of the coefficient switches for native wages and immigrant wages between Tables A4 and A5.
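Mechanically, constructing this instrument—each area's first lag of the low-skilled immigrant share—amounts to a within-group shift of the panel. A sketch in pandas, with illustrative column names and values rather than the paper's actual variables:

```python
import pandas as pd

# Toy PUMA-year panel (illustrative values)
panel = pd.DataFrame({
    "puma": [1, 1, 1, 2, 2, 2],
    "year": [2005, 2006, 2007, 2005, 2006, 2007],
    "imm_share": [0.10, 0.12, 0.15, 0.30, 0.28, 0.33],
}).sort_values(["puma", "year"])

# Instrument: the previous year's low-skilled immigrant share within the same PUMA
panel["imm_share_lag"] = panel.groupby("puma")["imm_share"].shift(1)

# The first observed year of each PUMA has no lag and drops out of the IV sample
iv_sample = panel.dropna(subset=["imm_share_lag"])
```

Note that lagging within each area (rather than over the pooled panel) is what keeps one PUMA's history from being assigned to another.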
Because the effect on natives was positive in Table A4, this sign change is consistent with the hypothesis that failing to instrument biases the estimated effect upward; however, the same does not hold true for the effect on other immigrants' wages, since that estimated effect became positive after instrumenting. Similarly, for all low-skilled workers' wages, the effect remains negative but roughly doubles in magnitude, from -0.09 percent to -0.19 percent, again consistent with the hypothesis that failing to instrument yields spuriously positive estimates.

Next, we repeat the aforementioned regressions with variables computed at the state level; these results are reported in Tables A6 through A8. Table A6 reports pooled OLS results, which exclude state fixed effects; as in the PUMA-level results, we find significant positive effects of low-skilled immigration on low-skilled wages, although the state-level estimates are much larger in magnitude: for instance, a 1 percentage point increase in the fraction of low-skilled immigrants living in the state is associated with a 3.24 percent increase in low-skilled wages overall. In Table A7, we implement equation (1) in full and report fixed effects estimates. The estimated effects remain positive and—with the exception of the effect on low-skilled immigrants' wages—significant, but are much smaller in magnitude than the OLS estimates: for instance, the effect on all workers' wages (+0.74 percent) is roughly one fifth of its OLS counterpart (+3.24 percent), and the effect on natives' wages (+1.38 percent) is about one third of the OLS estimate (+3.72 percent). These changes, combined with the joint significance of the fixed effects for every specification listed, suggest that state-level fixed effects are correlated with the share of low-skilled immigrants in the state, and that excluding them biases the estimated effect of immigration upward. Moreover, comparing fixed effects estimates at the PUMA and state levels, we may ask whether the hypothesis that workers migrate in response to immigration seems plausible. If they do, and if immigration has a negative effect on wages, then one would expect the estimated effect to be negative and larger in magnitude at the state level, assuming that migration costs increase with the geographic size of the labor market. However, this is not what we see: at the PUMA level, the fixed effects estimates for all low-skilled workers and low-skilled immigrants are negative, but they become positive when estimated at the state level (although the effect on immigrants is imprecisely estimated and insignificant).
Similarly, the effect on low-skilled natives is positive at the PUMA level and gains in magnitude at the state level. This pattern is not consistent with the patterns found by Borjas (2005) and Borjas, Freeman, and Katz (1996), who find that the effect of immigration dips below zero and increases in magnitude as one expands the geographic scope of the market. Finally, in Table A9, we repeat the analysis in Table A5 at the state level, instrumenting the share of low-skilled immigrants in the state with its lag. However,

these results are highly suspect: while the instrument in the PUMA-level analysis seemed strongly relevant, the same instrument at the state level seems weak, since the first-stage F-statistic is below 10 for every specification listed. While it would be of little use to draw conclusions from IV estimates with weak instruments, we may simply note that these estimates lie between the rather large pooled OLS results and the much smaller un-instrumented fixed effects estimates (except for the effect on low-skilled immigrants, which rises above the OLS estimate to +3.99 percent after instrumenting).

Discussion and Conclusion

This paper follows and builds on the existing "area approach" literature in several ways. First, it extends the "area approach" literature that uses panel methods to identify the effects of immigration: for instance, Altonji and Card (1991) employ a first differences model to estimate the effects of immigration density on average wages for various demographic groups, while LaLonde and Topel (1991) include area fixed effects in their model but regress individual-level wage data on SMSA-level population. However, these studies tend to use relatively short panels of two to three periods constructed from decennial Censuses, while my panels exploit annual ACS data to track local labor markets for a full decade. Moreover, my panels cover the entire United States, while many prior area analyses observe a much smaller sample of cities or statistical areas over time. These panels allow for controlling for time effects and local fixed effects, and the results underscore the importance of doing so: the fixed effects are jointly significant for every specification used, and failing to include them substantially affects the magnitude and sign of many estimates.

Second, we find some evidence that is inconsistent with the hypothesis that inter-area wage differences are quickly arbitraged away by native migration. First, in Graph 2, we see little change in the within-state, between-PUMA variation in wages over a ten-year period, which suggests that wage differentials may persist for at least a few years. Second, comparing the coefficients in Tables A4 and A7, we do not see a pattern consistent with the hypothesis that migration attenuates the estimated effect of immigration: moving up from the PUMA level to the state level, many estimates become positive, rather than remaining negative and increasing in magnitude as one would expect if attenuation were at play. Taken together, these two findings suggest that inter-area analyses are still useful, so long as one uses proper instruments to control for other forms of endogeneity (e.g., simultaneity between wages and immigrant shares). Third and finally, the findings presented here concur with some earlier results in the "area approach" literature: after instrumenting at the PUMA level, I find that low-skilled immigration has insignificant effects on wages; before instrumenting, I find significant (but relatively small) negative effects on low-skilled immigrants' wages and positive effects on low-skilled natives' wages. However, the sign and magnitude changes between the instrumented and un-instrumented results suggest that failing to instrument does indeed bias the estimated effect upwards. At the state level, however, the results are much less clear. Before instrumenting, low-skilled immigration seems to have a modest upward effect on wages, on the order of 0.7 to 1.4 percent for each percentage point increase in the fraction of immigrants; after instrumenting, however, very little can be said, since the same instrument that seemed strongly relevant at the PUMA level (the first lag of the immigrant share) appears to be weak at the state level.
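Throughout, a coefficient on the 0-to-1 immigrant share is read as the percent wage change per 1 percentage point increase in the share. Since the dependent variable is the log wage, this reading is a close approximation, as a quick numerical check shows (the coefficient value here is purely illustrative):

```python
import math

beta = 0.6       # illustrative coefficient on the low-skilled immigrant share
d_share = 0.01   # a 1 percentage point increase in the share

# Exact percent change in wages implied by a log-wage specification:
# wage multiplies by exp(beta * d_share)
pct_change = (math.exp(beta * d_share) - 1) * 100
# pct_change is about 0.602, i.e. essentially beta percent
```

The approximation holds because exp(x) - 1 ≈ x for small x; it would degrade for very large coefficients or large changes in the share.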
Further research could address two key issues with this study. The first would be to identify one or more instrumental variables that are strongly relevant and exogenous at all geographic levels analyzed. While it is possible to examine the attenuation hypothesis by comparing the un-instrumented PUMA- and state-level fixed effects estimates, the same is not true for the instrumented results, since the state-level results likely suffer from a weak instrument. A second improvement would be to check the robustness of these results against different definitions of "low-skilled." While workers in this study were classified as such on the basis of predicted earnings, how might the results change if we defined "low-skilled workers" as "high school dropouts" or "workers with less than a bachelor's degree"? However, the essential finding of the PUMA-level analysis—which does not appear to suffer from weak instruments—is that variation in the share of low-skilled immigrants in the local labor force has no significant relationship with the wages of either low-skilled natives or other low-skilled immigrants.

References

Blanchard, Olivier Jean and Lawrence F. Katz. 1992. "Regional Evolutions." Brookings Papers on Economic Activity 1992 (1). Brookings Institution Press: 1-61.

Borjas, George. 2015. "The Wage Impact of the Marielitos: A Reappraisal." NBER Working Paper No. 21588.

___. 2005. "Native Internal Migration and the Labor Market Impact of Immigration." NBER Working Paper No. 11610.

___. 2003. "The Labor Demand Curve is Downward-Sloping: Reexamining the Impact of Immigration on the Labor Market." The Quarterly Journal of Economics 118 (4): 1335-1374.

___. 1987. "Immigrants, Minorities, and Labor Market Competition." Industrial and Labor Relations Review 40 (3): 382-392.

Borjas, George J., Richard B. Freeman, and Lawrence F. Katz. 1997. "How Much Do Immigration and Trade Affect Labor Market Outcomes?" Brookings Papers on Economic Activity 1997 (1). Brookings Institution Press: 1-67.

___. 1996. "Searching for the Effect of Immigration on the Labor Market." The American Economic Review 86 (2): 246-51.

Butcher, Kristin F. and David Card. 1991. "Immigration and Wages: Evidence from the 1980's." The American Economic Review 81 (2).

Card, David. 1990. "The Impact of the Mariel Boatlift on the Miami Labor Market." Industrial and Labor Relations Review 43 (2): 245-57.

Chiswick, Barry. 2003. "Jacob Mincer, Experience and the Distribution of Earnings." IZA Discussion Paper No. 847.

Friedberg, Rachel M. and Jennifer Hunt. 1995. "The Impact of Immigrants on Host Country Wages, Employment and Growth." The Journal of Economic Perspectives 9 (2): 23-44.

Greenwood, Michael J., Gary L. Hunt, and Ulrich Kohli. 1997. "The Factor-Market Consequences of Unskilled Immigration to the United States." Labour Economics 4: 1-28.

Kugler, Adriana and Mutlu Yuksel. 2008. "Effects of Low-Skilled Immigration on U.S. Natives: Evidence from Hurricane Mitch." NBER Working Paper No. 14293.

Ottaviano, Gianmarco I.P. and Giovanni Peri. 2006. "Rethinking the Effects of Immigration on Wages." NBER Working Paper No. 12497.

"Public Use Microdata Areas (PUMAs)." Missouri Census Data Center. January 19, 2015. Accessed December 22, 2015.

Schaffer, M.E. 2010. xtivreg2: Stata module to perform extended IV/2SLS, GMM and AC/HAC, LIML and k-class regression for panel data models.

U.S. Census Bureau. 2009. "A Compass for Understanding and Using American Community Survey Data: What PUMS Data Users Need to Know." U.S. Government Printing Office, Washington, DC.

U.S. Census Bureau. 2015. "PUMS Accuracy of the Data (2014)." American Community Survey: PUMS Technical Documentation. October 27, 2015. Accessed December 7, 2015.

Appendix

My datasets were generated using the 2005-2014 American Community Survey (ACS) Public Use Microdata Samples (PUMS). Before running any calculations, the data were restricted to persons aged 16 to 64 who are in the labor force, who reported positive earnings for the last 12 months, and who are not enrolled in school, living in group quarters, self-employed, or working without pay for a family business or farm. I imposed these restrictions following Borjas (2003) and LaLonde and Topel (1991).
Calculating populations & subpopulation shares

In general, population sizes and shares (e.g., the number of low-skilled immigrants in the labor force, or the proportion of blacks in the labor force) were estimated by summing the person-level survey weights of all observations with the relevant characteristics.

Weeks worked

In 2005-2007, "weeks worked" is simply coded as 0 to 52 (representing the number of weeks worked in the last 12 months), while in 2008-2014, the same variable is coded on a scale of 1 to 6, with each value representing a range of weeks. Following Altonji and Card (1991), I imputed the midpoint of each range as the actual number of weeks worked.

Hourly Wages

The ACS PUMS has information on wages and income earned, weeks worked, and average number of hours worked per week for the 12 months prior to taking the survey. Thus, hourly wages were estimated by dividing wages and income by the product of hours worked and weeks worked.

PUMA Boundaries

The smallest identifiable geographic area in the PUMS data—and thus the smallest geographic labor market definition used in my analyses—is the Public Use Microdata Area (PUMA), which is redrawn every 10 years with the decennial Census. Thus, from 2005 to 2011, PUMA boundaries were based on the 2000 Census; from 2012 onward, PUMA boundaries were redrawn according to the 2010 Census. To introduce as little uncertainty as possible, I recoded later PUMAs to earlier PUMAs using a crosswalk file generated by the University of Missouri's Census Data Center.45 The file contained 2010-PUMAs, corresponding 2000-PUMAs, and an "allocation factor" that measured "what portion of the 2010 PUMA's [population] resides ([would have] resided) in the 2000 PUMA." Thus, the PUMAs were harmonized by calculating the PUMA-level variables of interest for each "later" year (2012, 2013, or 2014), merging the resulting cross section of PUMAs onto the crosswalk, and multiplying all population estimates (e.g., labor force size, number of low-skilled immigrants) by the allocation factor, thus distributing the populations calculated for the 2010-PUMAs among the corresponding 2000-PUMAs. In cases where a single 2010-PUMA corresponded to several 2000-PUMAs, I assumed that the 2000-PUMAs had the same average wage, educational attainment, etc. as the single 2010-PUMA. In cases where multiple 2010-PUMAs corresponded to a single 2000-PUMA, I took the arithmetic means of the wage, educational attainment, etc.
of the 2010-PUMAs and assigned them to the single 2000-PUMA. The result is a strongly balanced panel: in the final dataset, all 2,069 2000-PUMAs are each observed for all 10 years, yielding a total sample size of 20,690.
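The harmonization step described above, scaling 2010-PUMA population counts by the crosswalk's allocation factor and re-aggregating to 2000-PUMAs, might look like this in pandas. All names and values below are illustrative, not the actual crosswalk file's contents.

```python
import pandas as pd

# Illustrative post-2012 cross section of 2010-PUMAs with a population estimate
later = pd.DataFrame({"puma2010": ["A", "B"], "labor_force": [10_000.0, 6_000.0]})

# Illustrative crosswalk: 2010-PUMA "A" splits between two 2000-PUMAs
crosswalk = pd.DataFrame({
    "puma2010": ["A", "A", "B"],
    "puma2000": ["X", "Y", "Z"],
    "afact": [0.7, 0.3, 1.0],  # share of each 2010 PUMA's population in the 2000 PUMA
})

merged = later.merge(crosswalk, on="puma2010")
# Distribute population counts across 2000-PUMAs via the allocation factor
merged["labor_force"] = merged["labor_force"] * merged["afact"]
harmonized = merged.groupby("puma2000", as_index=False)["labor_force"].sum()
```

Averaged variables (wages, educational attainment) would instead be carried over or averaged across matching PUMAs, as the text describes.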



ENVIRONMENTAL POLICY COMPETITION

Winners

First Place: Patrick Reed, Calvin Harrison, and Daniel del Bosque, Yale University
Second Place: Ricardo Jaramillo, Charles Harper, and Simon Schwartz, Columbia University
Third Place: Olivia Cheng, Leon Chen, and Dakota Pekerti, Swarthmore College

GUIDELINES

The Columbia Economics Review invites teams of 1-4 undergraduates to participate in its fifth annual Competitive Climate environmental policy competition. Cash prizes will be awarded to top winners. Moreover, winning presentations will be recognized by The Earth Institute and will be featured in the Spring 2017 edition of the Columbia Economics Review and on the Columbia Economics Department website. Registration begins in late Fall 2016. Check www.columbiaeconreview.com/epc/ for updates and more specific submission guidelines.




On August 3, 2015, President Obama proposed the Clean Power Plan (CPP), a policy intended to limit carbon dioxide emissions from coal-burning power plants and combat climate change. While some states complied with the program, designing plans to meet the carbon emission reduction goal, the governors of five states did not submit a plan of any kind. This year, our Environmental Policy Competition asked participants to play the part of the director of environmental protection for one of these states, and to draft a plan to comply with these new regulations while minimizing the economic costs and making the plan politically tenable. Our first place winners were Patrick Reed, Calvin Harrison, and Daniel del Bosque, students at Yale, who submitted a creative and multifaceted proposal for the state of Wisconsin. Their approach succeeded by not focusing on a single policy, but developing a wedge strategy that included four distinct approaches: renewable portfolio standards, grid improvements, coal efficiency, and demand-side energy efficiency. One of the advantages of using four different avenues to lower emissions is the ability to guard against the possibility that one approach fails. The sensitivity analysis that they performed at the end of their presentation showed that, should one wedge prove unsuccessful, the proposal would nevertheless succeed by lowering emissions through the other three wedges. In this manner, their plan does not enforce rigid guidelines, but hedges its downside risk by ensuring that there are multiple robust ways to reach the goal. This approach was also appealing because it made the plan more politically acceptable. They avoided the controversial policies that have become political hot potatoes, such as cap-and-trade, even though cap-and-trade is a more effective system for lowering emissions than the four individual policies taken on their own.
In so doing, they acknowledge the political climate and make their plan not only environmentally feasible, but politically realistic. Likewise, even though their research showed that switching energy from coal to gas would provide the most dramatic decreases, they realized that they might not be able to implement it because Wisconsin Gov. Scott Walker openly opposes it. But by employing four methods, their plan can still lower emissions adequately without relying on the transition to gas at all. In several other areas of their proposal, they demonstrated a knack for maintaining political and economic viability. Their first wedge focuses on biogas, produced from animal waste on farms, as an alternative energy source—one that the citizens of Wisconsin in particular may approve of and benefit from, given the state's prominent farming industry. Some of their ideas to improve grid efficiency, such as two-way transmission via step-up transformers, were also credible because the technology has already been implemented and the infrastructure already exists in Wisconsin. Economically, they incorporate several channels through which financial support could be provided, beginning with their first wedge of renewable portfolio standards. They suggest federal grants to finance smart meters, and a revolving loan fund to incentivize coal plants to shift to gas. Overall, these considerations not only exhibit the real-world practicality of their plan, but also bolster its political achievability, given the economic criticisms that Gov. Walker and many conservatives often raise. On February 9, 2016, the U.S. Supreme Court stayed the CPP, halting the EPA's authority to enforce it pending judicial review. With the Supreme Court now down one member and at an awkward standstill, the future of the CPP rests entirely on this election cycle.
Should the left wrest control of Congress and garner enough public support, it could reinstate the CPP either through new legislation or through the courts. In their submission, Reed, Harrison, and del Bosque contributed to the public debate over the CPP and demonstrated its feasibility. Lowering carbon emissions is now more pertinent than ever. As I write this report, Columbia students remain locked in Low Library in an effort to force Columbia University President Lee Bollinger to divest from the top 200 publicly-traded fossil fuel companies. By tomorrow, this sit-in will have lasted a week. Climate change is undeniably a problem that students care passionately about. We applaud this year's submissions to the EPC for their contributions to the scholarship on this problem, and we look forward to more success in the EPC in the years to come.

Eitan Neugut CC'16 | Editor-in-Chief

Columbia Economics Review


CER Journal & Content

Online at columbiaeconreview.com

Read, share, and discuss. Keep up to date with web-exclusive content.

Columbia Economics | Program for Economic Research

Printed with generous support from the Columbia University Program for Economic Research.

