Columbia Economics Review: Spring 2017


Columbia Economics Review

Cash Me Online
Between a BlackRock and a Hard Place
Get Off My Intellectual Property
My Pay or the Highway
The Battle for Yield
A Cure for the ACA

Vol. VIII No. II Spring 2017



COLUMBIA ECONOMICS REVIEW PUBLICATION INFORMATION Columbia Economics Review (CER) aims to promote discourse and research at the intersection of economics, business, politics, and society by publishing a rigorous selection of student essays, opinions, and research papers. CER also holds the Columbia Economics Forum, a speaker series established to promote dialogue and encourage deeper insights into economic issues.

2016-2017 EDITORIAL BOARD

EDITOR-IN-CHIEF

Carol Shou

JOURNAL PUBLISHER

Ben Titlebaum

MANAGING EDITOR

Manuel Fernando Perez

DESIGN DIRECTOR

Jessica Lu

SENIOR EDITORS

STAFF EDITORS

Derek Li Jessica Bai Shambhavi Tiwari

Alex Whitman Dafne Murillo Douglas DeJong Michael Allen Chu Michael Crapotta Neel Puri Michelle Yan

LAYOUT EDITORS

Uma Gonchigar Jessica Lu

Rishi Shah Spencer Papay

CONTRIBUTING ARTISTS

Sirena Khanna (Cover) Diane Kim Nicholas DiConstanzo Amanda Ba

ONLINE CONTRIBUTORS

EXECUTIVE EDITORS

Max Rosenberg Guillermo Carranza Jordan

WEB DIRECTOR

Frank Zhu

EXECUTIVE DIRECTOR

CEC DIRECTOR

Alan Lin

Pranav Balan

TREASURER

EPC DIRECTOR

James McCarthy

Zoey Chopra

WEB EDITOR

Kevin Jiang

Antonia Camille Leggett Gabriel Kilpatrick Robert Marchibroda Jr. Christine Sedlack Ignacio Ramirez Sr.

Francesco Grechi Mitchell Mikinski Mathieu Sabbagh Zain Dylan Sherriff Cesar Herrera Ruiz Andres Rovira

OPERATIONS

CEC MEMBERS

Jenna Karp Saurabh Goel

EPC MEMBERS

Katherine Mao Makenzie Nohr Chenjie Zhao

OUTREACH

Bryan Li Randy Zhong

A special thanks to the Columbia University Economics Department for their help in the publication of this issue.

Columbia Economics Review would like to thank its donors for their generous support of the publication.

We welcome your comments. To send a letter to the editor, please email: econreview@columbia.edu We reserve the right to edit and condense all letters.

Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License

Columbia Economics | Program for Economic Research

Printed with generous support from the Columbia University Program for Economic Research



TABLE OF CONTENTS

Cryptocurrency and Big Data 6

Cash Me Online

Bitcoin Volatility Predictions Using Big Data: An Attention and Sentiment Analysis Approach

Corporate Taxation 15

Between a BlackRock and a Hard Place

The Asymmetric Effects of Corporate Tax Changes on Employment

Property Rights 24

Get Off my Intellectual Property

Intellectual Property Rights and Foreign Direct Investment

Online Labor Markets 38

My Pay or the Highway

Payment Schemes in Online Marketplaces: How Do Freelancers Respond to Monetary Incentives?

Real Estate Finance 50

The Battle for Yield

Macroprudential Policy and Non-bank Finance: Implications for Commercial Real Estate Credit

Health Care Policy 60

A Cure for the ACA

An Online Piece Examining the ACA from a Non-Partisan Lens

Competition Winners 62

EPC and CEC Winners

For a complete list of papers cited by our authors and a full version of all editorials, please visit our website at columbiaeconreview.com

Opinions expressed herein do not necessarily reflect the views of Columbia University or Columbia Economics Review, its staff, sponsors, or affiliates.


COLUMBIA ECONOMICS REVIEW

Call for Submissions Columbia Economics Review is interested in your article proposals, senior theses, seminar papers, editorials, art and photography.

GUIDELINES

CER is currently accepting pitches for its upcoming issue. You are encouraged to submit your article proposals, academic scholarship, senior seminar papers, editorials, art and photography broadly relating to the field of economics. You may submit multiple pitches. Your pitch or complete article should include the following information:
1. Name, school, year, and contact information (email and phone number).
2. If you are submitting a pitch, please state your argument clearly and expand upon it in one brief paragraph. List sources and professors/industry professionals you intend to contact to support your argument. Note that a full source list is not required.
3. If you are submitting a completed paper, please make sure the file is accessible through MS Word.
Pitches will be accepted throughout the fall and are due on a rolling basis. Send all pitches to econreview@columbia.edu with the subject “CER Pitch - Last Name, First Name.” If you have any questions regarding this process, please do not hesitate to e-mail us at econreview@columbia.edu. We look forward to reading your submissions!


A LETTER FROM THE EDITORS

Dear readers,

If there is a common thread relating the articles that we have included in our Spring 2017 issue, it is adaptation to the challenges of the modern economy. We live in a profoundly different world than that of twenty, ten or even five years ago. We can now easily exchange massive amounts of information with people thousands of miles away with just the tap of a screen. Transactions happen in fractions of milliseconds; millions of dollars are made and lost in the blink of an eye. Living in a more integrated and fast-paced environment has meant a host of new opportunities for most of us, but it has also created a generally more uncertain and volatile world. The economists of the future will need to be able to adapt to these circumstances, abandoning existing dogmas and quickly building new models and frameworks to respond to the challenges that this new world poses.

The internet has been one of the enablers of many of these changes. We now buy, work and even date through the web. Markets that had traditionally remained unchanged for decades suddenly found themselves deeply transformed. This process of disruptive innovation, a term coined by Clayton Christensen, has led to unforeseen consequences. The rise of the gig economy, for example, seems a likely determinant of the future structure of labor markets. This phenomenon is studied by Justine Moore in a paper we publish here under the name “Payment Schemes in Online Marketplaces” (38). Moore analyzes the responses of freelancers in an online labor market to different monetary incentives. Understanding their effect on labor outcomes will lead to better contract designs, as well as higher efficiencies from the unique features of online platforms. We commend the author for her experimental design and innovative work.

Another way in which modern markets have changed is in payments themselves. The rise of cryptocurrencies is a still relatively unexplored field, in which Columbia alumnus Linan Qiu’s piece, “Bitcoin Volatility Predictions Using Big Data” (6), serves as a timely contribution. His multidisciplinary approach and thought-provoking results have earned him a distinguished place in this feature.

The impact of the creative process, however, is not limited to businesses and markets. Innovation and technological progress are known to be key factors in the development of most economies. The question of how to protect these innovations, and the implications for property rights, is then a most relevant one for current and future generations. Vincent Ramos contributed to this issue with his paper on “Intellectual Property Rights and Foreign Direct Investment” (24), which we are also proud to feature as this journal’s crucial international addition.

The remaining two papers deal with more traditionally studied areas of economics. However, their relevance to understanding today’s world cannot be overstated. Brendan Moore is the author of “The Asymmetric Effects of Corporate Tax Changes on Employment” (15), which we are sure offers valuable insight into the effects of taxation policy and adds positively to an already rich literature. Finally, Aidan Thornton’s paper on “Macroprudential Policy and Non-Bank Finance” (50) is particularly relevant within the current political and economic transition environment. Managing risk has become essential, particularly after the 2008 Financial Crisis. The policies of the future will be decided on papers like these, and as such their real-world implications should concern us all.

Finally, there are still a couple of reasons to be cautious about the present and future of economics and the world it studies. Political visions and agendas have shifted dramatically during the past few years. The view that policy decisions should be founded on the sound and rigorous analysis of facts continues to be challenged by opponents from all sides. This presents a threat not only to academia, but more generally to the sensible and intelligent discourse that has enabled the great achievements of the modern world. The economists of the future will need to stand their ground and adhere to solid principles and values if they are to make a real impact. It is our strong belief at the Columbia Economics Review that by continuing to promote and feature undergraduate research, we are ultimately allowing more students and economists to engage with the world in a thoughtful and critical manner.

All the best,

Manuel Perez, SEAS ’18 | Managing Editor
Carol Shou, CC ’17 | Editor-in-Chief
Ben Titlebaum, CC ’19 | Publisher


Cash Me Online
Bitcoin Volatility Predictions Using Big Data: An Attention and Sentiment Analysis Approach
Linan Qiu, Columbia University

Bitcoin has given the world of economics an exciting social and financial experiment that is only beginning to be explored. One of the most interesting aspects of this cryptocurrency is its highly speculative nature. We are excited to include Linan Qiu’s paper because of its interdisciplinary exploration of the relationship between social media activity and price movements and volatility. Its use of models from computer science makes this a unique piece among those that the Columbia Economics Review has published in the past, and the way it uses them in tandem with standard economic theory makes for a very interesting read. -D.D.

Bitcoin is a peer-to-peer electronic crypto-currency. These Bitcoins can be traded on their own for goods and services, or exchanged for a variety of “real” currencies on exchange markets. The prices of Bitcoins on these exchanges are volatile – recent prices have fluctuated around $350 (1-year high-low of $934-$294), meaning that the Bitcoin monetary base is currently over four billion dollars. In this paper, I address two main topics. First, I attempt to map the “chatter” data available on the internet on Bitcoin users and usage by constructing a large dataset from several prominent sources of Bitcoin chatter. Given that Bitcoin still has a rather technically challenging adoption learning curve, and after conversations with Bitcoin enthusiasts and traders, I mined data from Reddit and the Bitcointalk forum. Then, using daily Bitcoin market data, I found significant relationships between past social media chatter and future volatility. Specifically, I found that past attention (measured by a 10-day trailing moving average) correlates significantly and positively with trading activity (measured by Bitcoin exchange trade volume) and future volatility (measured by a 10-day forward standard deviation). Furthermore, changes in sentiment correlate positively with future price changes. This was done

by extracting the top N intra-day changes in sentiments, and looking at the mean of the corresponding contemporaneous and forward price changes.

Introduction

Bitcoin is a peer-to-peer electronic crypto-currency that is the implementation of a paper posted on the internet in 2008 by a user under the name of Satoshi Nakamoto. It is inspired by some digital currency precursors such as Hashcash and ecash. However, its main breakthroughs lie in several factors:
• Peer-to-peer and Decentralized: In order to use Bitcoin, one must download the Bitcoin software (or use a web service). This connects the user to all other Bitcoin users. This forms the Bitcoin network. The currency is decentralized, meaning that no single user controls the flow of the currency. There is no central authority or middleman. Instead, users agree by consensus on every transaction that happens. This prevents people from duplicating coins (since an overwhelming majority of the network will have a different “receipt” from yours).
• Wallets: Bitcoins are stored in wallets, which are simply data files of numbers on a user’s computer. However, one cannot

simply add more coins to his own wallet without receiving them from someone else (i.e., forging Bitcoins). This is because, behind the scenes, the Bitcoin network is sharing a public transaction record (the ledger) called the blockchain.
• Blockchains: This ledger contains every transaction ever processed, allowing a user’s computer to verify the validity of each transaction. The authenticity of each transaction is protected by cryptographic techniques (hence the term cryptocurrency). Most interestingly, the existence of this ledger means that every single transaction in the Bitcoin universe is recorded. Hence, there exists a wealth of data for Bitcoin movements. By tracking a wallet address in the ledger, one can find out how many Bitcoins are sent to and from that address. These transactions are then processed using specialized hardware. Processors of these transactions earn Bitcoins in reward for their service in verifying transactions. This process is called “mining”.
• Mining: With Bitcoin, miners can use special software to solve math problems and are issued a certain number of bitcoins in exchange. Successful miners are rewarded with newly created bitcoins and any transaction fees. The economics of mining is an interesting topic on


its own, and is beyond the scope of this paper.
• Money Supply: Successful miners are rewarded with newly created bitcoins. At the time of writing, this reward amounts to 25 bitcoins. The bitcoin protocol specifies that the reward for adding a block will be halved approximately every four years. Eventually, the reward will be removed entirely when an arbitrary limit of 21 million bitcoins is reached, around 2140.
With the popularization of Bitcoin post-2012, several other factors have made Bitcoin unique amongst its peers.
• Acceptance by Merchants: Firms accepting Bitcoin include Dell, Atomic Mall, Expedia, Virgin, Microsoft, OKCupid, and PayPal.
• Exchanges: Bitcoins can be bought and sold with many different currencies from individuals and companies. They may be purchased in person or via online exchanges. For online trading, most trading volume is concentrated in the largest exchanges such as Bitstamp and BTC-e. One can visualize an exchange as a facade for a single Bitcoin wallet that many users trade with. Due to the existence of the Bitcoin blockchain, one can track exchanges’ wallet addresses and find out the trading volume of each exchange. Many websites collate data from the top exchanges, making this data widely available for studies such as this thesis.
Financial interest in Bitcoins was insignificant before January 2013. Following the publication of Satoshi’s paper in 2008, the first recorded transaction on the Bitcoin blockchain took place in January 2009. However, it took another ten months, until October 2009, for the price of a Bitcoin to emerge. This rate, valuing a Bitcoin at 0.076 cents, is generated using an equation that accounts for the cost of the electricity to run the computer that generated the Bitcoins. Only in February 2010 was an exchange born. Until 2013, Bitcoin was a hobbyist novelty with little attention from investors, remaining a technology experiment with a strong libertarian ideology. During this period, Bitcoin slowly and steadily crept up in price to around $10 in January 2013 without major fluctuations. Financial attention on Bitcoin increased in 2013 when hedge funds started to invest in the currency. Furthermore, news about Bitcoin’s anonymity and use as a means to transfer money internationally spread to China. Investors backed


Bitcoin exchanges rapidly. This caused the surge in Bitcoin prices to the $1000s in late 2013. However, the announcement of China’s ban on yuan-denominated Bitcoin exchanges caused a slide that was just as rapid. Meanwhile, coverage of the currency increased from internet subculture forums to traditional news agencies. This was followed by large fluctuations in Bitcoin’s prices in 2013. The volatility persisted into 2014, when the 1-year high-low of Bitcoin prices was $934-$294. The large amount of volatility further attracts speculative investment in Bitcoin.

Bitcoin is interesting to this study primarily for three reasons: its use as a speculative asset, the open nature of Bitcoin’s chatter streams, and the suitability of the chatter for text mining.
• Bitcoin as a Speculative Asset: Bitcoin’s high volatility is attractive to speculators. While the largest percentage jumps in prices happened in 2011, those changes were against a small market price and illiquid environment. The trend in volatility continued well into 2014, presenting the potential for big returns to investors. Hence, for risk-loving investors, high Bitcoin volatility is a gift. Furthermore, Bitcoin lacks intrinsic value. Only a select group of merchants accept Bitcoin as payment. While Bitcoin is legal in some countries, Bitcoin is legal tender in none. This means that aside from the transactional value of Bitcoin (which remains pegged to its market price, since merchants usually instantly cash out Bitcoins and apply a floating exchange rate to the products’ actual prices in local currencies), Bitcoin’s movements are purely speculative.
• Open Access to Speculative Chatter: There has been research into social media as a predictor for stock markets. In particular, Twitter is a popular platform for analysis. However, in “real” financial markets, much of the chatter happens in secured communication platforms such as Bloomberg messengers. This means that much of social media chatter reflects the opinions of retail investors. Opinions of proprietary and day traders are unlikely to be represented. Unlike the “real” financial markets, much of speculative chatter in the Bitcoin universe happens on sub-culture forums and chat rooms. In particular, from anecdotal evidence of Bitcoin traders and cryptocurrency enthusiasts, Reddit and Bitcointalk are the top frequented locations for Bitcoin speculators. Perhaps owing to the libertarian slant of the currency’s founding principles, these forums are open to anyone. And while it is not trivial to scrape and analyze this data, it is still possible to access it.
• Purity of Speculative Chatter: Furthermore, from a natural language analysis perspective, Bitcoin is a much easier target for analysis than equities, bonds, or global-macro investments due to the lack of ambiguity in keywords. For example, when one parses the word “Apple” in an attempt to analyze Apple Inc., one will have a hard time disentangling the fruit apple and the company. Though possible, one will still have to disambiguate comments on the company’s products


versus comments on the company’s financial desirability. This is not a problem for Bitcoins, since the word is never used outside its meaning as a financial asset. This makes natural language analysis a lot easier.

Existing Literature

This research topic can be broken down into two distinct areas:
• Dataset and Natural Language Analysis: Analyzing social media streams for attention and sentiments is a well-researched subject in computer science. Particularly, sentiment analysis is a subfield of natural language processing (NLP), a fast-moving field in computer science. Sentiment analysis combines several research areas such as text normalization (converting raw text, including words, phrases, or sentences, into uniform-length feature vectors) and machine learning (classifying text into sentiments via various statistical techniques such as neural networks or support vector machines). Authors in the computer science space have used Twitter data to predict a swine flu pandemic and movie ratings and revenues. Specifically, the latter interests us since the authors correlated social media attention against popularity, first-weekend box-office revenues, Hollywood Stock Exchange prices, and revenues for all movies for a given weekend. They conducted sentiment analysis by accounting for subjectivity and polarity. This provides us with a model to adapt for Bitcoins.
• Financial Implications of Bitcoins: There is an abundance of literature devoted to Bitcoins after 2012, when Bitcoins became more mainstream. One paper listed three areas: money demand and supply fundamentals, attractiveness to investors, and exogenous economic variables. In considering attractiveness to investors, the authors used variables such as daily volume of Bitcoin views on Wikipedia, new members on Bitcointalk, and new posts on Bitcointalk. They found statistically significant correlations for all three types of variables. However, I found the proxies for attractiveness to investors to be unsatisfactory, since Wikipedia data is unlikely to be as up to date as forum and chatter data, and is not reflective of the sub-culture slant of the Bitcoin universe. Furthermore, without analyzing the messages directly, little can be drawn as to the direction of the attention. This paper aims to improve on the source selection and analysis. Beyond this, most economic literature on Bitcoin has either focused on the regulatory aspects of cryptocurrency or on the game-theoretic aspects of individual miner decisions, both subjects which are beyond the scope of this paper.


Dataset

After consulting Bitcoin enthusiasts and speculators, two sources were selected for being the most representative and influential of financial chatter in the Bitcoin universe – Reddit and Bitcointalk.
Reddit (http://www.reddit.com/) is an entertainment, social networking, and news site where registered users can submit content such as text posts or direct links. It is responsible for much of the internet subculture, and according to Bitcoin enthusiasts, Reddit is a place where active discussions on bitcoin news and events take place and new cryptocurrencies are announced. For Reddit, I scraped the following subreddits that I deemed most relevant to Bitcoins:
• /r/bitcoin: general discussions on bitcoins
• /r/bitcoinmining: discussions on bitcoin mining
• /r/silkroad: discussion on the Silk Road
As shown in Figure 3.3, there is significant variation in Reddit messages. Furthermore, the data seems to be dominated by messages from the /r/bitcoin subreddit, a subreddit that is presently more frequented than the other subreddits.
Bitcointalk (https://bitcointalk.org/) is the main Bitcoin discussion forum, which includes subforums for technical support, mining, development, and economics. It is the dominant discussion board in the Bitcoin community since it was originally official and created by Satoshi himself. Bitcointalk has a typical forum structure consisting of forum boards, each containing posts. The boards considered are:
• Trading Discussions: discussions on Bitcoin trading
• Economics: economics of Bitcoin
• Currency Exchange: exchange activity on Bitcoins
• Speculation: speculative activity
• Legal: Bitcoin legal discussion
• Securities: topics about individual Bitcoin bonds and stocks
• Mining Speculation: speculation regarding mining profits
Each of the sources was mined using web scraping techniques, where standard requests were made to a website simulating those of a typical user. The content returned was then parsed to extract the relevant portions (message content, author name, time of posting, and source of message). Efficiently parallelizing the scraping process is key, since the


time taken to make an internet request (usually on the order of hundreds of milliseconds) is much longer than the time spent parsing the text (accomplished in a few milliseconds). Hence, it is essential to do this asynchronously (start a request while processing some other returned text) instead of linearly. This entire process was done in node.js.
Reddit has an open Application Program Interface (API) that acts as an official way to request formatted data from Reddit. Hence, there is no need to code a bot to traverse the page manually in search of message content, author names, etc. The data returned can be directly stored into a JSON file for fast retrieval. With Bitcointalk there is no open API, necessitating the coding of bots that scraped data off HTML code. The site also actively prevents users from hitting the site with too many requests simultaneously. Hence, measures had to be taken to circumvent rate limits. The scraping process took a week.
In total, I scraped 1,836,666 messages belonging to various sources. The dataset was around 10GB, with a concatenated, cleaned table version sizing up to 500MB. Due to the difficulty of mining Bitcointalk data, Bitcointalk data is truncated to the start and end date of Reddit data, arriving at an overall time frame of 1 Jan 2014 to 1 Oct 2014. The data is distributed unevenly, with some subsources taking up a significantly larger share of the data. This should be a source of concern, since I do want to consider as many different sources as possible and a simple bin-counting approach will not sufficiently represent this. We deal with this problem via Principal Component Analysis, which will be elaborated on in the next section.
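The author's scraper was written in node.js; purely as an illustration of the asynchronous pattern described above, here is a minimal analogous sketch in Python using asyncio and aiohttp. The URLs are placeholders, not the endpoints actually scraped.

```python
import asyncio
import aiohttp

# Placeholder pages; the actual scraper walked Reddit's API and Bitcointalk's HTML boards.
URLS = [
    "https://www.reddit.com/r/bitcoin/.json",
    "https://bitcointalk.org/index.php?board=57.0",
]

async def fetch(session, url):
    # The network round trip takes hundreds of milliseconds; awaiting it lets other
    # requests proceed in the meantime, while parsing itself is cheap.
    async with session.get(url) as resp:
        body = await resp.text()
    return url, len(body)

async def main():
    async with aiohttp.ClientSession() as session:
        # Launch all requests concurrently instead of one after another.
        results = await asyncio.gather(*(fetch(session, u) for u in URLS))
    for url, size in results:
        print(url, size)

asyncio.run(main())
```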

Attention Model

Aside from the intuition that higher attention to an asset generates higher volatility and trading, the importance of attention is built into Bitcoin’s algorithm and is absolutely essential to its survival. The peer-to-peer algorithm Bitcoin uses for general consensus means that the more people there are hooked onto the Bitcoin network, the more stable the algorithm is. Furthermore, when the last user of Bitcoin signs off the network, consensus is lost and Bitcoin essentially dies. In fact, Bitcoin was never the only cryptocurrency – there are a few other major cryptocurrencies and many more minor cryptocurrencies based on slight modifications of the original Bitcoin code. They usually die out within a few weeks due to lack of adoption. Hence, attention is an important factor of analysis for Bitcoins.
In this thesis, I created two time series for attention:
• Attention (All): number of messages per day
• Attention (Unique): number of authors posting per day (so multiple messages by the same author count as one message)
A naive vertical summation of daily messages (bin-counting) does not reflect multiple sources of information, since some sources will dominate, so the approach will have to be modified a little. I used Principal Component Analysis (PCA) to reduce the dimensionality of the data. The first principal component for both time series (Attention (All) and Attention (Unique)) accounted for around half of the proportion of explained variance. Hence, the scores of the first component serve as a suitable “index” for the various sources of attention.
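As a sketch of how the first principal component can serve as a single attention index across sources, here is a minimal example on hypothetical daily message counts (not the author's data):

```python
import numpy as np
import pandas as pd

# Hypothetical daily message counts per source (rows = days, columns = sources).
rng = np.random.default_rng(0)
counts = pd.DataFrame(
    rng.poisson(lam=[200, 40, 25], size=(300, 3)),
    columns=["r_bitcoin", "r_bitcoinmining", "bitcointalk_speculation"],
)

# Standardize each source so that high-volume sources do not dominate the component.
z = (counts - counts.mean()) / counts.std()

# Principal components via SVD of the standardized matrix.
u, s, vt = np.linalg.svd(z.values, full_matrices=False)
explained = s**2 / (s**2).sum()
attention_index = pd.Series(z.values @ vt[0], index=counts.index, name="attention_pc1")

print(f"variance explained by the first component: {explained[0]:.2f}")
```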

Finally, there is a strong intuition for including weekday dummies in the attention time series. The number of users logging in is significantly different on weekends as opposed to weekdays, and a quick observation of the time series can reveal such cyclical variations. Hence, a simple regression (with 0 intercept) is executed to remove the “weekday effects” as such:
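The regression equation itself did not survive the page extraction. A plausible reconstruction of the zero-intercept weekday regression described above, writing $A_t$ for the attention index on day $t$ and $D_{d,t}$ for a dummy equal to one when day $t$ falls on day-of-week $d$ (notation assumed, not the author's), is

$$A_t = \sum_{d=1}^{7} \beta_d D_{d,t} + \varepsilon_t.$$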


The residual of the regression is then extracted as the “filtered” time series. The results of this regression show that most weekdays have a significant effect on the variation. Indeed, on Saturdays and Sundays, attention tends to increase. However, the region before September 2013 is still noisy, since the average number of messages per day did not exceed


100. Furthermore, the variation in prices is not significant enough for analysis. Hence, the range of analysis is shortened to September 2013 to October 2014. There is a significant effect on weekends. Furthermore, on these days, the coefficient is negative (the loadings for the first component are negative, so this should be interpreted as an increase in attention during those days).
To test the hypothesis that past attention affects future price volatility, I created two additional measures:
• Past attention (PAt): measured by the 10-day trailing moving average of the attention time series
• Future volatility (FSD): measured by the 10-day forward standard deviation of the market price
I find that there is a significant and positive relationship between past attention and future volatility of the market price. This corroborates anecdotal evidence that attention is correlated with volatility. Given that past attention is correlated with future volatility, one can expect speculative trading volume to increase. Since most speculative trading in the Bitcoin economy is done via exchanges, we should expect attention to correlate with exchange trade volume (fueling the market price volatility). Given that I have a measure of past attention, and that each exchange has a known address, I can test this hypothesis as well. I find that past attention correlates positively and significantly with exchange trade volume as well. This result is intuitively expected since past attention is expected to correlate with future trading activity.
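A minimal pandas sketch of the two constructed measures, assuming a filtered daily attention series and a daily price series (hypothetical variable names):

```python
import pandas as pd

def attention_volatility_measures(attention: pd.Series, price: pd.Series) -> pd.DataFrame:
    """Past attention (10-day trailing mean) and future volatility (10-day forward std of price)."""
    past_attention = attention.rolling(window=10).mean()
    # A forward-looking window: compute the trailing 10-day standard deviation,
    # then shift it back so each day is paired with the volatility of the next 10 days.
    future_volatility = price.rolling(window=10).std().shift(-10)
    return pd.DataFrame({
        "past_attention": past_attention,
        "future_volatility": future_volatility,
    }).dropna()
```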


However, attention lacks directionality – simply counting the number of messages gives no indication of the positivity of the messages. This limitation prevents me from drawing conclusions about the direction of market price movements. Hence, I could only establish a relationship between market activity and attention. This is further shown by the fact that regressions of attention against market price and returns yielded inconclusive results. Hence, there is a need to move from attention to sentiment.

Sentiment Model

I used machine learning techniques to extract the sentiments from the messages. Essentially, models were trained to map a message (represented by a vector of dimension N) to a sentiment. Distilling a message from plain text to the N-dimensional vector is text normalization, and finding a function f that fits the provided data x such that y = f(x) is the process of classification. One can arrive at naive estimates for x, such as assigning each word in the vocabulary (of size N) of all words in the data a position in the N-sized vector. Then, the a-th entry of x_i is set to 1 if the word corresponding to a appears in sentence i. For example, if there are two sentences, “Linan feels awesome today” and “Bitcoins are awesome”, there are 6 distinct words in this corpus: Linan, feels, awesome, today, Bitcoins, are. Then, each vector could be constructed as (1, 1, 1, 1, 0, 0) and (0, 0, 1, 0, 1, 1). However, this has a problem, since we are dealing with 1.8 million messages, and N is likely to be on the order of hundreds of thousands. Manipulating a 1,800,000 × 100,000 matrix is not computationally easy, and this is known as the problem of dimensionality. Furthermore, we have entirely ignored the position of words in the sentence, so features like negations will be lost.
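The naive representation can be reproduced in a few lines; here is a sketch using scikit-learn's CountVectorizer on the two example sentences (binary word-presence indicators; note the vocabulary is sorted alphabetically, so the column order differs from the listing above):

```python
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["Linan feels awesome today", "Bitcoins are awesome"]

# binary=True yields 0/1 word-presence indicators rather than counts.
vectorizer = CountVectorizer(binary=True, lowercase=False)
X = vectorizer.fit_transform(sentences)

print(vectorizer.get_feature_names_out())  # the 6-word vocabulary
print(X.toarray())                         # one 0/1 vector per sentence
```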

Instead, I apply two more sophisticated approaches:
• a Recurrent Neural Network Language Model (RNNLM)
• distributed representations to obtain document vectors, which are then classified with linear Support Vector Machines (SVM)
While these models are more sophisticated, they are not more complex linguistically. These are statistical models of language that attempt to learn how languages are used by consuming large amounts of training data instead of deterministically modeling certain quirks of the language. In fact, Frederick Jelinek, a pioneer of the statistical approach for automatic speech recognition, once said, “Every time I fire a linguist out of my group, the accuracy goes up.”
A language model (LM) attempts to assign a probability to each sentence x. It does so by learning a probability distribution p such that p is a function that satisfies
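The constraint itself is missing from the extracted text; presumably it is the usual requirement that $p$ defines a probability distribution over the set $\mathcal{X}$ of possible sentences (notation assumed):

$$p(x) \ge 0 \ \text{for all } x \in \mathcal{X}, \qquad \sum_{x \in \mathcal{X}} p(x) = 1.$$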

One can construct a language model from recurrent neural networks. An artificial neural network is a type of learning algorithm inspired by biological neural networks (such as the brain, where clusters of neurons fire at each other to produce interactions). It is used to approximate functions (such as the ones we need in language models and classification) that depend on a large number of inputs (usually too large to derive analytically). The basic unit of a neural network is a perceptron. A perceptron is essentially a function that produces an output based on its input and a threshold. Each perceptron takes inputs and weights and outputs a number between 0 and 1. These perceptrons can be layered into networks, essentially passing on the output of one perceptron to another. Most modern neural networks, however, use a less discrete function instead of a perceptron to smoothen out changes in output. This does not alter the behavior of the perceptron other than smoothing out the values in the middle. This modified perceptron is termed a sigmoid neuron. Then, one can simply pass the neural network inputs, and keep modifying the weights of each sigmoid neuron until the outputs best match the training data. This process is referred to as “training” the neural network, and is performed via stochastic gradient descent. A recurrent neural network is simply a modification of the standard neural network that allows for back cycles. The addition of the context units allows the hidden layer’s output to be transferred back as input to itself.
The distributed representation of words, phrases, and sentences is, in turn, an attempt to normalize text into uniform-length feature vectors that will make texts easier to process. Essentially, the distributed representation approach is a simpler version of the neural network model described earlier, where only one



single hidden layer exists with no back cycles (usually neural network models have more layers to model higher-level abstractions of language). This deficiency is made up for by requiring a larger dataset and better statistical methods to understand these larger datasets. This approach attempts to understand the relationships between words and maps each word to a position in an N-dimensional space. Spatial distance between words reflects relationships: closer words are similar contextually, and this process captures many linguistic regularities. The initial paper describing this approach allowed only for words, hence the implementation is termed word2vec. However, the authors improved on this algorithm to include sentences and documents, allowing entire sentences and documents to be mapped spatially and essentially allowing a user to find how similar sentences are to each other by taking the cosine distance of the vectors of each sentence. These vectors can, in turn, be used to train statistical models used in classification, such as Support Vector Machines (SVM).
A support vector machine (SVM) is essentially a linear separator in multiple dimensions. To visualize this, in two dimensions a support vector machine essentially looks for the optimal separating line between two classes by maximizing the margin between the classes’ closest points – the points lying on the boundaries are called support vectors (hence the name) and the middle of the margin is the optimal separating hyperplane. Newer points of unknown classes can then be classified based on their position in the map. In higher dimensions, lines are generalized to hyperplanes.
To train the models, I used the Cornell Internet Movie Database Corpus collated by Pang, which assigns a sentiment rating to each of the 75,000 movie reviews. Each of the movie reviews is labelled “positive” or “negative”, with equal numbers of each. I split the training corpus into two: a training set and a testing set. 50,000 reviews were used for training the two models, with 25,000 each for positive and negative. After training the models, the remaining 25,000 (12,500 for each sentiment) were used to test the accuracy of the models. The trained models produced state-of-the-art accuracy on the test data. Each model produces a probability estimate of the message being “positive” or “negative”. Each model is evaluated individually. The joint result is evaluated as well. The trained models are then used to predict Reddit and Bitcointalk messages. The messages are first cleaned by converting all characters to lowercase and removing non-alphanumeric characters. Messages are passed to the trained models and classified.
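A minimal sketch of the second pipeline (document vectors plus a linear SVM), assuming gensim and scikit-learn are available; `train_reviews` is a hypothetical list of (token list, label) pairs standing in for the labelled movie reviews:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import LinearSVC

def train_sentiment_model(train_reviews):
    """train_reviews: e.g. [(["an", "awesome", "movie"], 1), (["terrible", "plot"], 0), ...]"""
    docs = [TaggedDocument(words=tokens, tags=[i])
            for i, (tokens, _) in enumerate(train_reviews)]
    d2v = Doc2Vec(vector_size=100, window=5, min_count=2, epochs=20)
    d2v.build_vocab(docs)
    d2v.train(docs, total_examples=d2v.corpus_count, epochs=d2v.epochs)

    X = [d2v.infer_vector(tokens) for tokens, _ in train_reviews]
    y = [label for _, label in train_reviews]
    svm = LinearSVC().fit(X, y)
    return d2v, svm

def classify_message(d2v, svm, message: str) -> int:
    # Mirror the cleaning step: lowercase and keep alphanumeric tokens only.
    tokens = [t for t in message.lower().split() if t.isalnum()]
    return int(svm.predict([d2v.infer_vector(tokens)])[0])
```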

Sentiment Analysis

I used the percentage of positive messages per day as a measure of daily positivity. This allows me to normalize for a differing number of messages per day, attaining a number between 0 and 1. The date range considered is again September 2013 to October 2014. Sentiment before September 2013 had high volatility due to the small number of messages per day. While there is significant variation in sentiments over this period, and one can observe variations somewhat tracking market prices, the time series is still too noisy, and standard regressions yielded inconclusive results against market price and trading volume. The level of noise is to be expected, since the sentiment model has an accuracy of only 91% (and that is when tested against the testing corpus, which is also movie reviews instead of



forum chatter). However, large jumps in the sentiment can still be used. We can assume that small jumps in sentiment are due to noise in the models, but large intra-day jumps are reflective of changes in sentiment in the community. Hence, I can identify the days with the largest (top 5%) jumps in changes in sentiment. Then, for those days, I measure the contemporaneous returns and forward returns using the day’s last trading price, where
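The defining equation is lost in the extraction. One plausible reconstruction, writing $P_t$ for the day's last trading price and $M$ for the forward window (the exact horizon is not recoverable from the text; the author later varies $M$ from 10 to 30), is

$$R_{c,t} = \frac{P_t}{P_{t-1}} - 1, \qquad R_{f,t} = \frac{P_{t+M}}{P_t} - 1.$$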

Here, $R_{c,t}$ is the contemporaneous return at day $t$, $R_{f,t}$ is the forward return at day $t$, and $\bar{R}$ is the mean of daily returns over the sample period. Then, with the two sets of days (one each for positive and negative), I can construct the excess returns for positive and negative sentiments.
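The construction equation is likewise missing. Based on the description, the excess returns appear to be averages of returns net of the sample mean over the selected days; for the set $D^{+}$ of top positive-sentiment-jump days (and analogously for $D^{-}$), a plausible form is

$$ER_{c}^{+} = \frac{1}{|D^{+}|}\sum_{t \in D^{+}} \left(R_{c,t} - \bar{R}\right), \qquad ER_{f}^{+} = \frac{1}{|D^{+}|}\sum_{t \in D^{+}} \left(R_{f,t} - \bar{R}\right).$$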

The top 5% largest jumps equated to 14 jumps, hence the averaging was done over 14 samples, each for positive and negative. I found weak but positive relationships between sentiment and returns – negative sentiments resulted in negative contemporaneous and forward excess returns, while positive sentiments resulted in positive contemporaneous and forward excess returns. However, the standard deviation is high due to the inherent volatility of Bitcoin prices, and this result is weak at best. Nevertheless, this result stays true when the value of M, the number of days in a particular sentiment, is varied from 10 to 30. Hence, while the standard deviation of the result is high, the result is rather consistent. Although the relationship is weak, the incidence of a positive relationship is not due to the selection of M. Hence, I conclude that a weak and positive relationship exists between sentiment and both contemporaneous and forward returns.

Conclusion

In this thesis, I constructed a two-year dataset of Bitcoin community chatter from Reddit and Bitcointalk and compiled the dataset into a CSV format for easy storage and retrieval. I constructed an attention time series by reducing the various attention sources into a single index using principal component analysis. A sentiment time series was created by training two models: a Recurrent Neural Network Language Model and a Distributed Representation + Support Vector Machine model, using the Cornell IMDB Corpus. I predicted the sentiments of the two-year dataset and found a positive, significant relationship between past attention and future volatility of Bitcoins, as well as a positive, significant relationship


between past attention and speculative trading volume. There is a positive, weak relationship between sentiment changes and future market returns. However, the noise of the sentiment time series leaves much to be desired. An improvement could be accomplished by using a more relevant training corpus (movie reviews are not the best training data for financial chatter). The MPQA Opinion Corpus contains news articles from a wide variety of news sources manually annotated for opinions and sentiments. However, the corpus is rather unwieldy and extracting sentiments requires a rather extensive script, so it is left out of this thesis. Furthermore, fine-tuning the parameters of the models may lead to lower noise levels too. Finally, better cleaning of the messages dataset (for example, stemming words to their root form, or adding Part-of-Speech tagging to differentiate words such as good (noun) and good (adjective)) and using emoticons to aid in training are possible directions. Furthermore, the data was mined at the end of 2014, leaving out much of 2015. Bitcoin prices have stabilized a lot more in 2015 (after the great boom at the end of 2013). Perhaps attention and sentiment could be better predictors in a slightly less volatile environment.



Between a BlackRock and a Hard Place
The Asymmetric Effects of Corporate Tax Changes on Employment
Brendan Moore, Columbia University

Corporate tax reform is a particularly relevant issue following the recent predominance of conservative legislators across the country. The intricacies of these seemingly straightforward policies, however, should not be underestimated. This paper offers a data-driven perspective on the asymmetric nature of tax cuts on employment, wherein tax rate cuts have a substantially less significant effect on unemployment rates than do tax rate increases. The study’s focus on the income apportionment formula further suggests that policymakers must consider the impact that corporate tax changes have on an individual firm’s economic choices. By analyzing how tax policy influences investment in labor, Moore sheds light on how corporate decision-making determines crucial trends in the macroeconomy at large. - A.W.

Introduction

In response to declining numbers of manufacturing jobs, an OECD-high corporate income tax rate, and a slow post-recession recovery, United States policymakers are debating corporate tax reform. While the federal corporate income tax has not seen substantial change since 1994, state legislatures frequently adjust the way in which their jurisdictions tax business profits. Since 2008, fifteen states have reduced their statutory corporate income tax rates and twelve states have adjusted the means through which taxable income of multistate corporations is apportioned. Since such reforms are often enacted with the stated goal of increasing employment, the objective of this empirical examination is to inform the debate about the effects of corporate tax rates and income apportionment formulae on the resulting levels of employment. While the economic impact of corporate taxation has been thoroughly studied, analysis has often focused mostly on the federal level (Romer and Romer 2010).

Research concentrated at the state level has often been limited to explanations of how corporate income taxation affects non-employment economic indicators, such as growth and business location decisions (Buss 2001, Bartik 1985). Meanwhile, literature that seeks to estimate the effect of state corporate tax policy on employment often neglects the income apportionment formula, a crucial component of tax policy that likely influences a company’s labor investment decisions. Although some literature examines the income apportionment formula’s effect on employment, the dependent variable is either sector-specific or uses an inappropriate measurement of employment. Further, few studies have examined the existence of asymmetric effects of tax changes. This empirical examination will estimate the effects of the income apportionment formula and statutory tax rates on a state’s employment level and will seek to identify asymmetry in the effect of a tax policy change on employment level. Panel data for states from 1979 to 2015, with controls for various structural economic variables,


will be used to analyze the relationship between state corporate income tax policy and employment. Examining the consequences of changes in corporate tax policy is challenging for several reasons. First, changes in tax policy are unlikely to be random and are instead altered as a result of economic and political conditions. In recessions, for example, state legislatures may vote to decrease the corporate income tax rate with the expressed intent of easing the burden on businesses. However, a state legislature may also reduce corporate taxes for reasons independent of economic conditions, such as the ideological leanings of elected officials. Even if changes in tax rates were implemented randomly, assumptions about macroeconomic variables of interest would still be necessary in order to develop a counterfactual that allows for estimates of the effect of a corporate tax change on employment. Since the data used to measure employment level includes the total number of both full-time and part-time jobs, this analysis cannot isolate the effects of the corporate tax rate



on full-time employment, a measurement in which policymakers express greater interest.

“[O]ur results indicate that in practice, state employment levels are less sensitive to the payroll weights than they are to statutory tax rates.”

Literature Review

This paper will build upon insights from past work that focused on the relationship between state-level corporate tax policy and employment. Empirical work on the employment effects of U.S. state corporate taxation remains inconclusive. Bartik’s 1992 literature review of research from 1979 to 1991 concludes that corporate income tax rates had a negative impact on business activities, including employment, output, business capital stock, and number of business establishments. In a separate literature review, Wasylenko (1997) noted that two of three empirical studies, which focused strictly on the relationship between corporate tax rates and employment, found that increased tax rates produced a significant negative effect. Wasylenko also observed that, over time, tax differences between states have become a less significant determinant of employment. However, other literature suggests that while personal income tax rates reduce employment growth, the corporate tax rate variable does not affect job growth (Goss and Phillips 1994). Ljungqvist and Smolyansky (2016) used a difference-in-difference border-discontinuity approach to determine that, while state corporate income tax rate increases are unambiguously harmful to employment levels, a corporate tax cut has no statistically significant effect. While the above literature is generally sound in its analysis of statutory corporate income tax rates, studies will fail to capture the

comprehensive effects of corporate tax policy on employment if the income apportionment formula is omitted from the analysis. The income apportionment formula allows a multi-state corporation to divide its profits into an in-state and an out-of-state portion, based on the company’s presence in that jurisdiction. Each state selects its own apportionment formula weights, in the same manner that it chooses a statutory corporate income tax rate. The purpose of income apportionment is to avoid double taxation of a corporation’s profits. However, when state corporate income taxes were adopted in the first half of the 20th century, common standards for partitioning multi-state corporate profits among the various jurisdictions did not exist. In the 1950s, state tax officials in more than 20 states agreed to the common use of an equally-weighted three-factor formula that considered a company’s sales, property, and payroll. However, in Moorman Manufacturing Co. v. Bair, 437 U.S. 267 (1978), the Supreme Court held that a state’s use of an equally-weighted three-factor formula was not constitutionally required. The income apportionment formula states that if a firm’s profit is π, its income attributed to state j, πj, is
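Equation (1) did not survive the extraction. The standard three-factor apportionment formula consistent with the variable definitions that follow (and with Goolsbee and Maydew 2000), writing $\omega_{f}^{j}$ for the weight that state $j$ places on factor $f \in \{P, W, S\}$ (notation assumed), is

$$\pi_j = \pi\left(\omega_{P}^{j}\,\frac{P_j}{P} + \omega_{W}^{j}\,\frac{W_j}{W} + \omega_{S}^{j}\,\frac{S_j}{S}\right). \tag{1}$$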

where P is total property, W is total payroll, and S is total sales for the company as a whole; Pj, Wj, and Sj are the property, payroll, and sales in state j; and $\omega_{f}^{j}$ is the weight in the apportionment formula for factor f in state j. McLure (1980) has shown that the three-factor formula effectively reduces the corporate income tax to direct taxes on payroll, property, and sales. This direct tax is defined as such since a firm’s overall marginal tax rate in state j, τj, with an apportionment formula and statutory marginal tax rate tj, is
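Equation (2) is also missing. A plausible reconstruction consistent with the surrounding text (the statutory rate scaled by the firm's apportionment shares, so that the tax operates as direct taxes on in-state payroll, property, and sales) is

$$\tau_j = t_j\left(\omega_{P}^{j}\,\frac{P_j}{P} + \omega_{W}^{j}\,\frac{W_j}{W} + \omega_{S}^{j}\,\frac{S_j}{S}\right). \tag{2}$$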

Equation (2) demonstrates the importance of both the statutory marginal tax rate and the weighted factors of the apportionment formula. Although there is ample theoretical work suggesting that both the statutory tax


rate and apportionment formula should affect firms’ employment decisions, existing empirical work has been less clear. Goolsbee and Maydew (2000) estimate that the income apportionment formula has a significant effect on state-level manufacturing employment. Clausing (2016) finds that, between 1986 and 2012, there is little evidence that state employment is sensitive to the corporate payroll tax burden, which is the payroll weight interacted with the statutory tax rate. In the studies of both Goolsbee-Maydew and Clausing, the dependent variable does not accurately represent the employed population that is most likely to be affected by a corporate tax rate. Goolsbee and Maydew only study effects on manufacturing employment, while Clausing uses a measure of employment that includes sole proprietors and partners, who work for businesses that are subject to personal rather than corporate income tax rates.

Data

This study compiles a panel data set on state-level corporate income tax rates, apportionment formula weights, employment statistics, and various economic control variables from 1979 to 2015. 1979 was selected as the first year in the sample because it was the first year after the Moorman ruling, which standardized apportionment formula rules. There have been 158 statutory corporate tax rate changes and 82 apportionment formula changes between 1979 and 2015, providing the sample with adequate variation for more precise estimates of their effects. State employment statistics were extracted from the Personal Income and Employment Summary produced by the Bureau of Economic Analysis’s (BEA) Regional Economic Accounts. The BEA measures the number of both wage and salary jobs as well as proprietors’ jobs in each state. In its estimates of employment, BEA gives equal weight to full-time and part-time jobs and counts employment by place of work rather than the worker’s place of residence. All estimates in this study are obtained using the number of wage and salary employees as the dependent variable. In 2015, wage and salary employees accounted for 77.5% of the United States labor force. State tax parameters were collected from the Tax Foundation, various state law and reference libraries, and the Robert M. La Follette School of


Public Affairs, University of Wisconsin-Madison. In 2015, among the 44 states with a corporate income tax, 29 states imposed a single flat-rate tax and 15 states instituted a progressive tax on profits. The remaining 6 states, recorded as having a 0% corporate income tax rate, either levy a tax on revenue or do not collect a business tax at all. The relative proportion of states which impose either a flat tax, progressive tax, or no tax remains nearly constant throughout the sample. In the case of states with progressive tax codes, this analysis employs the top marginal tax rate as the tax rate parameter, given that for most corporations, profits well exceed the highest threshold of taxation. Therefore, the highest marginal rate is the best estimate for the income tax rate that the company will encounter. Apportionment formula data included the relative weights on the payroll, property, and sales factors for each state. In 1979, 35 of the 44 states with a corporate income tax used an equally-weighted three-factor apportionment formula. In 2015, only 9 states used such a formula. Economic control variables were also essential to this study. Income of wage and salary workers was also extracted from the BEA Regional Economic Accounts and was divided by the number of wage and salary workers to obtain a mean income for each state. This figure was then adjusted for inflation with the Consumer Price Index, obtained from the Department of Labor. The average state-level personal income tax rate, another economic control, was extracted from a National Bureau of Economic Research (NBER) database constructed with use of microdata from the Statistics of Income Division of the Internal Revenue Service. See Table 1 for descriptive statistics. While our employment variable better measures the working population most likely to be affected by corporate tax policy, there remain complications with our dependent variable. Included within the wage and salary employment variable are government workers and employees of other entities that do not pay a corporate income tax (sole proprietorships, nonprofits, and S-corporations). The payroll factor weight, while expressed as an average above, takes on a value of either 0.00, 0.25, or 0.33 in 95.68% of observations. The average top marginal statutory corporate income tax rate in the 50 states (Figure 2) increased from 1979 until the early 1990s and has since been trending


downwards ever since. The average weight on the payroll factor (Figure 3) has consistently trended downwards since the Moorman ruling clarified that the equally-weighted three-factor formula is not required. Given consistently declining payroll factor weights and recently declining statutory tax rates, the interaction of these two figures has, unsurprisingly, also decreased. Indeed, the average tax burden with respect to payroll (Figure 4) in 2015 is less than half of its level in 1979.

Methodology

Specification

Using this data, our basic empirical model will regress the log of wage and salary employment in state i in year t as follows:
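The specification itself is missing from the extracted text. A reconstruction consistent with the description that follows (state fixed effects $\alpha_i$, time dummies $\lambda_t$, a control vector $Z_{it}$ that also carries the state-specific time trends, and the tax variable of interest $Tax_{it}$; notation assumed) is

$$\ln(Emp_{it}) = \beta\,Tax_{it} + \gamma' Z_{it} + \alpha_i + \lambda_t + \varepsilon_{it}. \tag{3}$$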

This fixed effects model includes $Z_{it}$, a vector of state-specific time trends and controls such as population level, personal income tax, average income per salary and wage earner, average employee income growth, and percent of the workforce in manufacturing. The fixed effects model also includes time dummies ($\lambda_t$), which absorb macroeconomic conditions that affect all states, such as recessions, interest rates, and federal government spending. $Tax_{it}$ is our variable of interest which, depending on the specification, may be the statutory corporate tax rate, the corporate tax burden, the payroll weight, or the magnitude of a tax increase or decrease. Due to the nature of the data set, panel data econometric models perform better

than pooled ordinary least squares (OLS). A Hausman test was performed on the model specification and suggested that fixed effects panel models are preferred to random effects. F-tests were performed on various specifications for the presence of both state fixed effects and time fixed effects. In general, while the coefficients for each individual state and year may be significant in the specification, their effect may not be jointly significant. All regressions in this paper (both fixed effects and probit) use heteroskedasticity-robust standard errors, given the variation in size between states. In the results section, specifications with and without time fixed effects are reported, and all specifications have state fixed effects. Although Shuai (2013) was not able to reject the null hypothesis of no state fixed effects, our F-tests indicated a p-value of 0.00 for rejecting the absence of both state and time fixed effects. The disagreement between our results and Shuai’s results may follow from substantial methodological differences. Indeed, use of a state fixed effects model is essential to prevent spurious statistical correlations that could easily result from omitted variables. In a model similar to (3) but that omits state fixed effects,
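Specification (4), also lost in the extraction, presumably drops the state fixed effect:

$$\ln(Emp_{it}) = \beta\,Tax_{it} + \gamma' Z_{it} + \lambda_t + u_{it}. \tag{4}$$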

the composition of the error term becomes $\epsilon_{it} = u_i + v_{it}$, where $u_i$ is an unobserved fixed effect (i.e., a characteristic specific to each state that is correlated with the tax variables and affects the employment level). For this reason,


the estimates of specification (4) will be biased. For example, if a given state is business-friendly (in some unobservable manner) and is therefore also more likely to have lower tax rates, then without state fixed effects, estimations would risk attributing increases in employment solely to the low tax rate, when other aspects of the state's policies may be far more important. By including state fixed effects, such unobserved, time-invariant state characteristics are absorbed and this source of bias is removed.
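As a rough illustration of the panel estimation and specification tests described above, the sketch below shows how such regressions might be run in Stata; the dataset and variable names are hypothetical, not those used in the paper.

    * Hypothetical sketch of the panel setup, fixed effects estimation,
    * and specification tests described in the text; variable names are
    * illustrative only.
    xtset state year                                   // state-year panel

    quietly xtreg ln_emp corptax lnpop inctax lninc manushare i.year, fe
    estimates store fe
    quietly xtreg ln_emp corptax lnpop inctax lninc manushare i.year, re
    estimates store re
    hausman fe re                                      // FE preferred if H0 rejected

    xtreg ln_emp corptax lnpop inctax lninc manushare i.year, fe vce(robust)
    testparm i.year                                    // joint F-test on the time dummies

The first block compares fixed and random effects with a Hausman test; the last block re-estimates the preferred fixed effects model with robust standard errors and tests the joint significance of the time fixed effects.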

A matter of great methodological importance is the decision to include state-specific time trends in addition to state fixed effects and year dummies. The general fixed effects model with time dummies can control both for unobserved time-specific changes that affect all states and for unobserved state-specific characteristics that persist across time. However, such a model can only capture phenomena that vary across time within a given state through explicitly included state control variables (i.e., population, manufacturing share of the labor force, income tax rate, etc.). In the absence of such variables, changes in the dependent variable within a state across time not captured by time or state fixed effects will remain unexplained. Including state-specific time trends ameliorates this problem by accounting for trends in numerous explanatory and control variables in individual states. Regression results are reported both with and without state-specific time trends.

Independent Variables

Certain specifications use variables that account for the change in the statutory rate for a tax hike and the change in the statutory rate for a tax cut.2 The magnitude-of-tax-cut and magnitude-of-tax-hike variables resemble the "first difference" of the statutory tax rate, except that the values of both variables are always greater than or equal to zero, since a "change" is defined as a strictly positive value. Despite their similarity to a first difference, specifications with these variables are not first difference models. These particular fixed effects regressions, although information


about the level of the statutory tax rate's effect on employment is lost, provide insight into the potential asymmetric effects of changes in the tax rate on the employment level. Further specifications also use the magnitude of changes in both the payroll weight of the income apportionment formula and the corporate payroll tax burden. Specifications with a change in payroll weight only include the magnitude of a cut, because no state has ever increased the weight of the payroll factor in the income apportionment formula. Other specifications use a constructed corporate payroll tax burden that is simply the product of the statutory tax rate and the payroll factor weight. While this variable ought to capture a corporate payroll tax burden, the true burden for a company will also depend on state-level tax incentives and exemptions for employment that are not recorded in the data. In addition, corporate tax avoidance strategies, such as dividing into dozens of subsidiaries while functioning identically to one incorporated entity, are not captured in the model. Nevertheless, accounting for the effect of both the statutory rate and the apportionment formula on the employment level affords this study explanatory power regarding two key policies over which legislators possess clear power. Controlling for employee income was another methodological concern. While a state's level of employment likely depends on both the level of mean employee3 income and the growth of mean employee income, previous literature has been inconsistent regarding which variable should be included in regressions. In this paper, regression results are reported with specifications which control for the natural logarithm of mean employee income.
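Written out, with notation assumed here for exposition, the hike and cut magnitudes and the constructed burden take the form:

$$\mathrm{Hike}_{it} = \max\{\tau_{it} - \tau_{i,t-1},\,0\}, \quad \mathrm{Cut}_{it} = \max\{\tau_{i,t-1} - \tau_{it},\,0\}, \quad \mathrm{Burden}_{it} = \tau_{it} \times w^{payroll}_{it},$$

where τ_it is the statutory corporate income tax rate and w^payroll_it is the payroll factor weight; both magnitude variables are non-negative, and at most one of them is nonzero in any state-year.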

Endogeneity Concerns

Studies of the effects of tax policies can be afflicted by concerns about endogeneity, given that tax changes are not assigned randomly. As Clausing (2016) notes, endogeneity concerns could bias the coefficients in opposite directions. For example, if state legislatures select payroll weights and statutory tax rates aimed at stimulating a weak economy, regression results may find a positive relationship between employment and higher payroll weights or tax rates due to this policy impetus. Likewise, as previously mentioned, the magnitude of coefficients may be overstated in the case where states that are inherently friendly to business also adopt lower payroll weights and tax rates. However, this paper's analysis is responsive to these concerns in a number of ways. With respect to the first endogeneity concern, businesses are nearly always aware of a tax policy change many months before it takes effect; even in bad economic conditions, employers that are aware of impending tax relief may choose to expand hiring. Second, the use of state fixed effects in all specifications minimizes the risk of misattributing the effects of a state's underlying economic fundamentals to corporate tax parameters. Nevertheless, this paper follows the approach of both Goolsbee (2000) and Clausing by testing for possible endogeneity with a probit regression on the likelihood of a tax policy change depending on various independent


variables. Regression results for probits for the likelihood of both a cut in the statutory tax rate and a cut in the payroll weight of the apportionment formula are reported in the results section.

Results

General Statutory Corporate Income Tax Rate

Table 5 summarizes the effect of statutory corporate income tax rates on the employment level between 1979 and 2015. The general specification follows the form of equation (3), where Tax_it contains only the state's statutory corporate income tax rate.4 Throughout all analysis, Z_it is a vector of state control variables including population, average wage and salary employee income, average personal income tax rate, and fraction of the labor force in manufacturing.5 All analysis in Table 5 includes state fixed effects, and certain specifications also include year dummies and state-specific time trends. Across all specifications in Table 5, the effect of the statutory corporate income tax rate was determined to be insignificant. Specification (1) had an unexpectedly positive coefficient for the corporate tax rate, while the sign on the coefficients in (2) and (3) is negative. Adding time fixed effects changed the coefficient on the corporate tax rate variable from positive to negative. Adding state-specific time trends increased the magnitude of the negative coefficient on the tax rate variable,


although the p-value in specification (3) was still 0.630, meaning the effect of the corporate tax rate on employment still cannot be identified as significantly different from zero. The expected sign on the tax rate coefficient in specifications (2) and (3) supports the case for time fixed effects, which are likely essential to include because macro shocks to the national economy affect employment levels in all states. Further, adding state-specific time trends – in other words, de-trending those unobserved factors which affect the employment level within a state – is likely methodologically sound, since over a 37-year period the influence of unobserved factors within a state determining employment is likely

to increase or decrease. In the absence of state-specific time trends, the effect of tax rates may go undetected in the case that their potential true effect on employment is in the same direction as the unobserved state-specific factors. The coefficients for the statutory tax rate from Table 5 support the claim that the level of the statutory rate itself has no statistically significant effect on a state's employment. This conclusion is in accordance with the recent findings of Ljungqvist (2016) and Clausing (2016).

"[S]tudies will fail to capture the comprehensive effects of corporate tax policy on employment if the income apportionment formula is omitted from the analysis."

Asymmetric Effect of Changes in Statutory Tax Rate

After examining the level of the statutory rate, we then examine the effect


of changes in the statutory rate on a state's level of employment. In the regressions for Table 6, Tax_it is a two-variable vector containing both the magnitude of a tax cut and the magnitude of a tax hike. The dependent variable in specifications (1) – (3) is the natural log of wage and salary employment. In specifications (4) – (6), the dependent variable is the log of all employment. The results from the first three regressions of Table 6 indicate a likely asymmetric effect of changing the statutory corporate income tax rate on the employment level of wage and salary employees. Specifications (1) – (3) reveal a significant negative effect of a tax hike on the level of employment. Importantly, the coefficient on the magnitude of the tax hike variable was significant in the specification with state-specific time trends, the model which most accurately measures the true effect of the policy change. The effect of the magnitude of a tax cut, while positive in all specifications as expected, was not determined to be statistically significantly different from zero. While the p-value in specification (3) indicates near significance of the effect of a tax cut on the employment of wage and salary earners, one can conclude that changes in the corporate income tax rate on employment are asymmetric, whereby a tax increase's costs outweigh the benefits of a tax cut. In specifications (2) and (3) in Table 6, the coefficients for the magnitude of a tax change show that a 1% increase in the corporate income tax causes a decrease in the state's employment level of between 0.40% and 0.68%. Meanwhile, a 1% decrease in the corporate income tax rate has no statistically significant effect on a state's level of employment. This result supports the findings of Ljungqvist, who,




although using a different methodology from this study, also concludes that a corporate income tax rate increase decreases employment while a tax cut has no significant effect. It contributes to the small but growing literature on the asymmetric effects and incidence of tax policies (Benzarti et al.). To demonstrate how different types of employment measurement affect the dependent variable, Table 6 repeats the same three specifications using total employment, including sole proprietors and partners, as the dependent variable. Notably, specifications (5) and (6) do not find that an increase in the corporate income tax rate has a significant negative effect on the overall employment level. However, when the same variables are regressed on wage and salary employment, the results indicate the employment level is responsive to the corporate income tax rate. The distinction between the magnitudes of the coefficients is to be expected. In specifications (4) – (6), tax parameters and control variables are regressed on a measure of employment that includes certain workers whose employment should not depend on the corporate tax rate. These workers, whom our study seeks to exclude from the analysis in the first three specifications, constituted a non-negligible 22.5% of the American workforce in 2015. The sizable fraction of sole proprietors and partners explains the degree to which the coefficients corresponding to the distinct dependent variables differ. Moreover, the results of


Table 6 should give pause to the findings of studies such as Clausing (2016), which use all employment as a dependent variable and determine that the employment level is unresponsive to corporate tax parameters.

"[L]iterature that seeks to estimate the effect of state corporate tax policy on employment often neglects the income apportionment formula, a crucial component of tax policy that likely influences a company's labor investment decisions."

Corporate Payroll Tax Burden

In addition to examining the asymmetric effect of a statutory tax rate change, we analyze the effect of reducing the corporate payroll tax burden on the


wage and salary employment level. Again, the corporate payroll tax burden is the product of the statutory corporate tax rate and payroll weight. In each of the six regressions in Table 7, none of the coefficients for various corporate tax policy parameters demonstrate significance. The unexpectedly positive (although insignificant) coefficient on the corporate payroll tax burden variable in specification (1) reaffirms the importance




of year dummies to a sound model. Specifications (2), (4), and (6) break up the corporate payroll tax burden into its constituent parts, yet fail to detect a statistically significant effect of either the income tax rate or the payroll weight on employment. Specification (5) in Table 7 and specification (3) in Table 5 are nearly the same regression, except that the independent variable in the former is the corporate payroll tax burden and in the latter is the statutory tax rate. The two regressions have the same adjusted R-squared value and the coefficients for their control variables are nearly identical. Meanwhile, the coefficient in specification (5) in Table 7 for the corporate payroll tax burden is of greater magnitude and has a lower p-value than its counterpart for the tax rate. This difference in the magnitudes of the coefficients is in accordance with McLure's (1980) claim that the employment level should be more sensitive to the level of a state's payroll burden than to its statutory tax rate. Nevertheless, neither tax parameter has a significant negative relationship with employment, so these results are not conclusive. Further, the simultaneity of changes in the payroll factor weight made measuring the corporate payroll tax burden's effect on employment difficult. In the mid-2000s, many states dramatically cut their payroll factor weight, often to 0.00. These cuts entailed a sharp reduction in the mean payroll tax burden (Figure 4). In 2007, for example, 9 states – a fifth of all states with a corporate income tax at the time – simultaneously changed their payroll factor weights. In addition, 7 states simultaneously changed their payroll weights in 2006. Such synchronized changes in tax policy impede our ability to obtain unbiased estimates for their effects.

Endogeneity

As stated in Section III, although the model in this paper takes steps to avoid issues of endogeneity, it is also useful to examine possible determinants of policy changes. This section includes probit analyses of the determinants of state decisions to decrease the statutory corporate income tax rate and to lower the payroll weight in the apportionment formula. For both variables, specifications with varying numbers of independent variables are reported. The baseline specification for the probit on the corporate income tax rate models the policy change as depending on the state's employment growth, the mean corporate income tax rate of all states, a unified Republican government (i.e., Republican control of both houses of the legislature and the executive branch), and a unified Democratic government. Specification (2) adds further structural economic and tax variables, such as average wage and salary employee income, fraction of employees in manufacturing, top marginal personal income tax rate, and the mean payroll weight of all states. Specification (3) includes lags of prior values of several variables. Table 9 executes the same specifications as Table 8, but switches the corporate income tax rate and the payroll weight where applicable. The most notable feature of the probit regressions is the small number of statistically significant apparent determinants of policy changes. In the analysis of the corporate income tax rate, the level of other states' corporate income tax rates is a significant determinant of a tax cut only in the baseline specification. Average employee income and top personal income tax rates are also associated with a cut in the corporate income tax rate. Importantly, the level of other states' corporate income tax rates is not a significant determinant of a tax rate cut in specifications (2) and (3). These results provide substantial evidence that decreases in the corporate income tax rate are not explained well by observable variables. This in turn reduces possible policy endogeneity concerns with respect to corporate income tax rates, and it supports the earlier conclusion that the effects of the corporate income tax rate on employment are asymmetric, whereby a tax increase is of greater cost than the benefit of a tax cut. The same cannot be said, however,


about the payroll weight in the income apportionment formula. For the baseline specification of the payroll weight probit regression, the mean payroll weight of all states is a strongly significant determinant of a state's decision to cut its payroll weight. The coefficient for mean payroll weight suggests that the lower the average of all other states' payroll weights, the more likely a state is to cut its own payroll weight. Across all three specifications, we observe that the mean payroll weight is a significant determinant of a state's decision to decrease its own payroll weight. Moreover, a state with a higher percentage of its workforce in manufacturing is much more inclined to cut its payroll weight, likely to encourage manufacturers to keep jobs in-state. In the corporate income tax rate regression, however, the percentage of manufacturing jobs in a state's economy is not a determinant of the state's tax rate. Partisan control of a state's legislature does not appear to be a determinant of state tax policy parameters. Nevertheless, variables such as average employee income, mean corporate tax rate, and lagged employment growth are all determinants of a state's decision to cut its payroll weight, contributing to possible endogeneity concerns. This evidence – particularly a state's

responsiveness to other states' payroll weights – corroborates the work of Edmiston (2002) and others who have constructed general equilibrium models for the payroll weight of state income apportionment formulae, resembling a prisoner's dilemma game. Such game-theoretic research on state corporate taxation often focuses on the apportionment formula rather than the tax rate. For this reason, it is


perhaps unsurprising that endogeneity may be more of a concern with respect to payroll weights than with respect to tax rates. Importantly, the results of the corporate income tax rate probit regression, which ease concerns about endogeneity of cutting tax rates, substantiate this paper's earlier claim about the asymmetry of the effects of cutting the corporate income tax rate. Namely, there is still strong evidence to suggest that a corporate income tax hike has harmful effects on employment while an income tax cut has no effect on employment.
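A minimal sketch of the baseline policy-change probits described above is shown below; the variable names are hypothetical and the covariate set follows the baseline specification only loosely.

    * Hypothetical sketch of the baseline policy-change probits; the
    * dependent dummies equal 1 in state-years with a rate or
    * payroll-weight cut.
    probit cut_rate   empgrowth mean_corptax rep_unified dem_unified, vce(robust)
    probit cut_weight empgrowth mean_payrollwt rep_unified dem_unified, vce(robust)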

Conclusion

Economic theory posits that a corporate income tax places a burden on a firm's capital and labor. Therefore, a reduction in the corporate income tax suggests increased investment in one of these factors of production. A common question among policymakers is whether a decrease in the corporate income tax rate will result in increased employment. The results of this study challenge claims that the statutory corporate income tax rate alone decreases employment. However, there is strong evidence of the existence of an asymmetric effect of a change in the corporate income tax rate. While a 1% increase in the tax rate has a significant negative effect on employment of approximately 0.4%, the effect of a reduction of the tax rate by the same amount is not statistically significant. In the case of an apportionment formula, economic theory suggests that the tax burden falls specifically on the formula's components of payroll, property, and sales. As a result, changing the weight of the payroll factor in the apportionment formula should affect employment. However, our results indicate that in practice, state employment levels are less sensitive to the payroll weights than they are to statutory tax rates. The methodological concerns and limitations of this paper underscore the fact that these results are not entirely conclusive. As mentioned, this analysis explains the effect of corporate tax policy on all jobs, rather than only full-time employment. Second, although previous research has indicated endogeneity is not a grave concern, this analysis indicates that it may be an issue for at least the study of the income apportionment

formula. For this reason, different program evaluation approaches will likely produce more efficient estimators for the effect of the payroll factor weight on employment. Nevertheless, this study contributes to the current literature by finding further evidence for asymmetric effects of state corporate income tax rate changes on employment.

Appendix




Get Off my Intellectual Property

Intellectual Property Rights and Foreign Direct Investment: An SGMM Approach to the Case of Selected Developing Countries between 2000-2014

Vincent Jerald R. Ramos
University of the Philippines

It is not uncommon in today's investment landscape for firms that venture into new markets to find their intellectual property rights infringed; in a few extreme cases, some firms even find themselves sued by the infringers. Ramos adds to existing literature with his novel analysis of per capita IPR to approximate a country's quantity of IPR. Ramos's discussion of the nonlinear relationship between FDI and per capita IPR suggests that increases in IPR applications do not necessarily translate to higher FDI inflows until a minimum threshold of IPR applications is reached. The policy implication of Ramos's study is particularly relevant when foreign investors are often deterred from expanding their operations in developing countries for fear of intellectual property right breaches. To stimulate FDI inflows, policymakers should focus on protecting existing IPR applications, rather than spending disproportionate effort on encouraging more applications. ––M.Y.

1. Introduction

Foreign Direct Investment (FDI), measured locally, is the cumulative value of all investments in the home country made directly by residents of other countries, primarily to companies, within a specified time period. It is an aggregation of equity capital, reinvestment of earnings, and other long-term and short-term capital as shown in the balance of payments (WDI, 2016). These foreign investments are a means to help achieve economic outcomes, and they do so by helping generate local employment, inducing transfer of technology, and improving competition in industries. Among the many factors that affect FDI, Intellectual Property Rights (IPR) seem to be among the most interesting but relatively less explored. Indeed, the existing IPR-FDI literature can be divided into three strands: positive relationship, ambiguous relationship, and negative relationship.

This paper builds on the previous study by Adams (2010) which asks “What is the impact of Intellectual Property Rights (IPR) Protection on Foreign Direct Investment?”. Adams uses panel data involving 75 developing countries over 19 years (1985-2003) and concludes that those with stronger IPR protections benefit from more foreign direct investment inflows. Adams finds that the variables IPR and FDI are significantly and positively correlated. As an extension of that study, the current paper includes four more developing countries from the East Asia Pacific Region, and focuses instead on more recent years (2000-2014), capturing new developments in intellectual property protection. Furthermore, this study contributes to the literature by examining the IPR-FDI relationship using two approaches. First, by using a qualitative indicator of IPR protection, which is the traditional approach in studies dealing with the IPR-


FDI relationship. And second, by using a quantitative measure of the stock of IPR in the form of per capita IPR. The latter approach has not been done in any previous study dealing with IPR-FDI relationship. This research has the following objectives. First, to verify the findings of the existing literature using a dataset that contains more recent years and dropping the pre-TRIPS (Agreement on Trade-Related Aspects of Intellectual Property Rights) era years, since the effect of the TRIPS agreement on the IPR-FDI link has already been established in various studies. Second, to test both the qualitative (IPR protection index) and quantitative (IPR per capita) effects of IPR on FDI. Hopefully, this research will be a useful contribution to further research and policymaking related to FDI and IPR regimes. The sample, a balanced panel dataset, consists of 79 low and middle income countries with average per capita real


Gross National Income (GNI) of not more than $12,475 over the period 2000-2014. The estimation technique of this paper is based on Blundell and Bond's (1998) and Windmeijer's (2005) two-step System Generalized Method of Moments (SGMM), to both obtain more asymptotically efficient estimates and address endogeneity issues. To preview the results, this paper finds that IPR protection has a positive and significant effect on Foreign Direct Investment inflows. This supports the existing literature on the positive IPR-FDI relationship (Mansfield, 1995; Maskus, 1998; Smarzynska, 2004; Adams, 2010; and Khan and Samad, 2010). Meanwhile, per capita IPR has a nonlinear relationship with FDI inflows. Nonlinearity is indeed observed in both models. For the quantitative model, this implies that increases in the stock of patents, trademarks, and copyrights do not necessarily translate to higher FDI inflows unless this stock reaches a certain threshold. This minimum stock is computed in Section 5 of this paper. The empirical evidence of the presence of nonlinearities in the IPR-FDI relationship is consistent with the study

of Asid (2004) and contradicts the findings of Adams (2010). This paper suggests that stronger IPR regimes are an additional incentive for foreign investors but are insignificant if not accompanied by bureaucratic reform that will minimize red tape and improve the overall investment climate of a country. 2. Discussion of Key Concepts For an enriched discussion on the IPR-FDI relationship, this section is devoted to explaining what some commonly mentioned concepts mean individually and what role they play in the economy. These key concepts include Foreign Direct Investment, Intellectual Property Rights, and Transfer of Technology. 2.1 Foreign Direct Investment For developing countries, attracting FDI is more strategic than portfolio investments (where investors are not involved in the management of the firm) since FDI can take the form of greenfield investment—wherein the investor starts a new venture by constructing operational facilities; joint ventures—wherein the


investor enters into a partnership with a company in the receiving country to establish an enterprise; or merger and acquisition—wherein the investor acquires an existing enterprise in the receiving country (WDI, 2016). In the case of Least Developed Countries (LDCs), in 2011 the bulk of FDI inflows came in the form of greenfield investments, 35% of which were investments in mining, petroleum, or quarrying. Moreover, 50% of the greenfield investments in LDCs came from developing and transition economies (WIR, 2012). From this data, it can be noted that for LDCs, greenfield investments are desirable possibly because they generate employment, utilize idle resources, and induce transfer of technology. Investments made by Third World Multinational Corporations (TWMNCs) in other developing countries play a more active role in bilateral flows. A conjecture on why these corporations have been successful is that Third World firms are in an advantageous position in terms of their familiarity with developing countries. Most of the developing countries have a common socio-economic





Figure 1: FDI Inflows as Percent of GDP (per income group)

background, ethnic and cultural environment, infrastructural conditions, and bureaucratic inefficiency (Nayak and Choudhury, 2014). During the period of interest of this study, countries in South Asia received the lowest FDI/GDP ratio relative to other regions. This reveals the institutional quality of countries in South Asia, where insurgencies and civil rebellions adversely affect the investment climate and where rampant corruption stifles economic growth. Meanwhile, regions such as Latin America and the Caribbean as well as East Asia and the Pacific had slightly higher FDI/GDP. This can be attributed to stronger intra-regional cooperation in both regions. In the South American Summit in 2004, representatives from twelve South American countries signed the Cuzco Declaration, which aims to establish the "South American Community of Nations" (which was renamed in 2008). Meanwhile, work toward the common market vision of the Association of Southeast Asian Nations (ASEAN) began as early as 1992 when the ASEAN Free Trade Area was established. However, it was not until 2007 that countries began gradually lowering import taxes for intra-ASEAN trade. Figure 1 reports the FDI/GDP ratio of income groups in 2000-2002 (Period 1); 2003-2005 (Period 2); 2006-2008 (Period 3); 2009-2011 (Period 4); and 2012-2014 (Period 5). The takeaway observation from this figure is that the countries most affected in terms of FDI by tighter capital controls and the Global Financial Crisis of 2008-2009 were high income countries. Meanwhile, low income countries enjoy

a steadily increasing FDI/GDP ratio from the first period onward. The effect of FDI on economic development has been the subject of many empirical studies conducted in the past. A particular study by Alege and Ogundipe (2014) used the Generalized Method of Moments estimation technique to solve the problem of endogeneity in the FDI-growth link. The study found that the contributions of FDI appear insignificant in the dynamism of GDP per capita of ECOWAS (Economic Community of West African States) countries, despite the significant contributions of the control variables. However, since FDI is a recognizable tool to achieve certain economic outcomes, it is still important to look at the factors that encourage foreign direct investment inflows. In the Philippines, foreign ownership restrictions are a subject of debate among policymakers as the country, which has a 60-40 constitutional limit on foreign ownership, suffers from receiving the lowest FDI inflows compared with its ASEAN neighbors (Oplas, 2015). The core of the debate lies in the assumption that relaxing these restrictions will increase FDI inflows, as there are countries, such as China, which experienced such results. This issue is still subject to long and technical policy debates. In the meantime, it is worth taking a look at other factors that could possibly affect FDI inflows.

2.2 Intellectual Property Rights

At this point, it is important to understand what intellectual property is and how IPR protection works. Sherwood (1990) says that intellectual property is a compounding of two things—private creativity and public protection. First, intellectual property encompasses ideas, inventions, and creative expressions; in other words, it is the result of "private activity". Second, it is the willingness of the general public to grant the status of "property" to those inventions and expressions. Patents, copyrights, and trademarks are common types of IPR. A patent is a temporary right to exclude others from using or producing a novel and useful invention. A copyright is the temporary right of an author or artist to keep others from commercializing copies of his/her creative expression. A trademark is commonly a word or mark which serves as an exclusive identifier of a product or service.

Figure 2: Total Registered Intellectual Property Rights (per income group)



IPR regimes provide protection for these inventions and technologies by granting the right of exclusivity. However, some public interests limit or condition this right. For instance, the government's right of eminent domain circumscribes the right of exclusivity (Sherwood, 1990). The figure above shows the per capita IPR of countries grouped based on income. Upper middle income countries had per capita IPR figures that grew significantly every period. In fact, the total IPR registries of upper middle income countries in 2012-2014 were higher than those of high income countries in the same period. This is telling of the innovativeness of upper-middle income countries and of how aggressively they are improving their IPR regimes. IPR regimes are the classical policy instruments to influence the generation, transfer, and diffusion of technology. International rule-making has, in turn, preponderantly focused on the protection of IPRs (Transfer of Technology, 2001). Unfortunately, many ASEAN countries have not yet realized the importance of strong IPR regimes in achieving a robust domestic economy. A case study by Oplas (2015) suggests that it will take time for many ASEAN countries and governments to fully recognize the value of clear IPR protection and its impact towards evolving into competitive econo-

mies. The strength of IPR regimes can best be measured by indices that rate how well institutions are able to protect and enforce the IPR laws of a country. Such indices include the Ginarte-Park (GP) Index, which is commonly used by studies dealing with IPR, and the Economic Freedom of the World (EFW) IPR Protection Index, which is what this study uses.


Figure 3: Intellectual Property Rights Protection Index (per income)


The figure above reports the IPR Protection Index of countries grouped based on their income. The data in Figure 3 shows the gap between low and high income countries in terms of IPR protection and institutional quality, which is partly captured by their IPR index.


2.3 Transfer of Technology

One of the desirable consequences of attracting FDI is the transfer of technology that accompanies it. However, to understand how this affects the economy, the concept of transfer of technology should first be understood. Transfer of Technology (2001) describes technology as "systemic knowledge for the manufacture of a product, for the application of a process or for the rendering of service". This definition obviously excludes finished goods or delivered services, which are merely sold or bought, from the concept of "technology". The most important aspect is the systemic knowledge which goes into the production process. This systemic knowledge, should it be more efficient than the local knowledge in manufacturing the same product, could contribute to more efficient production processes. Technological assets are necessary for large corporations to achieve competitive advantage, market power, and consequently higher profits. However, there is a caveat in focusing too much on the transfer of technology from foreign firms. The diffusion of technology may result not in the attainment of the high standards of living enjoyed by industrialized nations, but merely in the removal of comparative advantages. Thus, the shift of technological skills to low-wage developing countries may result largely in lower prices for developed country consumers rather than higher incomes for developing country producers (Blakeney, 1989).

The end goal of any country should be economic development—an improvement in the overall welfare and living conditions of its people. FDI inflows can directly contribute to economic development in the



form of more jobs, competition with local industries to bring down prices, etc. Alternately, FDI inflows bring transfer of technology, which is desirable to attain more efficient production chains. However, FDI inflows in the Philippines lag behind most of its neighbors in the Southeast Asian region. Several factors affect foreign investors in making their decision on where to invest, including ease of doing business, corruption perception, infrastructure development, geographical factors, political environment, wage laws, etc. This study intends to take a closer look at two other less popular factors that can affect FDI


inflows—IPR Quantity and IPR protection quality. IPR quantity is indicative of the volume of technology transferred and of how much "new" knowledge is registered in a country. It is also indicative of the innovativeness of industries. Meanwhile, IPR protection quality is indicative of institutional quality, assessing the presence or absence and the enforcement of IPR laws.

3. Review of Related Literature

Conceptually, strengthening IPR regimes alone is an insufficient incentive for firms to invest in the receiving country. The receiving country must ensure that there is a properly implemented legal framework to protect the intellectual property by which the investor's advantage is obtained. Therefore, mutual recognition and protection of IPR must be established. The empirical literature is divided on whether or not stronger IPR regimes are likely to positively affect FDI inflows. There are three strands by which the existing literature can be classified.

3.1 IPR-FDI relationship is positive

IPR is just one of many interlinked components that attract FDI, including the tax system, competition regimes, corruption perception, and infrastructure development, among others (Maskus, 1998). The finding of Maskus (1998) is consistent with the conclusion of Adams (2010). That study uses panel data involving 75 developing countries over 19 years (1985-2003) and boldly concludes that those with stronger IPR protection benefit from more foreign direct investment inflows. The variables IPR and FDI are significantly and positively correlated. This particular study takes a closer look at the pre- and post-TRIPS agreement era. The Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement came into effect in January of 1995, heeding calls for stronger reforms in IPR regimes. Under the terms of TRIPS, current and future members of the WTO must adopt and enforce strong, non-discriminatory minimum standards of intellectual property protection in each of the areas commonly associated with IPR, including patents, copyrights, trademarks, and trade secrets (Adams, 2010). Smarzynska (2004) has a different observation of the effect of IPR on investments in high-technology sectors relying heavily on IPR (such as manufacturing enterprises). In five of the six regressions, IPR protection affects the probability of investment in the said sectors.


They bear positive and significant (at the five percent level) coefficients. This study utilizes a probit model using firm-level data based on the EBRD Foreign Investment Survey conducted in 1995 and the Ginarte-Park Index of IPR Protection (1997). Smarzynska (2004) gives a bolder conclusion based on the results of her regressions—all firms may be affected by IPR protection because IPR regimes play a signaling role. The finding of Smarzynska (2004) is consistent with the findings of Mansfield (1995), who concludes that the "strength or weakness of a country's system of IP protection seems to have a substantial effect in relatively high-technology industries like chemicals, pharmaceuticals, machinery, and electrical equipment on the kinds of technology transferred to that country and the amount of FDI in that country by Japanese and German firms." This study utilized linear OLS regressions. Meanwhile, a study on the effect of IPR on inward FDI in the case of 14 Asian and Southeast Asian countries concludes that stronger IPR regimes increase the likelihood of FDI not only in the manufacturing sector but also in retail and distribution networks (Khan and Samad, 2010). Khan and Samad used panel data from 1970-2005, and empirical estimates were derived using pooled OLS techniques.

3.2 IPR-FDI relationship is insignificant or ambiguous

An argument for a relaxed IPR regime is that the owner of IPR abuses its monopoly power over the first placement of a protected good or service by preventing

parallel trade of the said good or service by third parties. Monopoly power is experienced by companies or individuals granted with IPR protection, since no other firm or person can replicate their product until such time that the IPR expires. A solution was applied in Europe and the US and was called the principle of exhaustion in the former and the first sale doctrine in the latter. Under this principle, IPR owners are no longer entitled to control the subsequent marketing strategies of the protected products beyond what is legitimately necessary to protect the subject matter of the rights (Transfer of Technology, 2001). Theoretically, the purpose of the exhaustion principle is justifiable. However, it is one of the most complicated regulations of international business. Fink and Maskus (2005) conclude that the welfare implications of a particular exhaustion regime are theoretically ambiguous due to limited empirical evidence and are likely to differ based on the form of IPR, and industry- and product-specific considerations. Braga and Fink (2001) provide evidence regarding the effect of patent protection on international trade by using a grav-


ity model of bilateral trade flows and estimating the effects of increased patent


protection. The aggregates used are limited to total non-fuel trade and high technology, based on expectations that the effect of stronger patent protection is more evident in knowledge-intensive traded goods and services. This study concludes that the effects of IPR on bilateral trade flows are theoretically ambiguous. However, the estimation of the gravity model provides empirical evidence that higher levels of protection have a significantly positive effect on non-fuel trade. The effect of IPR on high technology trade is statistically insignificant. To further widen knowledge on the IPR-FDI link, empirical research focusing on industry- and firm-specific variables should be done.

3.3 IPR-FDI relationship is negative

A surprising result was found by Yang and Maskus (2001), who sustain that IPR has a negative significant effect (1 percent level) on licensing receipts, whereas its squared term has a positive significant ef-

fect (10 percent level) under a pooled OLS model using data from 1988, 1993, and 1998. This result is counterintuitive, as one would think investors would want to enter into licensing agreements in countries with better protection of intellectual property. Yang and Maskus provide a conjecture. Consider a small nation with a limited skilled labor endowment where imitation risk is slight. Improving IPR protection will further minimize this risk, thus lowering licensing costs to the benefit of the investing firm. The monopoly pow-

er effect would dominate the economic returns effect. Under this condition, there is less incentive for the firm to enter into more licensing agreements as it can just exploit the monopoly power it has. The findings of Yang and Maskus (2001) are consistent with the study of Gathii (2015), which concludes that extending strong IP protection – with a particular focus on patents – on least developed economies is unlikely to yield the positive economic benefits of stronger FDI flows or higher growth. The author cites the case of China, as it experienced massive industrialization and FDI inflows under a less than satisfying intellectual property regime. This example argues against the conclusions that strong IP protection is a prerequisite for higher FDI inflows. The study limits the role that IPR plays in FDI inflows. Strong IPR protection is crucial for maintaining the competitive advantages of early industrializers, but may not be a crucial determinant for the emergence of new ones (Gathii, 2015). Meanwhile, an earlier study of Maskus (2000) suggests that stronger IPR protections have short term net welfare losses but dynamic benefits in the longer run. However, these expected benefits in the long run are best likely small and can be easily overrun by short term losses and high cost of implementing laws and policies.



4. Methodology

This section provides information on the sources of data, the estimation technique used, and preliminary data analysis.

4.1 Data Sources

The study uses a balanced panel dataset composed of 79 low and middle income countries with average per capita real GNIs not exceeding $12,475 over five separate three-year periods: 2000-2002, 2003-2005, 2006-2008, 2009-2011, and 2012-2014. The World Bank notes that countries belonging to low, lower middle, and upper middle income economies are classified as developing. The data for this study comes from various sources. The list of developing countries included in the sample is inspired by the list in Adams (2010). The data on these developing countries is mostly obtained from the World Development Indicators Databank of the World Bank. My data includes variables such as FDI Inflows, GDP growth, Trade Openness, Inflation, Population, Return on Investment, Telephone Subscriptions, and Total IPR. The net FDI inflows (investments to acquire a lasting management interest in an enterprise other than that of the investor) are obtained by summing equity capital, reinvestment of earnings, other long-term capital, and short-term capital. Total IPR is obtained by adding all the patent and trademark applications in the country within a certain year made by both residents and nonresidents. IPR is scaled by population in order to adjust for the size of the economy, since the number of IPR applications may suffer a "home advantage bias". Thus, IPR per capita is used as the quantitative measure of IPR (Falk, 2004).


Trade Openness is obtained by getting the value of total trade (exports plus imports) as a percent of GDP. Data on inflation is based on the Consumer Price Index (CPI) and is used as an indicator of macroeconomic stability. The natural logarithm of population is used as a proxy for market size. The inverse of GDP per capita is a proxy for return on investment as used by several authors, including Asiedu (2002), Adams (2010), and Quazi (2007). The rationale for the use of the inverse of GDP per capita as a proxy is that the return on investment is a measure of profitability and that, as such, it should be positively correlated with the marginal product of capital, which is expected to be high in capital-scarce developing countries where per capita income is low (Quazi, 2007). Telephone Subscriptions (per 100 people) are indicative of infrastructure development and access to


technology. The Risk variable is a composite indicator of the overall investment climate of a country. It is obtained from the Political Risk Services' Country Risk Guide and is made up of three subcomponents—political, financial, and economic risk. It is rated on a scale of 0 to 100, with 100 pertaining to the lowest risk.

The variables IPR Index, Foreign Ownership Restriction Index, and Business Regulation Index came from the Economic Freedom of the World dataset (EFW). The EFW index broadly reflects the extent to which a country is pursuing free market principles. The index is constructed by incorporating 50 independent variables. Ownership is a measure of the stringency of restrictions on foreign ownership of domestic assets. It is obtained from the Global Competitiveness Report questions "How prevalent is foreign ownership of companies in your country?" and "How restrictive are regulations in your country relating to international capital flow?". A score of 10 implies that the country is not restrictive at all and a score of 1 implies that the country is highly restrictive. Regulation is an alternative to the "Ease of Doing Business Index", which only started in 2013. This variable has six subcomponents in the EFW dataset—administrative requirements, bureaucracy costs, starting a business, red tape, licensing restrictions, and cost of tax compliance. A score of 10 implies less restrictive regulations and a score of 1 implies stricter regulations are enforced. It should be noted that around 97% of the countries in the sample (with the exception of only two countries) experienced reductions in their Ownership and Regulation scores from 2000-2014. This trend suggests that most developing countries implemented more restrictive regulations and foreign ownership policies. A possible explanation for this phenomenon is that this period captures

the aftermath of the 2008-2009 Global Financial Crisis. In the aftermath of the crisis, governments voluntarily enforced tighter capital controls. This limitation of the sample could affect the empirical results of the study (i.e., how these variables affect FDI inflows). The IPR Index used in the study is another component of the EFW. This EFW component came from the Global Competitiveness Report question: "Property rights, including over financial assets, are poorly defined and not protected by law or are clearly defined and well protected by law." A high score (close to 10) implies better property rights protection whereas a low score (close to 0) implies a weaker IPR regime.

4.2 Estimation Technique

Building upon the IPR-FDI study of Adams (2010) and various other researchers on FDI, the relationship between Intellectual Property Rights and Foreign Direct Investment inflows is an implicit function of the form:

[1] To be able to estimate the parameters of the model, the main qualitative econometric model using IPR protection index (IPRI) is:
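The displayed model is quadratic in the IPR protection index; a plausible rendering, with coefficient labels assumed from the variable descriptions that follow rather than taken from the original display, is:

$$FDI_{i,t} = \beta_0 + \beta_1\, IPRI_{i,t} + \beta_2\, IPRI_{i,t}^2 + \gamma' X_{i,t} + \delta' Z_{i,t} + \epsilon_{i,t}$$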

where i indicates the country and t the year, β_0 refers to the constant term, and ε_i,t is the composite error term. The main quantitative econometric model using per capita IPR is:
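Analogously, a plausible rendering of the per capita IPR model (again with assumed coefficient labels) is:

$$FDI_{i,t} = \alpha_0 + \alpha_1\, IPRPC_{i,t} + \alpha_2\, IPRPC_{i,t}^2 + \gamma' X_{i,t} + \delta' Z_{i,t} + \epsilon_{i,t}$$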

where i indicates the country and t the year, α_0 refers to the constant term, and ε_i,t is the composite error term. The variables ownership, regulation, per capita IPR, and IPR index are included in their squared forms in order to capture any non-linear relationship with FDI inflows. Dynamic panel data estimators have been used in many studies dealing with FDI. Examples of these estimators in-


clude Arellano-Bond (1991) and Arellano-Bover (1995)/Blundell-Bond (1998), where dynamic models of the first-differenced equations are estimated by the Generalized Method of Moments (GMM). Due to the weaknesses of the Arellano-Bond (1991) estimator, Arellano-Bover (1995) and Blundell-Bond (1998) developed a system of regressions in differences and levels. The lagged levels of explanatory variables are used as instruments for the regressions in differences, while the lagged differences of explanatory variables are used as instruments for the regressions in levels. Both difference and system GMM are designed for panel data meeting the following characteristics: (1) few time periods and many observations, (2) a linear functional relationship, (3) one left-hand side variable that is dynamic, (4) independent variables that are not strictly exogenous, (5) fixed individual effects, and (6) heteroscedasticity and autocorrelation within but not across individual observations (Roodman, 2009). Previous studies that use SGMM estimates observe that it is an effective method to correct for the problems of endogeneity of the regressors, measurement error, and omitted variables. This method of estimation can eliminate biases that arise from ignoring dynamic endogeneity and also accounts for simultaneity and unobservable heterogeneity (Alege & Ogundipe, 2014; Davidson & Mackinnon, 2004). GMM can be estimated using either a single-step or a two-step method. The two-step GMM gives a more asymptotically efficient estimate since it uses the consistent variance-covariance matrix from the first-step GMM methodology. Any bias in the two-step standard errors is corrected by Windmeijer's (2005) small-sample correction, incorporated in the Stata command xtabond2, which is used for the regressions in this paper. It is with these arguments that the two-step SGMM is selected as the appropriate estimation technique for this study. We transform the above equations (2) and (3) into GMM form. Transforming the main quantitative econometric equation (2) into SGMM form, we obtain:

[4] Transforming the main qualitative econometric equation (3) into SGMM


form, we obtain:

[5]

where X_i,t is a matrix of control variables including GDP growth, trade openness, return on investment, the natural logarithm of population, risk (investment climate), ownership and ownership squared, regulation and regulation squared, inflation, and telephone lines (per 100 people); the Z_i,t are time dummies; and ε_i,t is a composite error term consisting of u_i, which is time invariant and accounts for any unobservable individual country-specific effects not included in the regression, and the stochastic term v_i,t.

The marginal effect of IPR on FDI determines how incremental changes in IPR regimes affect FDI inflows. Marginal effects are obtained by:

[6] [7]

where IPRI and IPRPC are evaluated at the median observations of the variables. A positive marginal effect implies that an increase in the quantity of IPR, or an improvement in the quality of its protection, increases FDI inflows; a negative marginal effect implies that it decreases FDI inflows. While the quantity of IPR generally affects the quality of the IPR regime, there are other factors involved in determining the strength or weakness of IPR regimes. In other words, higher IPRPC values do not necessarily imply higher IPR index scores, so a scenario where the marginal effects of equations (6) and (7) differ is possible. In the case where nonlinearities are observed, we want to obtain the critical point of the equation. To do so, we set equations (6) and (7) to zero and solve for the variable of interest. Obtaining the critical value of equation (6), we have

[8]

Obtaining the critical value of equation (7), we have

[9]
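Given these quadratic specifications, and with the coefficient labels assumed above rather than reproduced from the original displays, equations (6) through (9) take the standard form:

$$\frac{\partial FDI_{i,t}}{\partial IPRI} = \beta_1 + 2\beta_2\,\overline{IPRI}, \qquad \frac{\partial FDI_{i,t}}{\partial IPRPC} = \alpha_1 + 2\alpha_2\,\overline{IPRPC},$$

$$IPRI^{*} = -\frac{\beta_1}{2\beta_2}, \qquad IPRPC^{*} = -\frac{\alpha_1}{2\alpha_2},$$

where the bars denote evaluation at the sample medians and the starred values are the turning points of the estimated quadratic relationships.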

4.3 Preliminary Data Analysis

Table 1 reports the summary statistics of the variables used in the study. It reports the mean, standard deviation, minimum value, and maximum value for all 79 countries within the time period. It can be observed that there are significant variations in variables such as FDI inflows, inflation, and GDP growth, where the minimum values are negative and the maximum values are positive.

5. Discussion of Results

The models above are estimated using dynamic panel data estimation. More specifically, two-step SGMM was used.
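As a rough illustration of this estimation, an xtabond2 call of the following kind could be used; the variable names are hypothetical and the instrument and option choices are indicative rather than the exact command used in the study.

    * Hypothetical sketch of a two-step system GMM estimation with the
    * Windmeijer small-sample correction; xtabond2 is user-written
    * (ssc install xtabond2) and the variable names are illustrative only.
    xtset country period
    gen ipri2 = ipri^2                  // squared IPR protection index
    xtabond2 fdi L.fdi ipri ipri2 gdpgrowth openness roi lnpop risk inflation, ///
        gmm(L.fdi, lag(2 .)) ///
        iv(ipri ipri2 gdpgrowth openness roi lnpop risk inflation) ///
        twostep robust small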

4.3 Preliminary Data Analysis

Table 1 reports the summary statistics of the variables used in the study: the mean, standard deviation, minimum value, and maximum value for all 79 countries within the time period. There are significant variations in variables such as FDI inflows, inflation, and GDP growth, where the minimum values are negative and the maximum values are positive.

5. Discussion of Results

The models above are estimated using dynamic panel data estimation; more specifically, two-step SGMM is used. In this study, SGMM is preferred and applied because (1) the authors mentioned above report favorable experience with SGMM in their respective FDI-related studies, and (2) there is the possibility of encountering endogeneity and unobservable heterogeneity when the model is estimated with simple linear dynamic panel methods. Table 3 presents the results for both qualitative and quantitative models. Intellectual Property Rights, the variable of interest, is significant in both models, but its marginal effect on FDI varies. Per capita IPR, which proxies for the quantity of IPR in a country, has a negative effect, implying that countries with more patents, trademarks, and copyrights have lower FDI inflows. On the contrary, the IPR index has a positive effect, indicating that strong IPR protection encourages the entry of foreign direct investment into the host country. Moreover, the regressions find nonlinearities between IPR and FDI in both models. This result supports the findings of Asid (2004) and contradicts those of Adams (2010). A deeper discussion of the IPR variables is conducted in the subsections below. In both models, Lagged FDI inflows, Risk, Return on Investment, and Trade Openness have positive and


significant effects on FDI. The risk variable is significant at the 5% level (1% in the qualitative model), which indicates that the political and economic conditions and the overall investment climate of a country are an important determinant of FDI inflows. It should be noted that all countries in the sample are classified as developing countries, which makes an improvement in their investment climate all the more significant in attracting FDI. The Return on Investment (ROI) coefficient is positive and significant. Theoretically, ROI should be positively correlated with the marginal product of capital, which is high among capital-scarce developing countries where GDP per capita is low. Therefore, the inverse of per capita GDP, the proxy used here for ROI, should positively and significantly affect FDI

inflows. This result is consistent with studies concerned with the determinants of FDI inflows, such as Quazi (2007) and Asiedu (2002). Trade openness is another positive and significant variable in both models. As mentioned before, trade openness is the volume of trade as a percentage of GDP. Many developing countries in the sample moved towards export-oriented industrialization strategies in the years covered by this study. More specifically, many South American countries embraced import substitution industrialization strategies until the late 1980s (Quazi, 2007). After decades of sluggish economic conditions, they began shifting their philosophy towards export-oriented industrialization, and the full-swing effects of this shift on FDI inflows are also part of


what was captured in the positive and significant trade openness coefficient. Generally, then, countries that changed their economic philosophy and turned outwards toward globalization appeared more favorable and attractive to foreign investors. Typically, foreign investors refrain from investing in countries that have not had favorable amounts of FDI inflows in the past. Instead, they choose to invest their resources in countries that have demonstrated that they are capable recipients of financial and capital investments. Companies or individuals who plan to test relatively unknown territories will stagger their investment levels until they reach the desired amount of investment (Quazi, 2007). Therefore, positive incremental lagged changes in FDI levels positively affect FDI inflows, and this is true for both quantitative and qualitative models. The coefficient of population is negative but insignificant in both models, which implies that the absolute number of people in a country does not affect FDI inflows. However, studies such as Khan's (2010), which used population growth as a proxy for the size of the labor force, showed that population growth positively and significantly affected FDI inflows. The coefficient of GDP growth is positive but only significant in the quantitative model. A possible explanation is that the GDP growth rate becomes a less important determinant of FDI inflows when indices of how property rights are protected in the host country are included in the equation. This result contradicts the findings of Adams (2010) and Nunnenkamp (2003), who both used qualitative indices of IPR protection and found that GDP growth positively and significantly affected FDI inflows. The variable telephone lines per 100 people is insignificant in both models. This is consistent with the findings of Adams (2010). A significant number of countries in this sample are in the Sub-Saharan African region, where average telephone line subscriptions are 1.5 per 100 people. This variable is also indicative of underdeveloped technology and infrastructure in developing countries, which can discourage foreign investment. The coefficient of the foreign ownership restriction index is negative and significant. This is counterintuitive and contradicts theoretical models suggesting that countries where foreign ownership of companies is allowed and more prevalent are likely to attract more FDI. This can be seen in the case of China, where


its gradual opening up from State-Owned Enterprises (SOEs) to Joint Ventures (JVs) and finally to Wholly Owned Foreign Enterprises (WOFEs) significantly increased its FDI inflows. The significant coefficient on Ownership suggests that it is indeed a factor considered by foreign investors. Its negative value can be explained by taking a closer look at the trend of the ownership observations. In all but two of

the 79 countries in the sample (Sri Lanka and Vietnam), the scores of ownership in the first period (2000-2002) are higher than in the last period (2012-2014). Therefore, it can be inferred that a relaxation of the foreign ownership environment of the sample countries is not captured between 2000 and 2014. Simply put, in all but two of the 79 countries in this study, foreign ownership of domestic companies is generally more relaxed and prevalent in 2000-2002 than in 2012-2014. This can explain why the coefficient on ownership is counterintuitive. Meanwhile, the value for the business regulation index is significant only in the qualitative model and is negative in both. This result is consistent with the findings of Quazi (2007), who found a significantly positive correlation between FDI inflows and more repressive regulation. Of the nine countries studied by Quazi, only Mexico and Argentina experienced less repressive regulations, whereas the seven remaining countries in the sample are heavily regulated. The explanation the author provides is that even the seven heavily regulated countries received far higher FDI inflows, and that is what the estimation captures. Moreover, these seven countries scored higher in other FDI-inducing measures, which arguably could have offset the heavy regulation scores. Similar to the case of Quazi (2007), the majority of the


countries in the sample of this study are heavily regulated, but my findings suggest that more heavily regulated countries received more FDI inflows. In fact, less than five percent of the sample experienced relaxed business regulation over the fifteen-year period. It is possible that the effect of heavy regulation is offset by other FDI-inducing incentives. Furthermore, Ownership and Regulation are both found to have nonlinear effects on FDI inflows, as will be discussed in detail in the following subsections. The general rule that the number of instruments must be less than or equal to the number of groups is satisfied by both models. Another important diagnostic test in dynamic panel data estimation is the Arellano-Bond test for autocorrelation of the residuals [AR(x)], where the null hypothesis is "no autocorrelation of order x". Even if the level residuals are uncorrelated, their first differences are expected to exhibit first-order correlation, so the AR(2) test is the more informative check for serial correlation of the residuals. Based on the figures reported in Table 3, neither the quantitative nor the qualitative model shows signs of problematic serial correlation in the residuals. Meanwhile, the Sargan and Hansen tests assess the exogeneity of the instruments, with the null hypothesis that the instruments as a group are exogenous; a high p-value means this null is not rejected, which is the desired outcome. In both tests, the failure to reject the null hypothesis gives support to the model, and an overall consideration of the Sargan and Hansen p-values suggests that the instruments are valid. The coefficients of both the quantitative and qualitative System GMM estimations satisfy the Fixed Effects < Generalized Method of Moments < Random Effects condition for effective GMM estimators (Blundell and Bond, 1998).

5.1 Results Using IPR Index

Following all available studies discussing the IPR-FDI link, this paper estimates the IPR-FDI relationship using the conventional approach, that is, using a qualitative index of IPR protection. Table 5 shows the estimation results of the model using the IPR Index (IPRI) as a proxy for IPR. This measure is a sub-component of the Economic Freedom of the World Composite Index. For comparison purposes, three different estimation techniques (fixed effects, random effects generalized least squares, and SGMM) are used. The qualitative model builds on the model used by Adams (2010). The variable of interest, the IPR index, has a positive and significant marginal effect on FDI inflows. This is the intuitively expected relationship between IPR and FDI and is consistent with the studies of Adams (2010), Smarzynska (2004), Khan & Samad (2010), Maskus (1998), and Mansfield (1995). The empirical results provide substantial evidence that improvements in the formulation and enforcement of IPR laws and the overall bureaucratic climate of IPR regimes will positively and significantly affect FDI inflows. Smarzynska (2004) finds a positive relationship between IPR and foreign investment in a firm-specific sample from high-technology sectors. This result is used to support the argument that IPR protection benefits not just high-technology sectors but also firms in other industries, because IPR can be interpreted as a signal. Mansfield (1995) also reports a sector-specific study on the IPR-FDI link and finds that industries such as chemicals, pharmaceuticals, and electrical equipment benefit most from the transfer of technology brought about by FDI. A region-specific study is conducted by Khan and Samad (2010), who find that in 14 Asian and Southeast Asian countries, stronger IPR regimes increase the likelihood of FDI in manufacturing and retail sectors. Meanwhile, a study by Seymour (2006) uses both developed and developing countries and reports a significant positive effect of patent protection on FDI. The present study uses developing countries, and the empirical results are consistent with those mentioned.




Furthermore, nonlinearity is also observed in the relationship between IPRI and FDI, as validated by the regression results in Table 5. While Adams (2010) fails to observe this nonlinear relationship, Asid (2004) suggests that patterns of patent protection follow a "U" shape but is ambiguous about the break-point level of IPR. A nonlinear relationship implies that as the level of protection increases, the level of FDI inflows would also increase after reaching a certain turning point.

The coefficients of Lagged FDI and Return on Investment are consistently positive and significant in all three techniques used, which suggests the importance of both variables as determinants of FDI inflows apart from strengthening a country's IPR regime. The significance of these variables indicates that long-term profitability is a critical factor for foreign investors. This model also shows that ownership and regulation have negative marginal effects on FDI inflows, which is consistent with the results of the model using per capita IPR. To reiterate, it could be argued that the sample, being limited to low- and middle-income countries over a span of fifteen years, fails to capture instances where ownership restrictions were relaxed and business regulations became significantly less repressive.

5.2 Results Using Per Capita IPR

Of the available literature, no study has investigated the relationship between the quantity of IPR applications and FDI inflows. All available studies use indices such as the Ginarte-Park (GP) index and the Economic Freedom of the World IPR protection index (EFW). This is under the assumption that if IPR is indeed a factor considered by foreign investors, what matters is how well these rights are protected rather than how many IPR applications there are. However, the quantity of IPR applications can also be indicative of institutional quality, since investors would not file applications in countries with weak and underdeveloped IPR regimes. The marginal effect of per capita IPR on FDI inflows is first presented in Table 6.

The variable of interest, per capita IPR, has a negative marginal effect on FDI inflows using both the median and the mean. Simply put, as patent, trademark, and copyright applications in a country increase, less FDI inflow is observed. This is true for at least half of the countries in the sample. It could be argued that the quantity of IPR registered in this sample of developing countries has not yet reached the threshold necessary to affect FDI inflows. This nonlinear relationship implies that even if the sheer quantity of IPR applications is steadily increasing, it will not translate into higher FDI inflows unless the stock of IPR



applications reaches a certain threshold. To obtain the said threshold, equation (9) is used.

[9]
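Because the estimated coefficients appear in Table 8 rather than in this text, the following Python sketch only illustrates the computation implied by equation (9); the variable names and coefficient values are placeholders, not the paper's estimates.

    # Turning-point computation implied by equation (9); values are placeholders.
    b_iprpc = -0.8     # hypothetical coefficient on per capita IPR
    b_iprpc_sq = 0.02  # hypothetical coefficient on per capita IPR squared

    # Setting the marginal effect b_iprpc + 2 * b_iprpc_sq * IPRPC to zero
    # and solving for IPRPC gives the critical (threshold) value.
    iprpc_threshold = -b_iprpc / (2 * b_iprpc_sq)
    print(iprpc_threshold)  # 20.0 with these placeholder values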

To illustrate the context, Table 9 presents the per capita IPR values at different percentiles. Table 8 shows the estimation results of the model using per capita IPR (IPRPC) as a proxy for IPR. For comparison purposes, three different estimation techniques (fixed effects, random effects generalized least squares, and SGMM) are used. The coefficients of Lagged FDI and GDP growth are consistently positive and significant in all three techniques, which suggests the importance of both variables as determinants of FDI inflows.

Ownership and Regulation both have negative marginal effects on FDI inflows. As mentioned earlier, this result is counterintuitive: as countries relax foreign ownership restrictions and ease regulation on businesses, they should appear more attractive to foreign investors. These negative marginal effects could be driven by the scarcity of countries in the sample that relaxed ownership restrictions and eased repressive regulation within the period covered. However, it is important to note that the nonlinear relationship suggests that at a certain point, both variables will positively and significantly affect FDI inflows. To reiterate, the GMM estimates are effective since they satisfy the "Fixed Effects < Generalized Method of Moments < Random Effects" condition for GMM estimators (Blundell and Bond, 1998).

6. Policy Implications and Concluding Remarks

The empirical results of the study suggest that strengthening IPR regimes has a positive marginal effect on FDI inflows. However, the relationship between IPR and FDI is not direct: this study finds sufficient evidence that strengthening IPR alone is not enough to increase FDI inflows. The overall investment climate, ownership restrictions, business regulations, and macroeconomic indicators of a country all affect the decision-making process of foreign investors. As a matter of fact, analyzing regional


statistics, if strengthening IPR regimes alone could generate additional FDI, then, between the Middle East and North Africa (MENA) and Southeast Asia and the Pacific (SEA), the former should have received higher average FDI inflows as a percent of GDP, since MENA has an average IPRI of 5.50 while SEA has an average IPRI of 5.01. However, based on the data, MENA countries had average FDI inflows of 2.04% versus 3.27% for SEA countries. A possible explanation is that the Ownership and Regulation scores of SEA countries (6.3 and 6.2, respectively) are higher than those of MENA countries (5.6 and 5.9, respectively), and these indicators may have been given more weight by foreign investors. This finding is consistent with the experience of China, where FDI inflows surged after it gradually opened up its business ownership restrictions while maintaining a weak IPR regime.

Moreover, a nonlinear relationship between per capita IPR and FDI inflows is observed and a negative marginal effect




is reported. Therefore, for at least half of the countries in the sample, the quantity of IPR has not yet reached the threshold necessary to affect FDI inflows. This means that even if the sheer quantity of IPR applications is steadily increasing, it will not translate into higher FDI inflows unless the stock of IPR applications reaches a certain threshold. This threshold is found to lie between the 95th and 99th percentile observations. Therefore, instead of aiming to increase IPR applications by attracting more investors to apply for patents, trademarks, and copyrights, IPR regimes should focus on ensuring that these applications are protected against infringement. The strategic policy direction is simultaneous bureaucratic and institutional reform, so that government agencies do not have overlapping responsibilities and jurisdictions. FDI will only increase significantly when red tape is eliminated, business setup is simplified, and government intervention is minimized. Apart from these determinants, this

study suggests that stronger IPR regimes are an additional incentive for foreign investors. To strengthen IPR regimes, this study suggests a two-pronged solution: reach the threshold and improve protection. Improving protection without incentivizing individuals and firms to register intellectual property will not positively affect FDI inflows, as shown in this study. On the other hand, simply reaching the minimum stock of IPR would not translate into higher FDI inflows if not accompanied by improvements in the institutions that protect these property rights. Note that the minimum stock of IPR computed in this study will be an ever-changing indicator depending on developments in IPR regimes and population growth rates. Further research should be devoted to industry-specific and firm-specific data (i.e., how IPR quantity and protection quality affect investment in a specific industry). It should be emphasized that this study did not aim to establish the effectiveness of the 1995 TRIPS agreement in IPR regimes, since the time period covered falls entirely in the post-TRIPS era. Moreover, the 79 countries are limited to low- and middle-income countries with average per capita real GNI of no more than $12,475, and the data gathered cover a total of 15 years. This sample limit could have contributed



to the counterintuitive results on the ownership and regulation variables. Finally, policymakers should keep in mind that encouraging FDI is not the end goal of policy. FDI is a tool to generate jobs, make use of idle resources, and benefit from the transfer of more efficient technology, all of which contribute to economic development. This research provides sufficient empirical evidence to claim that, if IPR regimes are strengthened and the overall economic climate is improved, developing countries will ultimately benefit from FDI.





My Pay or the Highway

Payment Schemes in Online Marketplaces: How Do Freelancers Respond to Monetary Incentives?

Justine Moore
Stanford University

There has been a change in the concept of the ideal American worker; one of the disruptors has been the idea of freelancing, in which people work for different companies at different times rather than being permanently employed by one company. The author's paper was published in this edition of the Columbia Economics Review because it is a pioneering work that explores the rise of online labor marketplaces, specifically how incentive schemes can prove effective for freelancer productivity. The paper provides valuable insight into a segment of the labor market that not only has been relatively unexplored but is also growing at a brisk rate. The author's conclusions have wide-ranging implications for worker effectiveness, labor policy, and the metrics that govern online labor marketplaces. -R.S.

I. Introduction

Several secular trends have contributed to exponential growth in the online freelancing marketplace over the past several years. As Internet security allows for safer online transactions and new software creates an opportunity for employers to effectively monitor remote employees, many companies have become more comfortable conducting business online. In addition, the proliferation of Internet users in emerging economies has allowed employees and employers in these countries to join online labor marketplaces that they previously could not access. The entrance of these employees has been crucial in encouraging employers in developed countries to participate in online marketplaces, as outsourcing tasks to developing nations allows employers to reduce their costs by hiring freelancers willing to accept low wages. There are also significant benefits for the freelancers, as working online provides flexibility that traditional jobs typically don't allow—online freelancers can work

anywhere with an Internet connection and can often be employed in multiple jobs at a time. Freelancers in developing nations may be able to earn higher wages online than in the traditional labor market in their area, and freelancers in developed


nations can supplement their income or even live off of their freelancing if they have specialized and highly desired skills. Although online freelancing is becoming increasingly popular for the reasons mentioned above, employers struggle with the inability to directly monitor their freelancers. Freelancers may be working halfway around the world while their employer is sleeping, which makes it difficult for employers to track their freelancers’ productivity. In this type of marketplace where close monitoring is impossible, incentives are crucial in aligning the freelancers’ interests with those of the employer by motivating freelancers to spend their time productively. Few studies have examined how to best incentivize freelancers in online marketplaces, and to the best of my knowledge, none have explored how the optimal incentive scheme may differ between freelancers with different measurable characteristics such as level of work quality, as reflected in their job success scores. Therefore, since many employers are somewhat arbitrarily selecting payment schemes for their


employees without consideration of the potential impact on work quantity and quality, the online freelancing market is likely operating inefficiently. With more effective and individualized incentive schemes, employers could extract greater productivity from their freelancers without necessarily having to pay them more. I conduct an experiment on Upwork, the world's largest online freelancing marketplace, to examine how freelancers respond to a monetary incentive. I hire freelancers to work on a simple data extraction task under one of two incentive schemes: a basic hourly wage, and an hourly wage with a piece rate bonus based on performance. The piece rate bonus was designed with the intent that the average freelancer in this incentive group would receive the same hourly wage as the freelancers in the basic hourly wage group. The freelancers are randomly assigned to one of the two incentive schemes. Both groups undergo a 30-minute training session to learn how to successfully complete the task, and then spend an hour and a half working, with their submissions recorded in individual Google forms. I then analyze the freelancers' performance to determine whether one incentive scheme is superior in motivating freelancers to increase the quantity and quality of their work. I find that the incentive does not have a significant effect on any of my measures of performance for the group of freelancers as a whole. However, when I analyze the interaction between the incentive schemes and a freelancer's measurable characteristics, I find that the incentive does influence freelancer behavior in specific sub-groups of freelancers. When freelancers with high job success scores are offered an incentive, their output declines, and their accuracy improves only slightly. The opposite occurs when freelancers with low job success scores are offered an incentive—both their output and the quality of their work improve significantly.

II. Literature Review

My paper builds upon prior research on both the role of reputation in online marketplaces and how various wage systems and other incentives affect work product in traditional and online marketplaces. This literature review examines both types of research and discusses how findings from prior studies are relevant to my paper.

Reputation in Online Marketplaces

Most studies on the role of reputation in online marketplaces use data from eBay, as it was one of the first e-commerce sites and it stores data from millions of transactions. In an early study using eBay data, Standifird (2001) examines the impact of a seller's positive and negative reputational ratings on the final bid price for an item, and finds that a strong positive reputation can drive up prices after the seller exceeds a certain threshold of positive comments. However, he also finds that negative ratings are significantly more influential than positive ones in determining the final price of the item. Several other studies (e.g. Houser, 2006; Wolf, 2005; Melnik, 2002) confirm the finding that sellers with good reputations receive price premiums in eBay auctions, though Jin (2006) finds that reputation is positively correlated with an increased number of bids but not with a higher price. Livingston (2002) also concludes that sellers with positive reputations are rewarded with higher prices, though he finds that the marginal return to positive

feedback is decreasing. Cabral (2010) focuses exclusively on the effect of negative ratings, and finds that a seller's weekly sales growth rate drops from positive to negative after he receives his first negative feedback. Subsequent negative feedback arrives much more quickly, but doesn't have nearly the same impact on the sales growth rate. Cabral suggests that sellers change their behavior after receiving their first negative review, as they may be discouraged and put less effort into providing superior service for their customers, which

increases the probability of receiving subsequent negative feedback. Whether or not this also occurs with online freelancers when they receive poor feedback from an employer has not been explored. In some instances, the authors participate in the online marketplace they are studying to collect their own data on transactions, which is similar to what I do on Upwork. Resnick et al. (2006) sold matching items using both the account of an experienced eBay seller and new accounts (with no reputation scores or feedback), and find that buyers paid more to purchase the postcards from the established seller. The authors then left negative feedback on the new seller accounts before selling more items. Surprisingly, this feedback had no effect on the price the buyers were willing to pay for the postcards, even though the negative reviews constituted a significant amount of the feedback that the sellers had received. The authors therefore hypothesize that buyers may treat all new sellers as untrustworthy, regardless of whether or not they have negative feedback. Jin (2006) also extends the typical data collection study by purchasing baseball cards from eBay sellers and hiring professionals to determine their true value. He finds that sellers who claim they have a high quality card receive a price premium, but when these "high quality" cards are evaluated by professionals, they are no better than other cards without this label. Sellers with good reputations are less likely to claim that their cards are high quality, and are also less likely to send counterfeit cards or simply default on the sale and send no card. However, they are not any more likely to send high quality cards. This suggests that a seller's reputation may be more indicative of how likely they are to "cheat" by sending a fake item or no item at all than of the quality of their items. Most directly relevant to my experiment is a study by Pallais (2014), who hired workers to complete a task on Upwork. When the task was complete, Pallais posted either an "uninformative" comment or a detailed comment with objective information about the worker's performance on each worker's profile. Pallais then tracked the subsequent employment outcomes of workers in all three groups—the two treatment groups and the control group of workers she did not hire. She finds that workers in the uninformative comment group were significantly more likely than the workers in the control group to find subsequent work, and also requested higher wages and had


higher earnings from future employers. Workers in the detailed feedback group were even more likely to be employed, requested even higher wages, and had even higher earnings. This suggests that employers believe that a job success score is a legitimate signal of a worker's quality—they are more willing to hire freelancers with more positive feedback, and are also more willing to pay them higher wages. While these studies come to different conclusions about the relative value of various types of feedback, they all conclude that counterparties in online transactions use reputation systems to make decisions about what to purchase (or who to hire), and how much they are willing to pay. This indicates that these reputation systems provide some value in revealing information about certain qualities of a user, though there is debate about exactly what these qualities are. Therefore, we can expect that a freelancer's job success score on Upwork will be useful in revealing something about the freelancer, whether it is the quality of their work, the amount of work they are able to accomplish in a specific amount of time, or another characteristic that employers value. As a result, we might expect to see that a freelancer's job success score does provide valuable information about how a freelancer will respond to incentives.

Wage Systems and Incentives

Much of the literature regarding the effects of financial incentives on employee performance builds upon the seminal framework developed by Holmstrom and Milgrom (1991). Their model assumes that employees have either more than one task to complete or that there are multiple elements to one task, and therefore incentives influence not only how much effort employees exert but also how they allocate time among various responsibilities. According to this model, performance-based incentives may not always be effective, particularly when performance is easily measured for one task but not for another. The authors give an example of workers producing machines. Since quantity is more easily measured than the quality of output for this task, a piece rate bonus based on output may encourage workers to produce more at the expense of quality. The model suggests that when it becomes more difficult to measure performance in competing activities, it is less desirable to provide incentives for the activity where performance is easily measured,

as workers will neglect the other activity. Holmstrom and Milgrom conclude that for some tasks, a fixed wage independent of performance is the optimal incentive scheme.

Dana (1993) explores the question of whether an hourly wage, fixed fee, or contingent fee is the optimal compensation scheme for attorneys, and concludes that the contingent fee system (in which the attorney receives a percentage of the money awarded to the client) is optimal. Contingent fees serve as a performance-based incentive, because the attorney receives nothing if the case is lost and a fixed percentage of the award if the case is won. Therefore, this system aligns the attorney's incentives with the incentives of his or her client—the attorney is motivated to win the case, not to charge as many hours as they can (which an hourly wage incentivizes them to do) or to spend as little time on the case as they can get away with (which a fixed fee incentivizes them to do). Ritter and Taylor (1999) have a more pessimistic view of performance-based incentives. Their model suggests that low productivity workers become discouraged (and therefore perform worse) when they see high productivity workers, who typically respond dramatically to these incentives, doing significantly better. The authors use this model to predict that while a firm can increase profits by implementing an incentive scheme by which workers compete for performance-based incentives, it is demotivating to low productivity workers. This model is particularly interesting because it suggests that there may be differing responses to incentives between different types of workers, which I will be examining in my experiment.

Shearer (2004) and Lazear (1996) both analyze data from companies that switched from a fixed wage compensation system (where workers were paid a set amount to complete a job) to a piece rate compensation system (where workers were paid per unit of output). They both find that worker productivity increased by 20-30% after the switch, despite the fact that the workers were paid less per unit of output under the piece rate system. Lazear notes that some of this increase in output is due to a change in the composition of the workforce, as the piece rate wage system attracts higher quality workers and decreases turnover amongst the high quality workers who are already employed at the company. Lazear also finds that there is more variance in the productivity of workers under the piece rate system, as high ability workers are incentivized to produce more and have the capacity to do so. This suggests that in my experiment, if we assume that high ability freelancers have high job success scores, these freelancers may react more strongly to the incentive than those with lower reputation scores. Sauermann (2014) examines the impact of performance-based incentives using data from a company that switched from an hourly wage scheme to compensation based on quality of work. He finds that after the switch, the average worker's performance improved, but the improvement was three times larger for low ability workers than for average ability workers. For high ability workers, performance-based pay had a negative effect—these workers had no reason to respond to the incentive because their performance under the hourly system would have already qualified them for the highest level of performance pay. Interestingly, Sauermann finds that quantity of work did not drop after the performance-based pay was implemented, which seems to contradict Holmstrom and Milgrom's model. Regardless, this result suggests that workers with low reputation scores may have the most significant response to a financial incentive. Shi (2010) and Heywood et al. (2013) conducted experiments on how workers react to a switch from an hourly wage to a piece rate wage that compensates workers per unit of output. In both experiments, the quantity of output increased significantly after the implementation of the piece rate system, and survey results indicate that high quality workers significantly preferred (and benefitted the most from) the switch. Shi finds that the quality of the work doesn't change under the



piece rate system, while Heywood et al. find that quality improves for workers who are closely monitored and declines for workers who are not closely monitored. Therefore, we may expect to see that workers with high job success scores have the most significant response to the performance-based incentive. Given that Upwork freelancers are not particularly closely monitored, we may also expect to see an increase in quantity of output but a decrease in quality. Gneezy and Rustichini (2000) conducted an experiment to determine the effect of incentives on the results of an IQ test. All of the participants were randomly sorted into one of four incentive groups, each of which was awarded a different piece rate bonus for each question answered correctly. The authors find that participants in the two highest piece rate groups scored significantly better on the IQ test than participants in the lowest piece rate group and the group that received no incentive. When they sort participants by ability, they find that this holds true for all subgroups of participants, and that the change in performance motivated by the incentives is similar for each subgroup. If this finding holds true for my experiment, I would expect to see that a worker's job success score does not influence his or her response to incentives. My paper also builds off findings from several studies focusing exclusively on incentives in Mechanical Turk, an online freelancing marketplace run by Amazon that typically hosts shorter-term jobs than those on Upwork. On Mechanical Turk, workers are often hired on a piece rate basis to complete small portions of a larger task. Early papers focusing on the effect of incentives on the performance of Mechanical Turk workers by Mason and Watts (2009) and Horton and Chilton (2010) suggest that the magnitude of a financial incentive has a significant effect on the quantity of output, though not

necessarily on the quality. Both papers use data from experiments in which Mechanical Turk workers were randomly assigned to one of two piece rates per unit of output, and find that workers receiving a higher piece rate payment are willing to continue the task for significantly longer. Rogstadius et al. (2011), Yin et al. (2013), and Harris (2011) all expand upon this work by adding to the complexity of the basic experiment. Rogstadius et al. (2011) explore the effects of both intrinsic and extrinsic incentives, and confirm the finding that higher pay increases the quantity of work produced but does not affect the quality. However, they find that quality of work improves when workers believe that their work product is benefiting a charity. Yin et al. (2013) find that performance-contingent financial rewards had no effect on either the quantity or quality of work, contrary to the results of previous studies, but that changing the amount of the reward did have a significant effect on work quality and quantity—if a worker's bonus increased, he or she completed more tasks with improved accuracy. Therefore, the authors conclude that there is a powerful anchoring effect, as workers use their first payment to form their perception of a fair wage for the job, and respond accordingly when their wage increases or decreases. Harris (2011) explored the effect of both positive and negative incentives on the performance of Mechanical Turk workers. He randomly sorted freelancers into one of four incentive schemes: baseline (no performance-based incentive), positive (base rate plus bonuses for accuracy), negative (base rate minus penalties for inaccuracy), and combined (base rate plus bonuses for accuracy and minus penalties for inaccuracies). He found that the freelancers working under the positive, negative, and combined incentive schemes produced significantly more accurate work than those working under the baseline incentive scheme. The positive incentive group had the best performance, followed closely by the combined incentive group and then the negative incentive group. In my experiment, I attempt to determine how conventional wisdom on the power of incentives applies to online freelancers with different characteristics, particularly job success scores. Some studies suggest that a freelancer's job success score will influence his or her response to incentives, while others suggest that job success score and incentives should not interact. Intuitively, it makes sense that if

a high ability worker has to exert a minimum amount of effort to keep his or her job in an hourly wage system, he or she will have greater capacity for improvement than a low ability worker when an incentive is introduced. However, this does not always hold true in experiments, which suggests that there is more work to be done in this field. In addition, my paper is one of the first to use data from a new online marketplace—Upwork—which typically hosts longer-term jobs than Mechanical Turk. Upwork's job postings therefore more closely resemble jobs in the traditional labor market.

III. Study Design

I conducted an experiment on Upwork to explore the questions of how a financial incentive influences a freelancer's performance and how the effect of this incentive differs between freelancers. The experiment was designed so that all freelancers had exactly the same experience with the task (with the exception of the financial incentive) to ensure that differences in performance could be attributed to the incentive. The freelancers were hired on a first-come, first-served basis (with the exception of candidates without a job success score, who were automatically disqualified) until I exhausted my budget, and the freelancers were then randomly sorted into one of two incentive groups. They also participated in a training session before I started tracking performance, to ensure that they were


starting on a relatively level playing field with regard to their understanding of how to complete the task. However, the task was specifically designed not to require any advanced skills—freelancers were only asked to be able to read and understand English, and to extract data from a file. To maintain the integrity of the experiment, it was important that the freelancers believed that they were working on a typical Upwork job, not participating in an experiment. As I have been hiring

freelancers on Upwork for the past two years, mostly for real tasks related to data extraction and processing, my account looked legitimate—I had posted eight jobs and had 32 reviews from freelancers that I had previously hired. I made sure that my job posting looked similar to those for regular data extraction tasks and that I was not doing anything out of the ordinary that might make freelancers suspicious (e.g. paying a rate that was significantly lower or higher than the average, or making freelancers sign a release


form to participate). As far as I know, none of my freelancers suspected that they were participating in an experiment.

Introduction to Upwork

Upwork (formerly known as oDesk) had nine million registered freelancers as of May 2015, when the company last reported growth metrics. At this time, four million clients (employers) were registered on the site, three million jobs were posted annually, and over $1 billion


in transactions were taking place on Upwork every year. Upwork connects clients with freelancers who work in a wide variety of fields, from graphic design to electrical engineering. The most popular category is administrative support, which had more than 625,000 freelancers as of February 2015. Freelancers apply for jobs by submitting their Upwork profile, a bid for the job, and any other application materials requested by the client (typically a short statement about why they are interested in the job). After completing the job, which can vary in length from a few hours to over a year, the freelancer receives a reputation score from the client that reflects his or her performance in a variety of fields, including work quality, communication, and timeliness. When a freelancer has been a member of Upwork for at least three months and completed at least four jobs for three unique clients, these individual reputation scores begin to count towards an aggregate job success score, which reflects the percentage of a freelancer's jobs that resulted in a "great client experience". As Upwork's job success score formula is proprietary, the company does not reveal exactly how it is calculated to either freelancers or clients. However, Upwork's public guide on job success scores suggests that at a high level, freelancers and clients should view the score in the following way: (successful contract outcomes – negative contract outcomes) / total outcomes. The actual calculation is significantly more complex than this, and takes into account other factors such as the length of each relationship, the reputation of the client, and the number of relationships that end without any activity or payment.

Data Collection

I collected my data by creating an Upwork client account and hiring freelancers to work on a data extraction task with a wage of $3/hour. Each freelancer was randomly assigned to work under one of two incentive schemes. The first, Incentive Scheme A, was a simple hourly wage ($3/hour) with a bonus unrelated to performance ($1/hour), for a total hourly wage of $4/hour. The second, Incentive Scheme B, was a base wage of $3/hour with an opportunity to earn a $0.15/bio bonus for each bio that the freelancer processed accurately beyond the base rate of ten accurate bios per hour. For example, if a freelancer working under Incentive Scheme B processed 15 bios accurately in an hour, they would receive $3.75 for that hour ($3.00 base plus a $0.75 bonus).
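As a minimal illustration of the two payment schemes just described (a sketch with made-up function names, not code used in the study), hourly pay under each scheme can be computed as follows:

    # Sketch of the two payment schemes; function and variable names are illustrative.
    def pay_scheme_a(hours_worked):
        """Incentive Scheme A: $3/hour base plus a flat $1/hour bonus."""
        return (3.00 + 1.00) * hours_worked

    def pay_scheme_b(hours_worked, accurate_bios):
        """Incentive Scheme B: $3/hour base plus $0.15 per accurate bio
        beyond a base rate of ten accurate bios per hour."""
        base_pay = 3.00 * hours_worked
        bonus_bios = max(0, accurate_bios - 10 * hours_worked)
        return base_pay + 0.15 * bonus_bios

    # The worked example from the text: 15 accurate bios in one hour under Scheme B.
    print(pay_scheme_b(1, 15))  # 3.75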

I intended for the average hourly wage earned to be approximately $4/hour in both groups to prevent any differences in performance that could result from the freelancers being paid different amounts. The pilot experiment and my past experience with hiring on Upwork suggested that a $0.15/bio bonus with a ten-bio-per-hour base rate should have resulted in the average freelancer working under Incentive Scheme B receiving approximately $4/hour. However, I found that the freelancers in this experiment underperformed expectations and were paid an average bonus of $0.27/hour, making their average total wage $3.27/hour. I hired a total of 62 freelancers (31 in each incentive group), and 59 completed the task. The three freelancers who did not complete the task were working under Incentive Scheme B, so the final results are from 31 freelancers working under Incentive Scheme A and 28 freelancers working under Incentive Scheme B. Before beginning work, all of the freelancers spent half an hour in a training session to learn how to complete the task, and were paid a base rate of $3/hr during this time. After completing training, each freelancer worked for an hour and a half under their incentive scheme. The assigned task was pulling educational information (e.g. college degree, college name, year degree was received) out of biographies of hedge fund managers, and every freelancer received the same set of biographies to ensure that differences in performance could not be attributed to differences in the task. This task required very few skills other than basic reading comprehension and the ability to submit a Google form. Freelancers were not asked to do any further research on the hedge fund managers or consult any additional resources other than the biographies that were provided to them. All of the freelancers also received a detailed set of instructions and a list of common mistakes to avoid, which I compiled from errors that other freelancers made during the pilot experiment. I tracked freelancer performance by having them each use their own Google form to submit the biographical information. This form, along with Upwork's time tracker, allowed me to see how long it took each freelancer to process each set of biographies. The tracker told me how much time the freelancer logged to process a given set of files, and provided screenshots of the freelancer's screen every ten minutes. The Google form allowed me to see when the freelancer submitted each entry, down to the exact second of submission.

I also automated the process of checking each freelancer's submissions for accuracy by using an Excel formula to compare their submissions with an answer key. Accuracy was measured through two metrics: the number of bios processed accurately per hour, and the number of accuracy points a freelancer received out of the total. For a bio to count as "accurate," a freelancer must have filled in every field of the Google form correctly. I was concerned that this measure might yield extremely poor accuracy scores for freelancers who often made small mistakes on one field of the form (such as spelling the university name incorrectly) but had otherwise perfect submissions. Therefore, I calculated a second measure of a freelancer's accuracy score: the number of fields that he or she filled in correctly out of the total number of fields. For example, if a freelancer correctly identified the manager's university name and degree type but made a typo in the "year received" field, he or she still received two out of three accuracy points for that biography.
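The original checking was done with an Excel formula; the short Python sketch below reproduces the logic of the two accuracy metrics described above, with hypothetical field names standing in for the actual form fields:

    # Two accuracy metrics per bio; field names and example data are illustrative.
    def score_submission(submission, answer_key):
        """Return (is_fully_accurate, points_earned, points_possible) for one bio."""
        fields = ["university", "degree", "year_received"]  # hypothetical field names
        points_earned = sum(1 for f in fields if submission.get(f) == answer_key[f])
        points_possible = len(fields)
        return points_earned == points_possible, points_earned, points_possible

    # Example: a typo in the year still earns two of the three accuracy points.
    answer = {"university": "Columbia", "degree": "BA", "year_received": "1998"}
    typo = {"university": "Columbia", "degree": "BA", "year_received": "1989"}
    print(score_submission(typo, answer))  # (False, 2, 3)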

Do incentives matter?

After confirming that the two groups of freelancers are sufficiently similar in all measurable characteristics, I run several regressions to answer my first question: how do incentives affect performance in terms of both quantity and quality of work? I regress my outcome variables against the treatment dummy and my controls. In this specification, I(treatment) is a dummy variable that equals 0 if the freelancer is working under Incentive Scheme A (basic hourly wage) and equals 1 if the freelancer is working under Incentive Scheme B (hourly wage with the potential to earn a bonus). Bios represents the number of bios that the freelancer processed during the experiment (excluding bios processed during training), so β1, the coefficient on I(treatment), is indicative of the effect of the incentive on quantity of output. AccurateBios represents the number of bios processed with 100% accuracy, so β1 is a measure of the effect of the incentive on both quantity and quality of output. PercentCorrect is another measure of accuracy: it is calculated by taking the points that the freelancer received divided by the total number of points possible for the bios that the freelancer processed. Therefore, β1 is a measure of the effect of the incentive on quality of output.

Do different people respond to incentives differently?

I then try to answer my next question: do incentives affect different freelancers in different ways? Here, I quantify differences between freelancers as differences in observable characteristics pulled from each freelancer's profile (job success score, jobs worked, and preferred hourly wage). I add interaction variables between the incentive and the various observable characteristics to my regressions, and again regress these against my three outcome variables (bios, correct bios, and percent correct). I will use "Outcome Variable" to represent these three outcome variables from now on. I run an additional set of regressions for job success score, as Upwork provides guidelines regarding different categories of job success scores. According to Upwork's website, any score at or above 90% is "excellent", while a score at or below 75% could result in the freelancer struggling to win new clients. Therefore, I've bucketed the freelancers into three different bins by job success score: high (90-100%), medium (76-89%), and low (60-75%). None of the freelancers I hired had a job success score below 60% (Table 4.1). I then regress the outcome variables on interactions between these bins and the incentive, along with my control variables, to determine whether the effect of the incentive differs by bin. Here, I leave out the dummy for a medium job success score (JSS) under Incentive Scheme A so that it serves as the baseline.
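As a minimal sketch of how these specifications might be estimated in Python (assuming a data frame with one row per freelancer and hypothetical column names such as bios, treatment, jss, jobs_worked, and pref_wage; this is an illustration, not the author's code):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per freelancer with outcomes and profile controls.
    df = pd.read_csv("freelancers.csv")

    # Main specification: outcome on the treatment dummy plus profile controls.
    main = smf.ols("bios ~ treatment + jss + jobs_worked + pref_wage", data=df).fit()

    # Interaction specification: does the treatment effect vary with job success score?
    interact = smf.ols("bios ~ treatment * jss + jobs_worked + pref_wage", data=df).fit()

    # Binned specification using Upwork's categories; medium JSS serves as the baseline.
    df["jss_bin"] = pd.cut(df["jss"], bins=[60, 75, 89, 100],
                           labels=["low", "medium", "high"], include_lowest=True)
    binned = smf.ols(
        "bios ~ treatment * C(jss_bin, Treatment(reference='medium'))"
        " + jobs_worked + pref_wage",
        data=df,
    ).fit()

    print(main.summary())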

After running the regressions using the bins provided by Upwork, I reclassify the bins in an effort to have approximately equal numbers of freelancers in each bin. Using Upwork's classification of a high job success score, more than half of the freelancers fall in the high job success score category. Therefore, I suspect that the medium job success score group may not be an accurate baseline, as all of the freelancers in that group have job success scores below the median. The new bins are the following: high (95-100%), medium (87-94%), and low (60-86%) (Table 4.2). I then run the same regressions as above using the new bins in an attempt to compare the interactions between job success categories and the incentive using a more accurate baseline.

How do incentives influence payment?

Finally, I attempt to determine how the magnitude of payment differs between freelancers in the two incentive groups, in order to examine how the incentive influences payment. Table 4.3 below contains information regarding the total amount paid to freelancers in each incentive group, the cost per freelancer, and productivity statistics involving payment (bios processed per dollar and correct bios processed per dollar). As the table illustrates, the average freelancer working under Incentive Scheme A received a total of $1.14 more than the average freelancer working under Incentive Scheme B. As a result, the average freelancer working under Incentive Scheme B processed more bios (and more correct bios) per dollar spent. I regress each of three payment-related variables on the treatment dummy: hourly wage (the amount that the freelancer was actually paid, not their preferred hourly wage), bios processed per dollar spent, and correct bios per dollar spent. This determines whether these variables differ significantly between the incentive groups.
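A similarly hedged sketch of the payment metrics and group comparison (again with hypothetical column names, and illustrative rather than the author's code) might look like:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns: total_paid, hours_worked, bios, accurate_bios, treatment.
    df = pd.read_csv("freelancers.csv")
    df["hourly_wage"] = df["total_paid"] / df["hours_worked"]
    df["bios_per_dollar"] = df["bios"] / df["total_paid"]
    df["correct_bios_per_dollar"] = df["accurate_bios"] / df["total_paid"]

    # Test whether each payment metric differs between the two incentive groups.
    for outcome in ["hourly_wage", "bios_per_dollar", "correct_bios_per_dollar"]:
        fit = smf.ols(f"{outcome} ~ treatment", data=df).fit()
        print(outcome, fit.params["treatment"], fit.pvalues["treatment"])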

IV. Data

As I was able to conduct my own experiment, the data I have collected is relatively close to what I would consider the ideal dataset for studying how incentives and job success score interact to influence freelancer performance. In an ideal world, I could have controlled the outside factors in each freelancer's life, such as how many other jobs they held while completing my task, to ensure that any differences in performance could not be attributed to those factors. This became particularly relevant when flooding in Chennai, India in December 2015 caused several freelancers to lose Internet connection for over a week, which delayed their completion of the task. In addition, my sample size is relatively small (59 freelancers completed the experiment) due to budgetary limits. With a larger budget, I would have been able to hire significantly more freelancers, and an increased sample size would likely have allowed me to draw more definitive conclusions from my results. As previously mentioned, I was careful to ensure that all freelancers had the same experience during the hiring and training process to prevent any differences in performance that could be attributed to differences in onboarding. All freelancers saw the same job posting, which advertised a data extraction task that paid $3.00/hour. Freelancers were hired on a first-come, first-served basis between November 18, 2015 and December 1, 2015, with the exception of freelancers with no job success score, who were not hired. All freelancers received the same scripted message with instructions for the training period, as well as a link to the personal Google form they used to submit their work. After the 30-minute training period, I gave each freelancer feedback on their work. All of the freelancers received between two and four pieces of feedback (depending on the number of errors in the training submissions), taken from a repository of feedback generated based on common mistakes in the pilot version of the experiment. For example, a common piece of feedback was "Make sure not to leave any blank spaces before or after anything you type into a field—the software that checks your responses will mark that as incorrect." The end of each message varied by incentive group. Freelancers working under Incentive Scheme A were informed that their wage was being increased to $4/hour, and freelancers working under Incentive Scheme B were informed that they had the chance to earn a bonus and were given an example of how the bonus system worked. A total of 59 freelancers (out of the 62 we originally hired) completed the experiment. All three of the freelancers who quit were working under Incentive Scheme B—one quit before training began with no explanation, and two quit after completing training (one stopped responding to emails, and the other quit because he wanted a job where he could more quickly log a significant number of hours). Most freelancers completed both rounds of the experiment within a week, but a few freelancers were delayed for various reasons, including the flooding in India. The first freelancer completed the experiment on November 20, 2015, and the last freelancer completed the experiment on December 21, 2015. It is important to note that the largest share of freelancers live in India (27.59%), followed by Bangladesh (22.41%) and the Philippines (20.69%). The average number of hours worked by each freelancer prior to being hired for my task was 1976.33. Approximately 37% of the freelancers had worked more than 1000 hours on other Upwork jobs, and 56% had worked more than 500 hours on other Upwork jobs. Only 29% of the freelancers had worked fewer than 100 hours on Upwork, and nearly a fourth of these freelancers had only worked fixed-wage jobs before (and therefore had no record of hours worked). In addition, 73% of freelancers had previously worked at least 10 jobs on Upwork, and the average number of jobs worked before being hired for my task was 56.20. Therefore, the majority of the freelancers were relatively experienced, having worked hundreds (or even thousands) of hours on tens (or even hundreds) of jobs.

V. Results and Discussion

Do incentives matter?

First, I confirm that the randomization was successful by testing the difference in means between the groups for all measurable characteristics using standard t-tests (Table 5.1). I find that for all three of the measurable characteristics pulled from each freelancer's Upwork profile (job success score, jobs worked, and preferred hourly wage), the freelancers in Incentive Group A and Incentive Group B were not significantly different. Having confirmed that the controls do not differ significantly between the incentive groups, I regress the outcome variables on the incentive dummy alone (Table 5.2). I find that the incentive has a positive effect on all of the outcome variables—the number of bios completed increases by 1.3, the number of correct bios increases by 0.7, and percent correct increases by 4.7 percentage points—but none of these effects are statistically significant. This suggests that, overall, the incentive we provided did not motivate the freelancers to significantly improve their performance. Though this result may seem somewhat surprising, as previous studies suggest that monetary incentives are usually successful in motivating workers to increase the quantity (if not also the quality) of tasks completed, there are several reasons that could explain why our experiment yielded different results.
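For readers who want to see the mechanics, the randomization check and the treatment-only regressions described above can be written roughly as follows. The variable names are assumed for illustration and are not taken from the paper.

import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("freelancers.csv")  # hypothetical file: one row per freelancer
group_a = df[df["incentive_b"] == 0]  # Incentive Scheme A
group_b = df[df["incentive_b"] == 1]  # Incentive Scheme B

# Standard two-sample t-tests on the observable characteristics (Table 5.1 analogue)
for col in ["jss", "jobs_worked", "preferred_wage"]:
    t_stat, p_val = stats.ttest_ind(group_a[col], group_b[col])
    print(col, round(t_stat, 3), round(p_val, 3))

# Outcome regressions on the incentive dummy alone (Table 5.2 analogue);
# the coefficient on incentive_b is the estimated overall treatment effect
for outcome in ["bios", "correct_bios", "pct_correct"]:
    fit = smf.ols(f"{outcome} ~ incentive_b", data=df).fit()
    print(outcome, fit.params["incentive_b"], fit.pvalues["incentive_b"])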



A possible explanation is that the incentive had opposite effects on sub-groups of freelancers within the incentive group, and that these effects cancelled each other out. If this occurred, the overall effect might be zero even if certain groups of freelancers (e.g. those with particularly high or low reputation scores) had a significant response to the incentive. We will further examine this explanation when attempting to answer the question of whether or not the incentive affected different groups of freelancers differently. It is also possible that another incentive was at work—the desire to have a high job success score. The desire to boost job success scores may have affected our experiment's results, since freelancers may already have been exerting maximum effort to protect those scores. If that were the case, we would expect to see no improvement in performance when freelancers were offered an incentive, as the freelancers would already be operating at maximum effort and would have no capacity to improve.

How do incentives affect different types of freelancers differently?


Next, I use the observable characteristics gleaned from each freelancer's Upwork profile to determine whether the incentive affected different freelancers differently. I regress each of the outcome variables on the treatment dummy, the controls (job success score, jobs worked, and preferred hourly wage), and interactions between the treatment dummy and each of the controls (Table 5.3).
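The specification behind Table 5.3 is not written out in this excerpt; a plausible reconstruction, with illustrative notation, is:

\text{Outcome}_i = \beta_0 + \beta_1 \text{Incentive}_i + \beta_2 \text{JSS}_i + \beta_3 \text{Jobs}_i + \beta_4 \text{Wage}_i + \beta_5 (\text{Incentive}_i \times \text{JSS}_i) + \beta_6 (\text{Incentive}_i \times \text{Jobs}_i) + \beta_7 (\text{Incentive}_i \times \text{Wage}_i) + \varepsilon_i

where Outcome_i is, in turn, bios, correct bios, or percent correct for freelancer i; Incentive_i equals 1 under Incentive Scheme B; and JSS_i, Jobs_i, and Wage_i denote job success score, jobs worked, and preferred hourly wage.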

“It is also possible that another incentive was at work—the desire to have a high job success score.” I find that in this regression, three variables have a statistically significant effect on the number of bios completed—Job Success Score (p-value of 0.022), Incentive (p-value of 0.058), and the interaction between Job Success Score and Incentive (p-value of 0.015). Both Job Success Score and Incentive have a positive effect, and the interaction has a negative effect.
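Given the signs just reported, the implied marginal effect of the incentive on bios completed under the specification sketched above is (this algebra is mine, not the paper's):

\frac{\partial\, \text{Bios}_i}{\partial\, \text{Incentive}_i} = \beta_1 + \beta_5\, \text{JSS}_i

With \beta_1 > 0 and \beta_5 < 0, the incentive raises output for freelancers with sufficiently low job success scores and lowers it once \text{JSS}_i exceeds -\beta_1 / \beta_5, which is consistent with the cancellation story discussed earlier.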


None of the controls or interactions had a significant effect on any of the other outcome variables, though the interaction between the incentive and jobs completed has a relatively strong negative effect on the number of correct bios (a decrease of 35.51). Therefore, we can conclude that, of all the quantifiable characteristics we tested, only a freelancer's job success score has a significant effect on his or her response to an incentive. While Job Success Score and Incentive both have a positive effect on the number of bios completed, the interaction between the two has a significantly negative effect, which is somewhat puzzling. This suggests that offering freelancers an incentive has a large positive effect on freelancers with low job success scores, and an even larger negative effect on freelancers with high job success scores. This lends credence to my theory that the overall effect of the incentive is negligible because opposite effects on different subgroups of freelancers cancel each other out. However, it is still somewhat surprising that the incentive has a significant negative effect on bios completed for freelancers with high success scores. This could be because freelancers with high job success scores focus on the fact that they need to process the bios accurately to receive a bonus (and therefore their speed decreases significantly). It is also possible that the bonus is less valuable to freelancers with high job success scores. Because these freelancers likely have more opportunities to earn higher salaries in other jobs, they might prefer to multitask and do work for other jobs instead of working harder on an individual task to earn the bonus.

Regressions with Upwork Bins

I then run regressions using the dummy variables for the bins of freelancers generated from Upwork's definitions of high, medium, and low job success scores. I regress my outcome variables against my controls and an interaction variable between each bin and each incentive scheme (with the exception of Medium Job Success Score/Incentive Scheme A, which serves as my baseline).

I find that, compared to this baseline, the High Job Success Score/Incentive Scheme A group processed an average of 10 more bios, and the Medium Job Success Score/Incentive Scheme B and Low Job Success Score/Incentive Scheme B groups each processed an average of 11 more bios. The High Job Success Score/Incentive Scheme B and Low Job Success Score/Incentive Scheme A groups processed slightly fewer bios than the baseline (0.5 and 2 fewer, respectively). However, none of the differences between the bin/incentive combinations and the baseline is statistically significant. Compared to the baseline, all of the groups outperformed in terms of the number of correct bios processed. The High Job Success Score/Incentive Scheme A group and the Low Job Success Score/Incentive Scheme A group processed an average of 6.8 and 8 more correct bios, respectively, though these differences were not statistically significant. The Low Job Success Score/Incentive Scheme B group processed significantly more correct bios than the baseline (an average of 16.23 more, with a p-value of 0.003). All of the groups also outperformed the baseline in terms of percent correct, and all of the differences (with the exception of Medium Job Success Score/Incentive Scheme B) were statistically significant. In descending order, the average percent correct was higher than the baseline for Low Job Success Score/Incentive Scheme B, Low Job Success Score/Incentive Scheme A, High Job Success Score/Incentive Scheme B, and High Job Success Score/Incentive Scheme A (Table 5.4). The fact that the Low Job Success Score/Incentive Scheme B group performed significantly better than the baseline both in terms of correct bios and percent correct suggests that workers with low job success scores do respond to the incentive by improving the quality of their performance. We can conclude that, in this experiment, the incentive motivated workers in the Low Job Success Score group to outperform workers with higher job success scores by a statistically significant amount in terms of both the number of bios they processed correctly and the overall percent correct. This suggests that workers with low job success scores may have the most capacity for improvement when offered an incentive, a finding that contradicts Lazear's conclusion that high-ability workers have the most capacity to improve.
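The bin-by-scheme comparisons above correspond to a dummy-variable regression with one cell omitted as the baseline. A minimal sketch, assuming the same hypothetical column names as before and the Upwork cutoffs quoted earlier; it uses a standard main-effects-plus-interaction parameterization and is not necessarily the author's exact coding:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("freelancers.csv")  # hypothetical file; 'jss' measured in percent

# Upwork-style bins: low 60-75, medium 76-89, high 90-100
df["jss_bin"] = pd.cut(df["jss"], bins=[60, 75, 89, 100],
                       labels=["low", "medium", "high"], include_lowest=True)

# Medium job success score is the omitted (baseline) category;
# the bin-by-incentive interactions capture differences relative to the Medium/Scheme A cell
formula = ("bios ~ C(jss_bin, Treatment('medium')) * incentive_b "
           "+ jobs_worked + preferred_wage")
print(smf.ols(formula, data=df).fit().summary())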


All of the other groups (with the exception of Medium Job Success Score/Incentive Scheme B) outperformed the baseline in terms of the number of correct bios and percent correct. Therefore, this data could suggest that freelancers with a job success score in the medium range may underperform freelancers with higher or lower job success scores on both of these outcome variables, even when offered an incentive. However, it could also suggest that the Medium Job Success Score group doesn't serve as a true baseline, which is one of the reasons why I then re-categorize the freelancers into modified bins. I raise the minimum to qualify for the "High" bin to 95% from 90%, and also raise the minimum to qualify for the "Medium" bin to 87% from 76%. In comparison to the old bins, the new "High" and "Medium" bins cover a narrower distribution of job success scores, while the "Low" bin covers a much broader distribution.

Regressions with Modified Bins

When I run the same regressions using my modified bins (in an attempt to establish a more accurate baseline), I find that for each of the groups, the difference from the baseline in terms of number of bios processed is not statistically significant, though the High Job Success Score/Incentive Scheme A group processed an average of 9.2 more bios, and the High Job Success Score/Incentive Scheme B group processed an average of 4.1 fewer bios. Interestingly, the trend is flipped for the low job success score groups—the Low Job Success Score/Incentive Scheme A group processed an average of 6.8 fewer bios, while the Low Job Success Score/Incentive Scheme B group processed an average of 4.2 more bios. I find that the Low Job Success Score/Incentive Scheme B group still significantly outperforms the baseline for number of correct bios (by an average of 10.293, p-value of 0.013), and the High Job Success Score/Incentive Scheme A group also outperformed the baseline for number of correct bios (by an average of 8.363, p-value of 0.054). The High Job Success Score/Incentive Scheme B and Low Job Success Score/Incentive Scheme A groups also slightly outperform, by 2 and 2.9 correct bios, respectively, but these differences are not statistically significant. For all of the groups, the difference from the baseline in terms of the percentage of correct bios was not statistically significant.


“[W]orkers with low job success scores may have the most capacity for improvement when offered an incentive”

However, the High Job Success Score/Incentive Scheme B group outperformed by an average of 9.1 percentage points, and the Low Job Success Score/Incentive Scheme B group outperformed by an average of 14.7 percentage points. As the medium job success score groups no longer underperform all of the other groups in terms of percent correct, these modified bins may provide a more accurate baseline than the bins provided by Upwork (Table 5.5). These results seem to support the interpretation that most workers respond to the monetary incentive, but that the responses differ across subgroups of workers (and, in aggregate, cancel each other out). Workers with low job success scores seem to respond to the incentive by significantly increasing their output (as measured by the number of bios completed)—the coefficient on the interaction between the incentive and the group switches from negative to positive when a low job success score worker moves from Incentive Scheme A to Incentive Scheme B. However, the increase in quantity doesn't appear to come at the expense of quality. Workers in the Low Job Success Score/Incentive Scheme B group significantly outperform the baseline in terms of correct bios, while workers in the Low Job Success Score/Incentive Scheme A group only slightly outperform. The Low Job Success Score/Incentive Scheme B group also has the highest outperformance in terms of percent correct, though this outperformance is not statistically significant. Workers with high job success scores appear to respond to the incentive differently—it seems that they either focus too heavily on the quality aspect of the incentive (but are not very successful in improving their accuracy) or are simply less motivated by the incentive. Output (bios processed) declines with the introduction of the incentive: the coefficient on the interaction between the incentive and the group switches from positive to negative when a high job success score worker moves from Incentive Scheme A to Incentive Scheme B.


Unsurprisingly, the number of correct bios also appears to decrease—the High Job Success Score/Incentive Scheme A group significantly outperforms the baseline in terms of correct bios, but the High Job Success Score/Incentive Scheme B group only slightly outperforms. However, it doesn't appear that quality (as measured by percent correct) declines overall. Quality may even slightly increase, as the High Job Success Score/Incentive Scheme B group outperforms the baseline by 9.1 percentage points while the High Job Success Score/Incentive Scheme A group only outperforms the baseline by 3.3 percentage points, on average.

How do incentives influence payment?

I then regress the cost-related variables (hourly wage, bios processed per dollar spent, and correct bios per dollar spent) on the incentive dummy. I find that the freelancers working under Incentive Scheme B were paid an average of $0.76 less per hour than the freelancers working under Incentive Scheme A, a difference that is statistically significant (p-value less than 0.00001). There is also a statistically significant difference between the incentive groups in terms of bios processed per dollar—freelancers working under Incentive Scheme B processed an average of 1.78 more bios per dollar spent than freelancers working under Incentive Scheme A. There was not a significant difference between the groups in terms of correct bios processed per dollar (Table 5.6). As the average number of bios completed is approximately equal between the two incentive groups but the average worker in Incentive Group B earns a significantly lower wage per hour, it is unsurprising that the average worker in Incentive Group B was paid significantly less per bio processed. Freelancers in Incentive Group B processed an average of 1.80 correct bios per dollar spent, compared to 1.34 correct bios per dollar for freelancers in Incentive Group A, but this difference is not statistically significant.

VI. Conclusion

This paper examines a relatively new phenomenon—online labor marketplaces—and how employers in these marketplaces can best compensate online freelancers to optimize productivity.


Through this experiment, I aimed to answer three questions: "How do incentives influence performance, in terms of both quality and quantity?", "How do incentives affect different freelancers differently?", and "How do incentives influence payment?". I find that the incentive does not have a statistically significant effect on any of the outcome variables used to measure a freelancer's performance (including variables measuring both quantity and quality of work). However, I cannot conclude that financial incentives are ineffective in motivating freelancers to improve performance. Instead, I consider the possibility that this incentive was simply not large enough, or that it had opposite effects on different sub-groups of freelancers. It is worth noting that while the incentive didn't have a positive overall effect on performance, it did allow us to pay the freelancers working under Incentive Scheme B a significantly lower rate for each bio processed. I then attempted to determine whether the effect of the incentive differs based on a freelancer's observable characteristics. I speculate that the negative interaction between the incentive and job success score occurs because freelancers with high job success scores either focus more on accuracy (which results in a decrease in speed) or find the incentive less motivating because they have opportunities to earn higher wages in other jobs. I also find that freelancers with a low job success score underperformed the baseline in terms of correct bios when not offered an incentive, but outperformed the baseline when offered an incentive. This trend flips for freelancers with high job success scores—they outperformed the baseline when not offered an incentive, and underperformed when offered an incentive. Though my results suggest that financial incentives do affect performance for specific sub-groups of freelancers, more research needs to be done before attempting to generalize my conclusions to a broader range of workers. As a result of my limited budget, I was only able to hire around 60 freelancers, which is a fairly small sample size. To make a definitive conclusion about whether this type of financial incentive is effective in improving performance, research should be conducted with a larger number of freelancers. Future research building off this paper should focus on how different

amounts and types of financial incentives affect performance differently, answering the question of how to optimize freelancer performance while minimizing costs.




The Battle for Yield
Macroprudential Policy and Non-bank Finance: Implications for Commercial Real Estate Credit
Aidan T. Thornton
University of Pennsylvania

Thornton's analysis sheds light on the post-Financial Crisis state of U.S. real estate credit markets. His study is concerned not only with the current financial health of the U.S., but also with risk management, as it pertains to future financial crisis prevention. Focusing on the macroprudential approach to supervisory financial policy, Thornton has produced a comprehensive and updated literature review. He suggests that non-bank financial institutions will continue to be relevant credit sources, as they remain unhindered by the capital regulations that more traditional lenders encounter. During this time of immense political transition in the U.S., Thornton's research highlights a topic of special interest for future policy design. With possibilities of increased market efficiencies, his examination is both thorough and provocative. - M.V.C.

Context

In the aftermath of the 2007-2009 Global Financial Crisis (GFC), financial supervisory authorities have prioritized the adoption of a policy framework that considers not only institution-specific risk, but also the risk emanating from the financial system as a whole—which contributes to systemic instability. Internationally, implementation of policies that aim to achieve this has already begun. These policies are mostly focused on ensuring the capital adequacy and access to liquidity of the banking system. In response to this change in the regulatory regime, there has been meaningful research regarding the impact of this framework on the banking system and the broader economy. Understandably, given the importance of residential mortgage markets as a source of the GFC, much of this research has focused on risks emanating from housing finance systems. This focus has also translated into considerable examination of the effect that post-crisis regulatory reforms might have on housing finance systems.

Substantial research has also been conducted regarding institutions

“[T]here is a negligible amount of research that considers the nexus between newly implemented financial regulation, commercial real estate credit, and non-bank financial institutions.”

outside of the traditional banking system: their use as a means of circumventing regulation and their role in generating the conditions that fueled the GFC. However, there has been considerably less attention paid to commercial real estate credit markets and how they might be impacted by the implementation of this new policy framework. Additionally, relative to the entirety of research on non-bank financial institutions, very little focuses on their role in the commercial real estate credit markets. Finally, there is a negligible amount of research that considers the nexus between newly implemented financial regulation, commercial real estate credit, and non-bank financial institutions. This paper aims to contribute to this body of literature. The view of established and newly created supervisory authorities has been shifting toward a framework that integrates regulation of individual institutions' risk management practices with supervision intended to ensure that these institutions are adequately prepared to sustain systemic shocks. Perhaps the



“[T]his paper seeks to develop a framework for policy design that considers commercial real estate credit provision by non-bank financial institutions in an environment in which traditional banking institutions are subject to increased capital constraints.”

most significant recent development in macroprudential policy is the implementation of new capital adequacy standards established by the Basel III Capital Accord. These standards, which are scheduled to be fully implemented by 2019, have already had effects on the financial system, particularly in the form of modified capital planning and lending activities by banking institutions. However, the impact of increased capital regulation on the broader financial system, particularly within credit markets, has yet to be fully understood. Given that these reforms are motivated by concerns for systemic stability, it is prudent to evaluate their impact on markets that have been demonstrated to be systemically significant, namely real estate credit markets. Since the 1990s, there have been substantial structural developments in the commercial real estate credit market, many of which could have significant effects on the financial system and its stability. Within this market, the increasing importance of financial institutions outside of the traditional banking system also raises questions of systemic stability. This paper has several objectives. First, it will establish a basis for understanding the impact of macroprudential policy measures, particularly capital adequacy standards, on the traditional banking system. This paper also aims to add to the understanding of how these structural changes within the banking system influence banking institutions' activities in credit markets—specifically, credit provision to the commercial real estate sector.

The role of non-bank financial institutions within this context will also be considered. Finally, this paper seeks to develop a framework for policy design that considers commercial real estate credit provision by non-bank financial institutions in an environment in which traditional banking institutions are subject to increased capital constraints.

Relevant Topics

The newly-adopted global framework for supervisory authorities is considered "macroprudential" in an attempt to distinguish it from the largely firm-specific or "microprudential" framework that prevailed preceding the GFC. A macroprudential approach to financial supervision emphasizes the implementation of measures to ensure capital adequacy and access to liquidity of banking institutions. As such, these measures have ramifications for the financing structure and credit provision activities of these institutions. Financial institutions are active in a variety of credit markets. However, the market for commercial real estate credit can be distinguished from broader credit markets for several reasons. First, in commercial real estate the asset against which the credit is provided is inherently prone to price cyclicality. The cyclicality of the underlying assets has a considerable influence on the corresponding credit, producing complexity beyond the standard macroeconomic relationship between asset prices and credit availability. Second, most commercial real estate credit instruments are rarely fully amortizing, are subject to interest rate variability, and generally have a maturity of ten years—all of which contribute to a need for periodic refinancing. Finally, the relatively high value of commercial real estate assets and the associated credit means that commercial real estate credit investments can represent a significant portion of an institution's total assets. These distinct characteristics of commercial real estate credit demonstrate the importance of the asset class in any discussion of systemic stability. While it is well understood that macroprudential regulatory measures have a profound impact on the banking system, there are also indirect effects on non-bank financial institutions. These entities, which operate largely outside of the scope of traditional banking regulation, are of considerable importance to the financial system, particularly due to their activities in the market for commercial real estate credit.

As this market has demonstrated a unique relevance to systemic stability, the role of non-bank financial institutions within the macroprudential regulatory regime demands consideration.

Central Dynamic

In addition to developing a framework for understanding the nexus between macroprudential policy, commercial real estate credit, and the role of non-bank financial institutions, this paper makes several claims. First, I find that the implementation of macroprudential capital adequacy standards can produce a restriction of banking institutions' provision of commercial real estate credit. Given sustained growth in the commercial real estate market, this trend will result in unmet demand. Second, I find that, encouraged by credit and real estate procyclicality, non-bank financial institutions largely

“I find that the implementation of macroprudential capital adequacy standards can produce a restriction of banking institutions’ provision of commercial real estate credit.” unaffected by macroprudential measures are uniquely poised to meet this demand. Finally, I consider the implications of this dynamic for policy design. The rest of the paper is organized as follows: Section II reviews macroprudential policy measures, specifically capital adequacy standards and their impact on the financial system. Section III considers the market for commercial real estate credit, focusing specifically on historical developments and the market structure. Section IV discusses the emergence of non-bank financial institutions, their role in commercial real estate credit markets, and the implications for policy design. Section V provides concluding remarks.


Overview of Macroprudential Policy

A macroprudential approach to financial supervision is characterized by a consideration of risk factors emanating from the entire financial system—systemic risk. This represents a broader approach than microprudential regulation, which seeks only to ensure sufficient risk management of shocks that arise as a result of individual institutions' practices or failures. In contrast, macroprudential policy is founded upon the interconnectedness of financial institutions and represents a systemic view, meaning that a macroprudential approach to financial supervision considers general equilibrium concerns and is focused on protecting the entire financial system. Macroprudential policy is particularly focused on the potential for a systemic shock to weaken access to credit provided by the financial system (Yellen, 2011). Given the importance of banking institutions' activities to ensuring efficient financial intermediation and an adequate supply of credit in the economy, macroprudential measures have largely focused on the banking system. Thus, a macroprudential approach does not simply constitute enhanced financial oversight, but rather emphasizes a need for substantial preparation for inevitable systemic shocks. Broadly speaking, the implementation of macroprudential policy aims to produce more resilient banking systems by providing supervisory authorities with 'policy tools' designed to mitigate systemic risk (Calem et al., 2016). This is accomplished by attempting to minimize external costs produced by impairment to institutions' balance sheets as a result of a systemic shock (Hanson et al., 2011). Ensuring this degree of adequate protection against these shocks requires mechanisms by which supervisory authorities can compel banking institutions to modulate their structure and activities in accordance with stated policy goals. The design of new policy tools incorporates a consideration of cyclicality and instability in financial markets. As such, they are specifically aimed at "countering the procyclical nature of credit and leverage, leaning against the wind when systemic risk is accumulating" (Yellen, 2011, p. 8). The need for macroprudential policy, demonstrated by the instability of the banking system during the GFC, has resulted in the most powerful policy tools being increased standards for capital adequacy and access to liquidity for banking institutions. The broad scope of the new macroprudential policy regime raises questions

about the motivations for such stringent regulatory reform. The global implementation of macroprudential policy has largely been a response to the severity of the GFC. While significant financial regulation was in place in the prior period, the GFC demonstrated that the regulatory approaches in place—traditional stabilization policy and microprudential financial supervision—were clearly insufficient for the management of systemic risk. Yellen (2011) asserts the insufficiency of traditional stabilization policy in addressing systemic risk, observing that "monetary policy cannot be a primary instrument for systemic risk management. First, it has its own macroeconomic goals on which it must maintain a sharp focus. Second, it is too blunt an instrument for dealing with systemic risk" (p. 5). The infeasibility of cross-country monetary policy coordination, in addition to the increasingly global interconnectedness of financial markets and banking institutions, renders monetary policy an ineffective approach to managing systemic stability. The difficulty of cross-country coordination also impairs other regulatory approaches. Although broad-based risk management policies were already in place before the crisis, their focus was largely limited to supervision of individual banking institutions' risk management practices. Capital adequacy standards for individual banking institutions may have been sufficient with respect to endogenous risk emanating from their balance sheets, but clearly did not account for the need for sufficient high-quality capital to protect against exogenous risk from the broader financial system. As demonstrated during the GFC, much of this exogenous risk is derived from bubbles in specific asset markets, which, "in the presence of arbitrage, occur pro-cyclically, and the result is the production of systemic risk as liquidity providers increase their lending based on current above-market-fundamentals pricing of these assets" (Pavlov and Wachter, 2009, p. 453). This phenomenon demonstrates that over-inflated asset prices, particularly of real estate-related securities, impacted the overall bank balance sheet, driving banking institutions to further extend credit. As a result, most banking institutions appeared to be well-capitalized, but this appearance overlooked the systemic risk emanating from the inflation of asset prices beyond fundamental values. As with any change in the structure of regulation, the proposed macroprudential policy should be subjected to an evaluation of its relative costs and benefits.

On a microeconomic basis, banking institutions are distinct from other firms in their heavy reliance on short-term debt to meet their financing needs (Kashyap et al., 2008). From a systemic perspective, higher capitalization can be substantially beneficial, as the marginal cost of higher capital ratios is far outweighed by the total cost of financial crises (Schanz et al., 2011). However, it is difficult to determine the total effect of the costs of higher capital ratios on the overall financial system. In designing a macroprudential policy framework, it is therefore important to prioritize achieving a balance between ensuring systemic stability and maintaining an environment that encourages financial innovation, which ultimately results in greater financial efficiency.

Basel III Framework

“In designing a macroprudential policy framework, it is therefore important to prioritize achieving a balance between ensuring systemic stability and maintaining an environment that encourages financial innovation.”

Of the array of macroprudential policy measures available to supervisory authorities, standards for capital adequacy and access to liquidity are often the most discussed. This is understandable, as a change in required capital or liquidity can have a discernible impact on banking institutions' financing strategies and activities in the financial markets. With respect to capital adequacy standards, the regime under Basel III is intended to increase banking institutions' retention of high-quality capital to improve loss absorption capacity in the event of a systemic shock (Noss and Toffano, 2014). As regulatory capital is calculated relative to an institution's balance


sheet and its risk, requiring more and higher quality capital can influence banking institutions through two channels: changes in financing via liabilities and changes in the composition of assets. A modification of the composition of assets by a banking institution will take the form of a change in the instruments in which the institution invests. This adjustment would achieve a goal of the Basel III framework, which is to prevent accumulation of risk exposure in a banking institution's assets that would exceed its loss absorption capability (Calem et al., 2016). As such, any increase in capital requirements will have a profound impact on credit markets, in which the banking system plays a major role. Capital adequacy concerns, in some form, have always existed in fractional reserve banking systems. However, the degree to which these concerns are codified in policy has fluctuated over time. Moreover, the consideration of capital adequacy standards as a means of protecting against systemic risk is also a relatively new development in the scope of historical financial supervision. The broader systemic risk framework under Basel III is demonstrated in its design of capital adequacy standards, which seek to reduce the overall procyclicality of credit supply and the leverage of banking institutions (Yellen, 2011). In addition to controlling excessive leverage, Basel III policy tools are also designed to increase the resilience of the banking system against systemic shocks and ultimately to prevent negative effects on the real economy (Thompson, 2016). This framework represents a shift away from views on capital adequacy that are largely risk-invariant. The design of capital and liquidity requirements under Basel III is demonstrated through several new policy tools that establish standards for capital adequacy and access to liquidity in the banking system. To ensure that a banking institution's balance sheet composition provides sufficient access to liquidity, supervisory authorities have begun to utilize two key measures: the Liquidity Coverage Ratio (LCR), which ensures sufficient access to liquidity to meet short-term demand, and the Net Stable Funding Ratio (NSFR), which attempts to diminish excess reliance on unstable short-term liabilities to finance long-term assets. With respect to capital adequacy standards, the Basel III framework is far more nuanced than its predecessor. Aside from overall increases in required capital, these standards also include increased capital requirements for banking institutions

deemed 'systemically important,' as well as the ability of national supervisory authorities to impose additional 'countercyclical' capital requirements during an expansion of credit. The process for determining adequate levels of required capital under Basel III also represents a shift toward greater complexity, as the process for calculating risk exposure has been updated to account for recent financial innovations (Basel Committee on Banking Supervision, 2010/2011). This paper focuses primarily on Basel III's modifications of capital requirements for banking institutions (both in the form of requiring additional capital and the institution of a more robust system of risk-weighting assets).
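For reference, the two liquidity measures named above are defined as ratios that must be kept at or above 100%; the formulas below restate the standard Basel III definitions and are not drawn from the paper:

\text{LCR} = \frac{\text{stock of high-quality liquid assets (HQLA)}}{\text{total net cash outflows over the next 30 calendar days}} \ge 100\%

\text{NSFR} = \frac{\text{available stable funding (ASF)}}{\text{required stable funding (RSF)}} \ge 100\%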

“In the absence of regulatory pressures, banking institutions have little incentive to retain capital beyond the amount that is microeconomically deemed sufficient to ensure their viability.”

Impact of Capital Requirements on the Financial System

Although the entirety of the Basel III regulations will not be fully implemented until 2019, their impact on the financial system, particularly the banking system, has already been demonstrated. Banking institutions, relative to all other types of firms, finance their activities through high leverage. This naturally creates a friction between supervisory authorities and banking institutions with respect to the ideal level of capital adequacy as well as the precise definition of 'high quality capital.' In the absence of regulatory pressures, banking institutions have little incentive to retain capital beyond the amount that is microeconomically deemed sufficient to ensure their viability with respect to the overall balance sheet. This is due, in part, to the relatively high cost of equity financing for banking institutions.

53 tutions. Although debt financing is generally less costly than equity financing for most firms, this differential is particularly significant in the case of banks due to structural governance issues (perhaps because equity interests in banking institutions’ are subordinated to substantial amounts of debt). Demonstrating this, even during periods of market stability, investors demand a premium for supplying banking institutions with large amounts of equity capital due to a fear that an equity interest in a banking institutions is susceptible to poor governance practices (Kashyap et al., 2008, p. 434). Given these unique structural factors, it is expected that banking institutions rely heavily on debt to finance their balance sheets—a characteristic of the banking sector that has profound systemic effects. If equity financing is costlier than financing through short-term debt, banking institutions will take on high levels of leverage—while internalizing the benefits but externalizing some of the costs (Hanson et al., 2011). Naturally, the financing of banking institutions through excessive short-term debt can have potentially damaging systemic effects, particularly as a result of the significant role of banking institutions in financial intermediation within the economy. Banking institutions’ avoidance of significant capital retention is not purely theoretical. Even recently, willingness to go to great lengths to avoid retaining capital beyond the amount required by regulation has been demonstrated. This was particularly evident between 2003 and 2007, when, banking institutions engaged in securitization explicitly to avoid capital requirements, rather than simply to share risk with investors (Acharya and Richardson, 2009). Regulatory capital requirements attempt to combat banking institutions’ disincentive to retain ‘socially optimal’ levels of capital. As required regulatory capital generally exceeds the amount of capital that banking institutions deem economically efficient to hold in the absence of regulation, it is necessary to understand the impact that increased capital regulation has on the cost of financing. Noss and Toffano (2014) observe that, particularly during a credit boom, increases in required capital could lead to increases in the cost of financing for banking institutions. This dynamic can be understood in the context of the ‘irrelevance principle’ in the decision to finance a firm through debt or equity (Modigliani and Miller, 1958). Capital requirements result in the value of a banking institution being influenced by


The degree of impact on financing costs is dependent upon the composition of both assets and liabilities (Schanz et al., 2011). Banking institutions, therefore, are incentivized to alleviate upward pressure on financing costs by modifying the composition of their balance sheets. This can be achieved, in part, through changes in the banking institution's revenue-generating activities—particularly the provision of credit. Considering the impact of this dynamic on credit markets, Thompson (2016) anticipates that even highly capitalized banking institutions will be affected by Basel III, which will reduce the supply and increase the cost of bank-intermediated credit. However, as previously discussed, it is difficult and costly for banking institutions to raise large amounts of equity capital, even during periods of economic expansion. But there is another method by which banks can minimize the impact of required capital. Under the Basel III framework, capital requirements depend upon the risk exposure of a banking institution's asset portfolio. The significance of risk-weighting in this framework further supports the proposition that banking institutions will be incentivized to reduce upward pressure on financing costs by transmitting this pressure to the real economy by increasing the cost and decreasing the supply of credit provided (Noss and Toffano, 2014). The emphasis on risk-weighting of assets under Basel III also particularly incentivizes banking institutions to minimize their engagement in the provision of credit instruments that are subject to a high risk-weighting. Due to commercial real estate's designation as a 'leveraged asset class,' any change in costs and liquidity in credit markets would have significant effects. Specifically, the development and acquisition of commercial real estate almost always requires some form of debt financing, and is therefore uniquely reliant on credit markets, particularly bank-intermediated credit. The availability of credit "is a key factor in each stage of CRE activity. Builders, owners and users depend on the smooth functioning of the financial markets to bridge the gaps between expenses and income" (Meeks, 2008, p. 5). The importance of bank-intermediated credit to commercial real estate has also been historically documented. For example, Jackson et al. (1999) observed that, in the early 1990s, real estate was particularly affected by pressure on banking institutions' capital. The state of the commercial real estate market is therefore inextricably linked to the state of credit markets, and distortions in bank-intermediated credit to the commercial real estate sector specifically could have significant consequences.

As discussed earlier, banking institutions may attempt to reduce capital pressure by reducing exposures considered by supervisory authorities to be particularly risky—of which the commercial real estate sector is one. The methodology established by Basel III represents a shift from previous standards in the view of assets' relative risk and corresponding capital requirements. With respect to commercial real estate, high-volatility commercial real estate loans are assigned a 150% risk weight, relative to 100% for most commercial loans and 50% for residential mortgages (Thompson, 2016). This demonstrates a view within the Basel III framework that commercial real estate credit is highly risky and necessitates a great deal of capital reserves. The perception of the risk of commercial real estate credit investments by regulators was recently demonstrated at the national level in the United States, as authorities warned banking institutions to reduce exposure to commercial real estate or increase capital reserves (Thompson, 2016). This view of the risk of commercial real estate credit is not without basis. For example, Calem and LaCour-Little (2004) find that, for commercial real estate debt instruments, even small differences in the loan-to-value ratio can have significant impacts on the prudent amount of capital to be held. Additional risks emanate from the term structure of these credit instruments and the inherent cyclicality of the underlying commercial real estate assets. Increased capital requirements for commercial real estate, due to supervisory authorities' view of the sector's risk, increase the cost of financing investment for these assets. Barring a substantial increase in the yields for underlying commercial real estate assets, many banking institutions will be decreasingly incentivized to provide credit to the commercial real estate sector, reducing liquidity within these markets. Idzelis and Torres (2015) observe that increased capital requirements for financing of commercial real estate activities have led to reduced profitability in this sector, demonstrating that even the partial implementation of Basel III capital requirements has already had discernible effects on the participation of banking institutions in the commercial real estate credit markets.
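To make the risk-weighting channel concrete, here is an illustrative calculation; the loan size is hypothetical, and the 8% minimum total capital ratio plus the 2.5% capital conservation buffer are the standard Basel III parameters rather than figures from the paper:

\text{required capital} = \text{exposure} \times \text{risk weight} \times (8\% + 2.5\%)

For a $10 million high-volatility commercial real estate loan at a 150% risk weight, 10 \times 1.5 \times 0.105 \approx \$1.58 million of capital, versus 10 \times 1.0 \times 0.105 = \$1.05 million at a 100% weight. Holding yields fixed, the same loan ties up roughly 50% more capital, which is the disincentive to provide highly risk-weighted credit described above.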

While banking institutions continue to play an active role in this market, regulatory pressure has led to banking institutions representing 31% of commercial real estate lending in late 2016, relative to previous highs in excess of 40% (CBRE Research, 2016b). This trend is expected to continue, decreasing bank-intermediated financing of commercial real estate transactions and development (CBRE Research, 2016a). Reduced provision of credit by increasingly capital-constrained banking institutions, particularly to commercial real estate markets, will undoubtedly result in costlier commercial real estate credit. Increases in interest rates could further exacerbate this dynamic. As Peng (2010) demonstrates, changes in the credit spread are positively correlated with the commercial real estate risk premium. Increasingly costly commercial real estate credit may be seen as an opportunity for some market participants. CBRE Research (2015), observing that capital-constrained banking institutions are being prompted to increase costs and reduce supply of credit provided to commercial real estate, considers this to be an opportunity for non-bank financial institutions. These institutions, unfettered by Basel III capital regulations, particularly those previously reticent to increase credit provision to the commercial real estate market, could respond to the decreasing competitiveness of banking institutions and increasing yields on commercial real estate credit by beginning to engage in the market (or increasing their engagement).

Market Innovation in Commercial Real Estate Credit

The current market for commercial real estate credit in the United States emerged in the 1970s, and was accompanied by increased integration between the domestic market for commercial real estate credit and global capital markets. Beginning in the 1970s, financial institutions began to determine commercial real estate lending rates relative to bond yields, which represented a "very crude but effective method of creating a proxy rate for real estate. From this and other instances, capital market linkages to real estate were born" (McCoy, 2011, p. 47-8). Although commercial real estate markets, like all real estate markets, are still characterized by structural inefficiency, this integration represented access to the liquidity and effectuality of broader credit markets. This phenomenon of integration was furthered by the increasing role of securitization in commercial real estate markets. Between 1976 and 2003, financial markets experienced a rise in securitization as well as deregulation.


In response, large banking institutions increased their credit exposure to real estate markets, and to commercial real estate in particular (Zarutskie, 2013). Commercial mortgage-backed securities (CMBS) were the primary vehicle for private securitization of commercial real estate credit instruments (Antoniades, 2016). However, this shift in financing for commercial real estate created substantial consequences for global financial stability. Securitization of commercial real estate financing was accompanied by rapid price appreciation of commercial real estate assets beyond fundamentals in the period preceding the GFC (Levitin and Wachter, 2013). Although the GFC was largely blamed on the housing finance system in the United States, the role of commercial real estate raises important questions about the similarities and differences between the markets for residential and commercial real estate credit. Both markets are significant due to their relative size (although the market for residential real estate credit is much larger than the market for commercial real estate credit).

In addition, as both commercial and residential real estate are 'leveraged assets,' their heavy reliance on debt financing inextricably links them to the financial system. This link is made more intricate by the complexity of the underlying assets. Although microeconomic analysis of borrower decision making in residential real estate credit is relatively simple, commercial real estate credit presents far more complexity. Analysis of borrower decision making in commercial real estate is complicated by the sheer number of entities involved. Relative to residential real estate, "where a property most often has one equity player (the mortgage holder) and one debt player (the mortgage bank), commercial real estate properties are often financed by multiple debt and equity players" (Steering and Advisory Committee — Asset Price Dynamics Initiative, 2016, p. 23). The structure of commercial real estate credit instruments themselves also contributes to complexity.


This is demonstrated, in part, by the nature of defaults on these instruments. In contrast to "residential defaults that result from failure to maintain monthly mortgage payments, commercial real estate defaults are most often 'maturity defaults' in which the borrower is unable to borrow a large enough sum to pay off an expiring loan. The difference between the balloon payment owed on the maturing loan and the amount that can be borrowed today is the 'equity gap.' The equity gap is caused by two factors: falling valuations of commercial real estate and lack of liquidity" (Marsh, 2011, p. 35). Default structure presents evidence of considerable risk that is due, in part, to the perspective of credit providers in commercial real estate. The systemic significance of the market for commercial real estate credit is further fueled by the increasing interconnectedness of global credit markets (upon which commercial real estate relies for financing).
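A stylized example of the 'equity gap' defined in the quotation above (the numbers are hypothetical): suppose a maturing commercial mortgage has an $80 million balloon payment due, the property is currently valued at $100 million, and lenders will refinance only up to a 65% loan-to-value ratio. Then

\text{equity gap} = 80 - 0.65 \times 100 = \$15 \text{ million},

which the borrower must cover with fresh equity or other capital; a fall in appraised value or in the permitted loan-to-value ratio widens the gap mechanically, which is how falling valuations and reduced liquidity translate into maturity defaults.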


This trend has resulted in a view of commercial real estate as a "global asset" that, in the case of misalignment between prices and fundamentals, can produce systemic risk, demonstrating the significance of commercial real estate and its associated credit (Steering and Advisory Committee — Asset Price Dynamics Initiative, 2016). Financing of commercial real estate is a dynamic process, typically including "periods of extensive refinancing and appraisals, renovations and restructurings of facilities themselves and the paper associated with these during the life of commercial properties" (Lahm et al., 2011, p. 6). The dynamic nature of commercial real estate financing and reliance on credit has resulted in constant engagement between the commercial real estate markets and the financial system. While lenders for residential mortgage debt are highly concerned with the leverage of the collateral against which they are providing credit, commercial real estate lenders are found to emphasize commercial real estate assets' ability to meet operating expenditures and debt service requirements (Marsh, 2011). The lack of focus on the overall leverage of the collateral property is concerning, particularly as Kau et al. (2009) demonstrate that this leverage (in terms of the loan-to-value ratio) is a major driver of the default probability of commercial real estate debt. In spite of the smaller size of the commercial real estate credit market relative to its residential counterpart, the complexity of the instruments as well as the underlying real estate assets demonstrates that the market is of considerable systemic significance.

Current Structure of the Commercial Real Estate Credit Market

Although smaller than the market for residential real estate credit, the market for commercial real estate credit is substantial. At the end of 2015, the combined outstanding debt of commercial and multi-family real estate represented approximately $3.61 trillion (Board of Governors of the Federal Reserve System, 2015). In addition to traditional commercial banks and depository institutions, participants in the market for commercial real estate credit include "asset-backed securities (CMBS) issuers, life insurance companies, government-sponsored enterprises, governmental entities, finance companies, real estate investment trusts, pension funds, and others" (Harper and Everett, 2015, p. 1). The involvement of bank and non-bank financial institutions is crucial to the commercial real estate market due to the constant reliance on credit financing, which produces what Lahm et al. (2011) term a "mutuality of interests."

The involvement of bank and non-bank financial institutions is crucial to the commercial real estate market due to the constant reliance on credit financing, which produces what Lahm et al. (2011) term a “mutuality of interests.” The size of the commercial real estate credit market, in addition to its integration with the financial system, raises questions with respect to systemic risk. Within the traditional banking system, specifically, commercial real estate credit represents a sizeable portion of the total portfolio that is highly scrutinized by regulators (Woo, 2011). The demonstrated systemic significance of the commercial real estate credit market raises important questions about the design and risk of the particular instruments of which the market is composed.

“The disparity between constrained supply and sustained demand for commercial real estate credit will result in higher yields for institutions with deployable capital.”
Relative to residential real estate credit, commercial real estate credit instruments are often characterized by larger balances and complexity in the sources of repayment, and they are very rarely structured to be fully amortizing (Levitin and Wachter, 2013). These factors contribute to the perceived risk of the instruments. However, investor perception of risk is also influenced by the underlying commercial real estate assets, the prices of which have demonstrated historical volatility (Igan and Pinheiro, 2009). This risk has also been demonstrated in the categorization of commercial real estate credit as a highly risk-weighted asset class under the new framework for capital regulation established in Basel III.
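As a rough illustration of why a high risk weight constrains bank lending, the sketch below applies a simplified, Basel-style capital calculation to two equal-sized loans; the risk weights, minimum capital ratio, and target return on equity are hypothetical placeholders rather than actual Basel III parameters.

```python
# Simplified, illustrative risk-weighted capital calculation.
# The risk weights, the 8% minimum ratio, and the 12% target ROE are
# assumptions chosen only for illustration.

def required_capital(exposure, risk_weight, min_capital_ratio=0.08):
    """Capital a bank must hold against a loan under a Basel-style rule."""
    return exposure * risk_weight * min_capital_ratio

exposure = 100.0  # $100m loan
for name, risk_weight in [("residential mortgage", 0.50), ("commercial real estate", 1.50)]:
    capital = required_capital(exposure, risk_weight)
    # Extra spread needed just to pay a 12% return on the capital tied up:
    extra_spread = 0.12 * capital / exposure
    print(f"{name:>22}: capital ${capital:.1f}m, implied extra spread {extra_spread:.2%}")
```

Under these assumptions the commercial real estate loan ties up three times as much capital as the residential loan, which is one stylized way to see why constrained banks demand higher yields on, or withdraw from, this asset class.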


The market for commercial real estate is also currently undergoing a shift in the structure of the supply of credit due to capital constraints on banking institutions. However, sustained demand for commercial real estate has continued, as “sales of commercial properties excluding hotels in 2015 surpassed 2007 volumes, which drove commercial real estate loan volume to a near-record total of $504 billion” (Bennett and Cacciapaglia, 2016b, p. 2). Naturally, this demand for commercial real estate assets has contributed to a great deal of demand for credit financing. This dislocation between supply and demand trends has led to a prediction that disparities in commercial real estate financing could emerge (Commercial Real Estate Finance Council, 2015). Indeed, in some segments of commercial real estate credit markets, financing disparities have already appeared, placing upward pressure on required yields. In the market for securitized products, Bisbey (2016) observes dramatic increases in risk premiums for commercial mortgage bonds. Additionally, in the CMBS market, a further tightening of commercial real estate credit may be set to occur. A large volume of CMBS characterized by low underwriting standards was expected to mature between 2016 and 2017, with approximately $92 billion expected to come to maturity in 2017 alone (Mooney, 2016). The disparity between constrained supply and sustained demand for commercial real estate credit will result in higher yields for institutions with deployable capital. Financial institutions outside of the scope of traditional banking regulation, unfettered by capital regulation, are well poised to take advantage of this opportunity.
Emergence of Non-bank Financial Institutions
Many have referred to non-bank financial institutions as operating in the “shadow finance” or “shadow banking” system, an unnecessarily pejorative moniker that has no precise definition, academic or otherwise. However, there is agreement that it refers to a system of entities that perform some or all of the core functions of traditional banks while remaining unencumbered by the regulatory oversight to which the traditional banking system is subjected. Examples of these institutions include hedge funds, private equity firms and other institutions that do not finance operations through deposits (Thomas, 2013). Similar to traditional banks’ demand deposit-funded credit intermediation, non-bank financial institutions engage in the liquidity and maturity transformation that banking institutions undertake, financed through non-deposit short-term liabilities (Unger, 2016).


Paralleling the manner in which individual institutions in the banking system are connected through interbank lending or other short-term financing, non-bank financial institutions are connected via vertically-integrated intermediation, financing activities largely through securitization (Adrian et al., 2010). It is important to note that these non-bank financial institutions operate as an interconnected network outside of the scope of traditional banking supervision. Although non-bank financial institutions replicate many of the functions of traditional banks, they do so largely outside of the scope of most banking regulations, including standards for capital adequacy and liquidity access. This allows these institutions to take on the risks of traditional banking functions without being required to retain additional capital (Meeks et al., 2014). The process of credit securitization, which is unencumbered by traditional banking supervision, presents a prime example of this particular practice of commercial real estate credit provision. England (2011) affirms this by explaining that, preceding the GFC, the level of mortgage securitization could be considered a measure of the involvement of non-bank financial institutions in the mortgage market. It is important to note, however, that non-bank financial institutions are not entirely distinct from the traditional banking system. In addition to direct transactions with non-bank financial institutions, institutions within the banking system often engage in off-balance sheet financial intermediation through entities such as special purpose vehicles (SPVs). Of the market activities in which non-bank financial institutions engage, liquidity provision in the credit markets is one of the most concerning to supervisory authorities utilizing a macroprudential approach. This is largely due to the inherent risk (both transaction-specific and systemic) associated with credit exposure. Much of this risk is dependent on characteristics specific to a certain instrument as well as the market for those instruments. Given the uniquely risky properties of the market for commercial real estate credit, it is understandable that the involvement of non-bank financial institutions in this market would be of concern to supervisory authorities. The emergence of institutions that perform financial intermediation while unencumbered by banking regulation is considered by many to be an inevitable consequence of the incentive for regulatory avoidance. Yellen (2011) recognizes a constant incentive to engage in risky activities outside of the scope of supervision. The growing significance of non-bank financial institutions is perhaps best exemplified by

the rise of private-label securitization in the period preceding the GFC. The securitization process “allowed banks to transfer these risks from their balance sheets to the broader capital market, including pension funds, hedge funds, mutual funds, insurance companies and foreign-based institutions” (Acharya and Richardson, 2009, p. 199). This also demonstrates the significant systemic impact of engagement between the traditional banking system and non-bank financial institutions. Paradoxically, it may have been traditional banks’ engagement with non-bank financial institutions that contributed to

“Paradoxically, it may have been traditional banks’ engagement with non-bank financial institutions that contributed to the ability of non-bank financial institutions to more competitively engage in financial intermediation.”
the ability of non-bank financial institutions to more competitively engage in financial intermediation. Thomas (2013) demonstrates that non-bank financial institutions’ activities have accounted for a large share of the decline in traditional bank-intermediated credit. Although there is demonstrable competition between banking institutions and non-bank financial institutions, the rising influence of non-bank financial institutions has broadly contributed to the growth of the overall financial sector as a proportion of national income, a process Nersisyan and Wray (2010) term the ‘financialization’ of the economy. This increased exposure can contribute to a greater degree of economic instability. The ability of non-bank financial institutions to compete for market share with traditional banks is highly dependent upon overcoming the competitive advantage of banking institutions, given their access to cheap financing through

insured demand deposits, discounted lending through the interbank market and access to liquidity via the Federal Reserve as a lender of last resort. In spite of this considerable competitive advantage, non-bank financial institutions are still able to compete through financial innovation, such as accessing “sources of inexpensive funding for credit by converting opaque, risky, long-term assets into money-like and seemingly riskless short-term liabilities” (Adrian et al., 2010, p. 2). As such, the trend of ‘financialization’ appears set to continue. Schwarcz (2013) estimates that credit provided by non-bank financial institutions effectively rivals, if not exceeds, credit provided by banking institutions. The growth of non-bank financial institutions relative to the banking system raises questions of the advantages and hindrances of engaging in financial intermediation outside of the scope of traditional banking regulation. In the context of increasing capital adequacy standards for traditional banks, it is noteworthy that non-bank financial institutions are able to finance themselves with higher leverage than traditional banking institutions (Meeks et al., 2014). As non-bank financial institutions do not enjoy some of the competitive advantages of traditional banking institutions (deposit insurance, interbank lending and access to liquidity via central banks), this also means that they lack some support mechanisms to mitigate the risk of a run on their liabilities—a risk shared with traditional banking institutions (Unger, 2016). This vulnerability contributes to systemic stability concerns.
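A stylized calculation helps show why the ability to operate with higher leverage is such a competitive advantage. The asset yield, funding cost, and leverage ratios below are hypothetical assumptions chosen for illustration, not estimates from the literature cited above.

```python
# Illustrative return-on-equity comparison under different leverage, holding
# the asset yield and funding cost fixed. All numbers are hypothetical.

def return_on_equity(asset_yield, funding_cost, leverage):
    """ROE for an intermediary funding (leverage - 1) of each unit of assets with debt."""
    return asset_yield * leverage - funding_cost * (leverage - 1)

asset_yield = 0.05    # 5% yield on a commercial real estate credit portfolio
funding_cost = 0.03   # 3% cost of short-term liabilities

for label, leverage in [("capital-regulated bank", 10), ("non-bank lender", 25)]:
    roe = return_on_equity(asset_yield, funding_cost, leverage)
    print(f"{label:>22}: leverage {leverage}x -> ROE {roe:.1%}")
    # The same leverage, of course, amplifies losses and run risk symmetrically.
```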


Non-bank Financial Institutions and Commercial Real Estate Credit
Non-bank financial institutions, with their significant capital flexibility, would be well-suited to tolerating the risk that stems from the high leverage, cash flow instability and asset price cyclicality endemic to the commercial real estate asset class. In addition, the potential to offload risk through off-balance sheet transactions and securitization would increase the risk-adjusted return of commercial real estate credit investments. Non-bank financial institutions already play a significant role in the market for commercial real estate credit, particularly in the United States, where they have significantly decreased financing costs (CBRE Research, 2015). Non-bank financial institutions in commercial real estate credit, which include debt funds, REITs, and other private high-yield sources of capital, are also more likely to invest in value-add and opportunistic commercial real estate debt instruments (CRE Finance Council, 2015). Activity by non-bank financial institutions has also been shown to increase in tandem with regulatory capital requirements. Bedendo and Bruno (2010), in an evaluation of U.S. commercial banks’ credit risk transfer strategies, demonstrate that institutions engaged in real estate lending, when faced with liquidity or capital constraints, are more likely to engage with non-bank financial institutions by off-loading credit risk through securitization. This propensity indicates that future conservatism in bank-intermediated commercial real estate credit would shift financing demands to non-bank financial institutions. The reticence of banking institutions to engage in the markets for commercial real estate has already resulted in a reduced supply of credit. Sustained demand for capital has already shifted to non-bank financial institutions; as of 2016, hedge funds held approximately 40% of subordinate CMBS positions (Murray and Clarke, 2016). The structural shift towards commercial real estate credit provided by non-bank financial institutions was also demonstrated by Idzelis and Torres (2015), as “U.S. private funds that target debt investments in commercial real estate raised a record $14.2 billion [in 2014], a 67 percent jump from 2013 and up from just $1.7 billion in 2010” (p. 1). However, rather than simply wresting away market share from the traditional banking system, the increasingly important role of non-bank financial institutions in commercial real estate credit has actually contributed to increasing integration between the two. Although the newly implemented regime for capital regulation attempts to manage risk exogenous to the balance sheet, monitoring this risk is especially difficult. Banking institutions were able to minimize the impact of capital regulation by engaging with non-bank financial institutions through the pursuit of activities external to the regulated balance sheet even before the implementation of Basel III capital requirements (Thomas, 2013). In the period preceding the GFC, banking institutions’ utilization of off-balance sheet structured investment vehicles (SIVs) “increased supply of mortgage financing for housing, commercial real estate lending, and consumer lending” (Palley, 2012, p. 64).

Given this, one can expect that current and future increases in capital requirements may further incentivize banking institutions to become more integrated with non-bank financial institutions through off-balance sheet methods to minimize upward pressure on financing costs. In addition to increased complexity in the credit risk exposure of banking institutions, formal engagement with non-bank financial institutions has also increased. In 2014, one of the highest-growth categories of banking institutions’ credit was direct lending to non-bank financial institutions (Idzelis and Torres, 2015). During that year, the change in the volume of this lending represented an increase of “36% or $47.3 billion” (von Jena, 2015, p. 1). Increasing integration between non-bank financial institutions and banking institutions furthers concerns for systemic stability. Direct and indirect exposure of banking institutions to non-bank financial institutions can lead to greater vulnerability of the economy to systemic shocks (Meeks et al., 2014). The risk of systemic shocks as a result of highly-leveraged non-bank financial institutions is also dependent upon the composition of these institutions’ credit investments. With respect to commercial real estate credit instruments, Igan and Pinheiro (2009) demonstrate that institutions “with high loan-deposit ratios and large share of real estate loans in their lending activities are more likely to be among the most vulnerable” (p. 14) to systemically-based shocks. Greater interconnectedness between non-bank financial institutions and banking institutions, in addition to increased leverage of non-bank financial institutions, raises several important concerns regarding the design of macroprudential policy.
Implications for Policy Design
Non-bank financial institutions operate, by definition, outside of the scope of traditional banking regulation. Supervision of the banking system aims to mitigate the systemic impact of ‘responsibility failure’—a form of market failure that occurs when a firm successfully externalizes the risk of an activity while still benefitting. Responsibility failures and their broader impact are well characterized by Schwarcz (2013); they could include “(i) a firm profiting by issuing short-term debt to fund long-term projects, thereby taking a liquidity risk which could cause systemic and other consequences if the firm defaults on repaying its maturing short-term debt; and (ii) the limited liability of investors who manage a firm, making it more likely that they will cause the firm to take outsized risks, hoping for outsized gains” (p. 22).

The implementation of the new macroprudential regime has resulted in far less risk of responsibility failure from traditional banking institutions, as they have “been substantively regulated to maintain certain levels of financial responsibility” (p. 27-8). However, the increasing importance of non-bank financial institutions illuminates potential issues in this approach. A macroprudential approach to financial supervision, particularly capital adequacy standards, relies heavily on the ability of supervisory authorities to monitor risk exposure across the entirety of the financial system. However, as Calem and LaCour-Little (2004) demonstrate, engagement in regulatory capital arbitrage by traditional banks and increased financial intermediation by non-bank financial institutions are problematic for macroprudential policy implementation, as they obfuscate regulators’ view of the degree of capital adequacy within the financial system. Financial intermediation outside of the scope of traditional supervision, specifically off-balance sheet activities by traditional banking institutions, creates ‘indirect’ risk exposure that is difficult to measure, but “may turn out to be as debilitating as direct exposure. For example, if a bank has lent heavily to non-bank financial intermediaries such as finance companies that engage in real estate lending, it may be taking on substantial additional exposure to the real estate” (Herring and Wachter, 1999, p. 22). Increases in required capital that are measured largely with respect to direct exposures are inefficient in a regulatory framework that attempts to mitigate the effect of systemic risks. Preceding the Global Financial Crisis (GFC), the risks of indirect exposures, particularly risks emanating from “real estate exposure and its coverage through usual capital requirements [had] not given any early warning signs, yet it was the exposures concentrated in off-balance sheet items that triggered problems” (Igan and Pinheiro, 2009, p. 4). Indirect exposures, as a result of the rise of non-bank financial institutions, are concerning because of their lack of transparency. Much of this stems from a concern related to banking institutions and non-bank financial institutions, namely that “fire-sale risk associated with excessive short-term funding comes from not just insured depositories, but rather, any financial intermediary whose combination of asset choice and financing structure may exacerbate a systemic fire-sale problem” (Hanson et al., 2011, p. 13).
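The indirect-exposure point can be made concrete with a toy balance-sheet calculation that adds a bank’s direct commercial real estate loans to the real estate share of its lending to non-bank intermediaries; all of the figures, including the pass-through share, are hypothetical rather than drawn from the sources cited above.

```python
# Toy calculation of a bank's direct vs. total commercial real estate (CRE) exposure
# when part of its lending to non-bank intermediaries is itself deployed into CRE.
# All figures are hypothetical.

direct_cre_loans = 120.0     # $120m of CRE loans held on the bank's own balance sheet
loans_to_nonbanks = 200.0    # $200m lent to finance companies, debt funds, etc.
nonbank_cre_share = 0.60     # assumed share of that lending re-deployed into CRE

indirect_cre_exposure = loans_to_nonbanks * nonbank_cre_share
total_cre_exposure = direct_cre_loans + indirect_cre_exposure

print(f"Direct CRE exposure:   ${direct_cre_loans:.0f}m")
print(f"Indirect CRE exposure: ${indirect_cre_exposure:.0f}m")  # invisible to a purely direct measure
print(f"Total CRE exposure:    ${total_cre_exposure:.0f}m")     # what a systemic view should capture
```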


The mounting role of non-bank financial institutions is a clear indication of the insufficiency of traditional macroprudential regulatory measures, which aim to control systemic risk exposures. The increasingly integrated global financial system has effects on not only the interaction between credit and

“[T]his dynamic presents the potential for a long-term trend in which non-bank financial institutions are considerably more active in the commercial real estate credit market.”
commercial real estate markets, but also the interaction between these markets across countries. Madam et al. (2013) demonstrate this in an evaluation of the negative impact that the GFC had on the commercial real estate sector in India. This effect was also recognized well before the GFC. Peek and Rosengren (1997) show this by connecting the decline of Japanese commercial real estate prices during the 1990s with the demonstrable impact that this exogenous loan supply shock had on real economic activity in the United States. Clearly, the nature of commercial real estate credit specifically poses distinct implications for risk management. Assets within the commercial real estate market are characterized by distinct leverage risk, as they are often “highly leveraged. Real estate developers usually operate with a minimum of capital in order to shift as much risk as possible to the lender. Banks generally try to protect themselves by requiring low loan-to-value ratios, guarantees, takeout commitments for longer-term financing, and strict loan covenants that will protect them against risky behavior by the developer after the loan is made. But when real estate markets become overheated, underwriting standards deteriorate”

(Herring and Wachter, 1999, p. 23). The deterioration of underwriting standards can also be fueled by the increasing complexity of credit instruments and the assets against which they are lent. This complexity contributes to systemic instability, particularly when considered in tandem with the inherent cyclicality of the market for underlying commercial real estate assets. This is also relevant to the interaction between commercial real estate and credit markets, as asset price inflation is typically associated with an underpricing of credit risk (Pavlov and Wachter, 2009). The inherent risk of commercial real estate credit, combined with the increasing difficulty of monitoring exposures through the participation of the shadow finance system, raises important concerns about the ability of macroprudential capital regulation to achieve its stated goal of supervising and regulating systemic risks within the financial markets. As previously discussed, it is crucial that the design of macroprudential policy manage systemic risk without overly encumbering the environment for financial innovation. Non-bank financial institutions are crucially important to the greater efficiency of financial intermediation. Systemic concerns only arise when the costs of risk-taking are highly externalized. On an individual level, risk-taking by non-bank financial institutions is less concerning because the firms are often financed by more risk-tolerant private capital, as opposed to demand deposits. Additionally, non-bank financial institutions are far less consolidated than banking institutions. Earlier macroprudential policy frameworks have understandably focused on the banking system because of the important role that banking institutions have within the real economy. As such, any design of macroprudential policy that aims to diminish negative externalities stemming from non-bank financial institutions should focus on the exposure of banking institutions to non-bank financial institutions. The reasons for this are twofold: first, highly consolidated banking institutions contribute more to systemic risk because of the potential implications for financial markets in the event of their failure. Second, traditional banking institutions take advantage of many ‘socially-provided’ benefits that mitigate their risk (deposit insurance; access to the central bank as a lender of last resort; discounted interbank lending; potential assistance from the U.S. Treasury Department). This establishes an explicit public interest in reducing the ‘social cost’ of these

benefits, which can be achieved through supervision. As the current supervisory regime relies on measurement of the risk exposure of institutions, macroprudential capital adequacy standards should therefore be designed to more effectively measure systemic risk exogenous to banking institutions. Although the framework established by Basel III does attempt to achieve this through more effective measurement of counterparty risk, further progress in the development of more effective risk measurement methodologies is needed. Improvement of these methodologies requires greater involvement and collaboration between banking institutions, non-bank financial institutions, supervisory authorities and academics.
Conclusion
This paper has developed a framework for understanding the nexus between macroprudential policy, commercial real estate credit and the role of institutions outside of the traditional banking system. Within this framework, this paper has demonstrated that the implementation of macroprudential capital and liquidity requirements may constrain the provision of commercial real estate credit by banking institutions. This development, when combined with sustained demand for commercial real estate assets and accompanying credit financing, will almost certainly result in a disparity between supply and demand for commercial real estate credit. Encouraged by this opportunity, as well as the procyclical nature of real estate and credit markets, non-bank financial institutions will be able to meet this demand, as they remain relatively unfettered by capital adequacy standards. While uncertainties in the upcoming ‘Wall of Maturities’ in the commercial mortgage-backed securities market may affect the timing of this shift, this dynamic presents the potential for a long-term trend in which non-bank financial institutions are considerably more active in the commercial real estate credit market. Given these institutions’ roles within the broader financial system, there is potential for an increase in systemic risk exposure that the current regulatory regime is inadequately equipped to manage. However, future macroprudential policy should be designed not to unnecessarily hinder non-bank financial institutions, but rather to develop more effective methodologies for measuring banking institutions’ exposure to exogenous risk.




A Cure for the ACA
Mathieu Sabbagh

Columbia University
The healthcare debate in the United States is more alive than ever, even seven years after the passage of the Affordable Care Act. Mathieu Sabbagh’s article is poignant in that it reminds us of the false dichotomy between left and right upon which the debate is usually framed. Recognizing merits in both intervention and deregulation, he attempts to give a non-partisan assessment of the law, while acknowledging potential biases stemming from his libertarian ideology. He then explores alternative health care systems around the world and compares their goals and outcomes with those of the ACA. His proposal to model healthcare after the German Bismarck system, as a positive compromise between private and public cooperation, refreshingly upends the expectations of the healthcare debate. Whether or not the ACA is repealed in the near future, it is undeniable that further reforms are needed in this sector. Contributions like Mathieu’s remind us that there might be better policies out there worthy of consideration. -G.C.J.
“It will be repealed and replaced and we’ll know […] that’s what I do, I do a good job.” Such were the words President-Elect Donald Trump spoke during a 60 Minutes interview on November 13th. Elected on a platform which promised to repeal the Affordable Care Act (“ACA”), Trump benefited from a wave of right-wing populism aimed primarily at the bureaucracy of the “Washington Elite.” To that end, Obamacare came to embody all that Trump vowed to fight in his mission to “Drain the Swamp”: a government program which sought to benefit public welfare through compulsory payments but failed to live up to its initial promises of widespread prosperity. Even worse for its proponents, Obamacare premiums rose in nearly every state this past October, continuing a trend throughout the 44th President’s two terms that has seen coverage costs rise by nearly 50% since 2008. Though hundreds of analyses have already been undertaken as to why Obamacare has fallen short of its initial promises, the main overarching argument is the following: forcing more people to purchase healthcare (thus artificially raising demand) while keeping supply essentially stable at every price level (supply which can only be increased through large, hands-on government programs that are unlikely to be approved by Congress) will inevitably lead to a higher equilibrium price.
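As a rough sketch of that argument, the example below shifts a linear demand curve outward while holding a linear supply curve fixed and solves for the new equilibrium price; the coefficients are purely illustrative and carry no empirical content.

```python
# Stylized supply/demand illustration: an outward demand shift against an
# unchanged supply curve raises the equilibrium price. All coefficients are
# hypothetical.

def equilibrium(a, b, c, d):
    """Solve Qd = a - b*P against Qs = c + d*P for the market-clearing price and quantity."""
    price = (a - c) / (b + d)
    return price, c + d * price

a, b = 100.0, 2.0   # demand intercept and slope
c, d = 10.0, 1.0    # supply intercept and slope (supply left unchanged throughout)

p0, q0 = equilibrium(a, b, c, d)
p1, q1 = equilibrium(a + 20.0, b, c, d)   # mandate-style outward shift in demand

print(f"Before: P = {p0:.1f}, Q = {q0:.1f}")
print(f"After:  P = {p1:.1f}, Q = {q1:.1f}")  # price rises; quantity rises only along the fixed supply curve
```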

Yet, for all this brouhaha, I am not here to talk about Obamacare specifically so much as healthcare provision in general. First of all, I will not deny that I am a classical liberal, and that I shudder at the thought of said large, hands-on government programs. My rule of thumb is that money is best spent by those to whom it shall be of service. Simply put, an invisible bureaucracy in Washington will never be able to buy me better insurance than I would have myself. But nor will I deny that money is often lacking for the poorest in America, even for such basic services as healthcare. Even today, nearly one out of every ten Americans remains uninsured. Though I cannot in good conscience state that we as individuals are entitled to another’s service, I do believe that it is the duty of an industrialized nation to ensure its people’s health. As such, I see no wrong in pursuing a government mandate to ensure universal healthcare coverage. Unfortunately, there are many who hold extreme views with regard to healthcare coverage. On one hand, even minor attempts to establish an insurance mandate, like the ACA, are decried as


“socialism” or “tyranny” by an increasingly right-wing Republican base. On the other hand, support for a single-payer system like that of Great Britain is ever-growing within a Democratic Party that nearly nominated a self-described democratic socialist in 2016. Yet as a man of many nations and tongues, I have come to encounter systems that fall in between this divide, mandates that promote the competition and innovation inherent to a market system while ensuring total welfare akin to many public programs. While the ACA was still being pitched to Congress, The Physicians for a National Health Program, an interest group in favor of a single-payer healthcare system, prepared a wonderfully comprehensive list of the four main systems seen worldwide. I invite you to give it a look if you have time, though I will briefly summarize these here. The first, and simplest to explain, is the “Out of Pocket” system. Much like pre-ACA America, this system is completely privatized. Each person pays for his or her own coverage through a private, often for-profit insurer. The second and third systems – the Beveridge model and the National Health Insurance model – are both similar in the sense that the government pays for every citizen’s coverage through taxes.


Spring 2017

61

People gathered outside the West Hartford, Connecticut town hall before a health care reform town hall meeting with U.S. Representative John B. Larson on 2 September 2009. Image from Wikimedia Commons

The only difference is that while in the former the government also runs most hospitals and directly pays medical professionals, in the latter taxes are given out to heavily-regulated private providers. In both cases, government remains the single payer. Admittedly, these are the main systems we have come to encounter through coverage of the current healthcare debate within the United States. Whereas the left clamors for a single-payer system, the right winces at the slightest thought of socialized medicine. However, amid this Manichean paradigm, there lies a fourth and extremely prevalent system – the Bismarck model. Named after the late nineteenth-century German chancellor Otto von Bismarck, this system will seem familiar to most Americans. Employees feed part of their salary into a payroll fund, which is subsidized by their employers according to their base income. This fund is then used to pay heavily regulated, non-profit private insurers. This coverage includes dependents, such as stay-at-home spouses and children. Those who cannot receive coverage through employment, like the retired or the long-term unemployed, receive Medicare-type government coverage.
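A minimal sketch of how such a payroll-funded scheme might be financed is shown below; the contribution rate, employer/employee split, and income cap are hypothetical placeholders in the spirit of the Bismarck model, not actual statutory figures.

```python
# Illustrative Bismarck-style payroll contribution, split between employee and
# employer. The 14% rate, 50/50 split, and income cap are assumptions chosen
# for illustration only.

def health_contribution(gross_salary, rate=0.14, employer_share=0.5, income_cap=60000.0):
    """Return (employee share, employer share) of the annual contribution."""
    assessable = min(gross_salary, income_cap)   # contributions are levied only up to the cap
    total = assessable * rate
    return total * (1 - employer_share), total * employer_share

employee, employer = health_contribution(45000.0)
print(f"Employee pays ${employee:,.0f}, employer pays ${employer:,.0f} per year")
# 45,000 * 14% = 6,300, split 3,150 / 3,150; the fund then pays regulated non-profit insurers.
```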

In a 2000 WHO Report, nations were ranked by the quality of their health system. Among the top 10 nations, only one – Spain – did not have a Bismarck system. All this raises a single question: why is this system so successful? What the Bismarck system achieves is the best of both worlds. On one hand, it ensures that the entirety of the population has some sort of basic health coverage. On the other hand, it promotes innovation and affordability through the inherent competition of a market system. In short, it ensures universal, majority-private insurance. I will not deny that this Bismarck system seems awfully similar to the ACA. Here too, insurance is supposedly universal and majority-private, since citizens are forced to have some sort of coverage. But there are a few key differences in these two models.
• The ACA, for instance, does not ban for-profit private insurers, who remain the overwhelming majority of providers.
• The ACA does not provide for universal employer mandates, with partial existing ones having been delayed numerous times.
• The ACA does not make for widespread competition – still today most providers are limited to the state level, where options remain relatively few compared to the 700+ providers the average Frenchman can choose from.

Columbia Economics Review

These three stated differences are only part of a larger divide between the current state of American health provision and the greatness it strives to achieve. Though a simple single-payer system might seem like the easy way out of pre-ACA madness (indeed, many comparably wealthy nations with single-payer systems rank higher than the U.S. on the WHO ranking), the Bismarck model not only seems more efficient but may also be more feasible. By ensuring that healthcare remains essentially out of government hands and that it becomes universally available, it addresses the grievances of conservatives and liberals alike. Obamacare may be “a disaster,” as Trump says so eloquently, but reforming rather than repealing it seems not only more politically viable but better for the American people as a whole. If his initial decisions to retain some key parts of the ACA are anything to go by, the 45th President of the United States may well have just that in mind. While waiting for these coming changes, Americans must remain steadfast in their pursuit of affordable coverage for all, no matter how difficult it may seem.




ENVIRONMENTAL POLICY COMPETITION

Winners

First Place
Danielle Desieroth, Emma Gomez, Charles Harper, Ricardo Jaramillo
Columbia University
Second Place
Eduardo Despradel, Emil Mella
Columbia University
Third Place
TJ Ball, Julien Morgan, Cole Sikon
Cornell University

On October 16, 2015, the U.S. Department of the Interior effectively canceled all future auctioning of Arctic offshore oil leases in the Chukchi and Beaufort seas for 2016 and 2017, rejecting major oil companies’ petitions to extend their pre-existing leases in other areas of Alaska’s Arctic Ocean. This administrative decision signaled a halt to future expansion of offshore drilling in the Arctic region, posing pressing political, economic, and environmental complications for the many economic and political actors in the region. While several political and environmental groups have lauded the decision, oil companies and their business partners have been angered by the abrupt end to future drilling opportunities. The Columbia Environmental Policy Competition asked participants to play the part of director of the U.S. Department of the Interior in 2017, tasked with the mission to preserve and extend the 2015 drilling guidelines. Taking into account the competing concerns of oil companies, environmentalists, governmental officials, and economists, our participants designed contemporary offshore Arctic drilling policies that sought to minimize economic costs and environmental harm, amidst the serious political and social concerns regarding the topic. Our first place winners were Danielle Desieroth, Emma Gomez, Charles Harper, and Ricardo Jaramillo from Columbia University. Their project, titled “Rethinking US Arctic Policy: A Broad-based Approach,” implemented a compromise policy proposal that called for stricter environmental protections on the political end, along with investments in infrastructure and clean technologies for drilling companies “to make the Arctic work for everyone.” Their three-pronged approach focused on targeted environmental protections, increased infrastructure and community investments, and revamped technologies for oil spill prevention and cleanup. The judges were most impressed with their sophisticated cost-benefit analysis in examining both the counterfactual, in which Arctic drilling were to continue as projected without these new offshore drilling practices, and the new projections that result from their policy recommendations. Moreover, their pointed attention towards not only the political tension surrounding the issue but also the social dimension of revitalizing affected coastal and indigenous communities demonstrated a rich understanding of the complex social context of the region. Around the world, mitigating climate change has now become a more vital endeavor than ever. As Columbia students continue to lobby for President Lee Bollinger to divest from the top 200 publicly traded fossil fuel companies, 143 countries, including the world’s biggest polluters, recently came together to reduce global greenhouse gas emissions by signing the Paris Agreement. We applaud this year’s submissions for their contributions, scholarship, and innovation in examining these pressing issues. We would also like to thank the Earth Institute at Columbia University for their generous support in judging our competition. We look forward to reading more policy recommendations and analyses in the near future!
A slide from the winning presentation.




COLUMBIA ECONOMICS COMPETITION

Winners
First Place
Sharleen Yu, Yuyan Lin, Xiaobin Chen, Bennie Chen
Columbia University

On June 23, 2016, the British people voted on whether the UK should leave or remain in the European Union. “Leave” won by a margin of 52% to 48%, with referendum turnout around 71.8% and more than 30 million people voting. The large division in voting patterns, split by age, race, income, and geography, led to a fractured UK that now must grapple with the realities and complexities of its departure from the European Union. On the other hand, the European Union and its constituent nations must now also reconsider the purpose and mission of the union. In our inaugural Columbia Economics Competition, we asked participants to take the perspective of either the British government or the European Union, and propose a comprehensive policy plan to enact Brexit. The policy proposals were judged by their breadth in analyzing the economic and political ramifications with regard to domestic and foreign affairs, as well as any social or cultural consequences that might impact the British identity. Our winners were Sharleen Yu, Yuyan Lin, Xiaobin Chen, and Bennie Chen from Columbia University. Their project, entitled “Post Brexit Policy Recommendations from Economic, Social and Political Aspects,” demonstrated a practical and balanced approach that sought to elevate the UK’s status as an economic and political powerhouse, while maintaining its many social, cultural, and economic ties with European allies. Their project focused on preserving the strength of the British economy and global political reach, while providing policy recommendations for how to reunite the fractured British population and mend tensions with the European Union. On the economic front, they provided financial projections for the British pound, and detailed potential trade partnerships and plans for foreign investment. On the social dimension, they focused on the issue of immigration and cultural assimilation, with subsequent policy recommendations in education and social security. To conclude, they examined issues of sovereignty with regard to Scotland and Northern Ireland and proposed new political mechanisms for future European relations. Discussing economic issues from a political, social, and cultural perspective has always been one of the main missions of the Columbia Economics Review, and we seek to engage students from around the country on such topics through our competition. We congratulate this year’s submissions for their scholarship and we thank them for their confidence in our first installment of this competition. We would also like to thank Professor Jenik Radon, from Columbia University, for reviewing the submissions and judging the presentations along with the CER board. Until next year, we look forward to engaging in political and economic policy discussion through CEC and our many other CER platforms!
The winning group with competition judge Professor Jenik Radon.




CER Journal & Content Online at

columbiaeconreview.com Read, share, and discuss Keep up to date with web-exclusive content

Columbia Economics | Program for Economic Research Printed with generous support from the Columbia University Program for Economic Research

