George Wythe Review Fall 2017



Editor-in-Chief: Christian McGuire
Associate Editor: R. Shane Roberts, Jr.
Publication Editor: Keith Zimmerman
Research Editor: Marina Barnes
Assistant Editor: Christopher Baldacci
Faculty Supervisor: Dr. Michael Haynes

PATRICK HENRY COLLEGE
Purcellville, Virginia
Copyright © 2017
ISSN 2153-8085 (print)














The George Wythe Review is an undergraduate journal dedicated to the integration of faith and reason in American domestic public policy. The editors of this journal recognize that contemporary domestic public policy is navigating the uncharted waters of rapidly advancing technology, an increasingly globalized political environment, and a bureaucratic federal government. The journal is a response to this climate, providing undergraduate students at Patrick Henry College with a venue to engage these issues through quality academic papers. In the vein of the journal’s namesake, the editors are committed to fostering an environment for discussion that enhances the missions of both the American Politics and Policy Program and Patrick Henry College.

The George Wythe Review is published twice during the academic year by the American Politics and Policy Program of Patrick Henry College. Studies in the journal do not necessarily represent the views of Patrick Henry College, the editors, or the editorial board. The responsibility for opinions and the accuracy of facts in the studies rests solely with the individual authors. Direct all correspondence to the address below.

Patrick Henry College
10 Patrick Henry Circle
Purcellville, VA 20132
(540) 338-1776

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopying, recording, or otherwise—without the prior written permission of the copyright owner. Authors of the respective essays in this publication retain copyright privileges.

Copyright © 2017 • Printed in the United States of America.

CONTENTS
________________
Volume 9 • No. 1 • Fall 2017

Letter from the Editor


3 Armageddon: An Analysis of Nuclear Brinkmanship as a Diplomatic Tool
Michael Dingman

This study utilizes both foundational and contemporary research in the field of nuclear crises to evaluate nuclear brinkmanship as a tool of negotiation. Using quantitative empirical data and qualitative analysis, this study seeks to establish whether brinkmanship is a useful tool of diplomacy or an unnecessary risk.

21 Defending Our Defenders: The Search for Legislation to Decrease Military Sexual Assault
J. Michael Patton

This study examines sexual assault reporting rates in the Canadian military after legislation similar to the Military Justice Improvement Act (MJIA) was implemented. The study also compares recent rates of reporting in the Canadian military and the United States military.

39 Institutions and Environment: The Environmental Kuznets Curve, Prosperity, and Civil Institutions
Johanna Christophel

This study proposes and tests the hypothesis that good environmental policy requires a combination of economic prosperity, economic freedom, and stable, free civic institutions. The hypothesis is tested primarily through a comparative analysis of China and the United States to determine whether economic prosperity is the sole determinant of environmental protection.

The Ethical Implications of Mandated Vaccinations: A Utilitarian and Biblical Analysis
Kianna Smith

This study considers the ethical implications of government-mandated vaccinations from both utilitarian and Christian perspectives. It examines the evidence on both sides of the debate over vaccines’ safety and efficacy and considers the value of parental rights.

70 How Much is a Life Worth? An Analysis of the Problem of Valuing Human Life in Public Policy
Thomas Siu

This study first evaluates the utilitarian measures for calculating the value of a statistical life (VSL), including both the willingness-to-pay (WTP) and human capital (HK) approaches, and how these approaches play out, particularly when used to evaluate federal regulatory policy. It then examines the casuistry of jury decisions in wrongful death litigation and asks whether noneconomic damage caps interfere with the proper functioning of that casuistry.


Letter From the Editor

Dear readers,

I welcome you to this edition of the George Wythe Review with a tinge of bittersweetness. This letter marks my final project for the journal as editor-in-chief; my successor, Christopher Baldacci, will address you in these letters from this time forward. As I have said here before, I am confident that I am leaving you in good hands.

The studies that fill the pages of this Review share a broad yet clear theme: the influence of morality on policy. In my estimation, few topics could be more fitting to the times. In an era when many politicians, even self-professed Christians, seem all too ready to jettison civility, natural law, and stewardship from the public sphere, these authors thoughtfully examine what it means to craft ethical public policy. Their reflections refresh my hope for the future of American discourse.

Our first study was conducted by Mr. Michael Dingman. Mr. Dingman, a recent entrant to the University of Virginia School of Law, considers a topic rife with ethical dilemmas: nuclear brinkmanship. The timeliness of the topic is matched by excellent qualitative and quantitative analyses of nuclear crises since the dawn of the atomic age. Mr. Dingman’s historical investigation yields important conclusions about how nuclear brinkmanship should—or should not—be used.

In our second study, Mr. J. Michael Patton—who is slated to join the Review’s board next semester as Associate Editor—examines the potential effects of military justice reform. Specifically, he addresses the claim that the Military Justice Improvement Act (MJIA), if passed, could reduce instances of sexual harassment and assault in the military. To evaluate the potential effects of the MJIA, Mr. Patton looks north: Canada implemented similar changes to its military justice system in the 1990s, with surprising results. The experience of our friendly neighbors may have moral implications for American policy-makers.

Ms. Johanna Christophel, another newly-minted student of the law, authored our third study. She focuses on the relationship between the environment and economic development, using Chinese and American environmental quality as case studies. Much of the literature today postulates a so-called environmental Kuznets curve—a hypothesized relationship in which environmental quality eventually improves as economies develop. Her study adds nuance to this common claim, arguing that wealth is not the only factor in good environmental stewardship.



Ms. Kianna Smith wrote our fourth study, which investigates the morality of mandatory vaccination. Using both utilitarian and biblical analyses, she sorts through the competing rights-claims with rigor and nuance. While controversial, Ms. Smith’s conclusions are nevertheless an informative contribution to an often emotive public discussion.

Last, but certainly not least, Mr. Thomas Siu returns to the Review with characteristic academic rigor. He studies the morally complex systems which regulators and jurors use to determine the statistical value of a human life. By categorizing and evaluating various approaches using casuistry and utilitarianism, he brings greater moral clarity to an ethically ambiguous subject.

As always, several thanks are in order. First, I would like to thank the entire Review staff. Each of you has blessed me, and it has been an honor and a privilege to work alongside you. Special thanks go to the three men who first worked by my side: William Bock, Shane Roberts, and Keith Zimmerman. The memories we share will last a lifetime. I have said before that the incoming staff has my confidence; I would like to add that each of you also has my affection. Mike, Chris, Marina, and Abi—it was wonderful to be your editor-in-chief, but it has been much better to be your friend. I am confident that you will outdo me.

Our benefactors merit mention as well. Thanks to Dr. Haynes, my mentor, my advisor, and my friend. His wit and devotion are second to none, and without him there would be no George Wythe Review. Thanks to our sponsors at the Collegiate Network and the Leadership Institute for their faith in our work. Finally, thank you to our readers. It has been a pleasure to serve you, and I look forward to joining your ranks.

With my warmest regards,

Christian McGuire
Editor-in-Chief



ARMAGEDDON: AN ANALYSIS OF NUCLEAR BRINKMANSHIP AS A DIPLOMATIC TOOL

Michael Dingman

Abstract

Nuclear brinkmanship has been a central aspect of diplomacy for the last 70 years. From the standoffs between the United States and the Soviet Union during the Cold War to the modern-day tensions between India and Pakistan, brinkmanship has been at the center of the highest-profile crises of the modern age. The inherent risks associated with brinkmanship give extraordinary weight to the subject, and its effectiveness as a diplomatic tool certainly merits study. This study will utilize both foundational and contemporary research in the field of nuclear crises to evaluate nuclear brinkmanship as a tool of negotiation. Using quantitative empirical data and qualitative analysis, this study will seek to establish whether brinkmanship is a useful tool of diplomacy or an unnecessary risk. Considering the research available, this study determines that brinkmanship is generally inadvisable but that it may be an effective tool in some cases.

___________________________________



Introduction

Since the bombings of Hiroshima and Nagasaki at the end of the Second World War, nuclear weapons have primarily been used not as weapons of war but as leverage in high-level diplomacy. The Cold War period was marked by fluctuating tensions between the United States and the Union of Soviet Socialist Republics (USSR), although, proxy wars notwithstanding, those tensions never escalated into direct combat. This was in large part due to the principle of Mutually Assured Destruction (MAD), wherein both sides knew that a nuclear exchange would likely result in the end of all life on earth. While a decidedly morbid state of affairs, the threat of MAD maintained a relative peace during the period. Nuclear deterrence, rather than war, had become the primary method of confrontation between powerful nations with conflicting interests (Kroenig, 2013).

In the post-Soviet world, the use of nuclear weapons as a negotiating tool has become significantly more complicated. Nine countries are known to have nuclear weapons: the United States, Russia, the United Kingdom, France, China, India, Pakistan, Israel, and North Korea. North Korea’s possession of nuclear weapons, combined with its desperate situation, makes it a volatile nuclear power. India and Pakistan are in a perpetual standoff over disputed regions such as Kashmir. With the chronic instability of the Pakistani state and the rise of the Hindu nationalist BJP party in India, the situation seems more tenuous by the day. Israel likely owes its existence to its nuclear arsenal, which has dissuaded neighboring Muslim states from attempting to conquer the small nation. However, the increasing radicalization of the Middle East casts into doubt the extent to which nuclear weapons will deter future terrorists following radical religious ideologies. As always, Russia perpetually antagonizes the United States and its NATO allies, two of whom have nuclear arsenals of their own (France and the UK). 
Finally, there is the possibility that terrorists could gain control of nuclear weapons for their own purposes, a concern exacerbated by Iran, a nation that has sponsored ideologically-aligned terrorists in the past and has also endeavored to develop its own nuclear program. In this state of affairs, nuclear brinkmanship remains a potent, though extreme, diplomatic tool.

This study seeks to more precisely define nuclear brinkmanship before exploring its utility as a diplomatic instrument. The question this study seeks to answer is whether a greater willingness to engage in and escalate nuclear crises is an effective means by which to accomplish national objectives. To analyze this question, this study examines models and data analyses by the foremost experts in the field as well as targeted studies of specific instances of brinkmanship or crisis-prone situations. In all, there are five possible answers to the question: nuclear brinkmanship is almost always ineffective, mostly ineffective, mostly effective, almost always effective, or there is no clear conclusion. The hypothesis of this study



is that nuclear brinkmanship is mostly effective in that it usually accomplishes the objectives of the side that is more willing to initiate or escalate a brinkmanship situation.

Literature Review

One of the foremost scholars of modern quantitative brinkmanship analysis is Robert Powell, who has written a number of foundational works, including Crisis Bargaining, Escalation, and MAD (1987). In this work, Powell established key definitions, variables, and models for brinkmanship game theory. He also defined the possible outcomes that could occur depending on the actions of each side of a nuclear crisis. Powell’s Nuclear Brinkmanship with Two-Sided Incomplete Information (1988) further established the fundamental model for the formal study of nuclear crises and brinkmanship, focusing on the effects of differences in resolve, values, and information. Powell (2003) later considered how nuclear proliferation and nuclear missile defense systems affect deterrence and a country’s willingness to use brinkmanship. These variables helped explain the differences between the nuclear crises between the United States and the Soviet Union and those between Pakistan and India. Powell also considered the doctrine of retaliation, which was sometimes seen as an alternative tactic to brinkmanship, although it has historically played out in much the same way. Powell (1989) found that limited retaliation is a poor doctrine when dealing with larger nuclear powers but may be viable if conducted correctly against lesser nuclear powers; if conducted incorrectly, however, it increases the chance of a massive nuclear exchange.

Fearon (1994) established additional variables for consideration in quantitative nuclear crisis analyses. Fearon modeled the effects that “audience costs,” or public reactions to crises, have on political leaders and their decision-making, finding that more democratic nations have higher audience costs because their leaders are more accountable to the populace. This reduces the number of options that these leaders have in crisis situations, but it also lends additional credibility to these leaders when they threaten to exercise one or more of their remaining options. The threats of autocratic leaders, on the other hand, are diluted, but such leaders retain a plethora of options in a crisis due to their reduced reliance on domestic approval.

In Nuclear Superiority and the Balance of Resolve: Explaining Nuclear Crisis Outcomes (2013), Matthew Kroenig built upon Powell’s work and provided a comprehensive empirical examination of nuclear crises and their outcomes. Specifically, Kroenig examined 20 nuclear crises, from origin to resolution, between 1950 and 2001. Most previous scholarship on nuclear crises made the erroneous assumption that nuclear superiority is ultimately irrelevant because both parties



were assumed to have second-strike capabilities. Kroenig corrected this oversight. Most significantly, Kroenig studied the outcomes of the crises themselves quantitatively. Using Powell’s abstract models, consulting other researchers’ studies of specific incidents, and examining a broad data set of nuclear crises, Kroenig was able to establish trends in how nuclear crises begin, unfold, and resolve. Additionally, Rauchhaus (2009) conducted a preliminary study on nuclear asymmetry. Although the principal purpose of Rauchhaus’ study was to evaluate whether or not nuclear weapons are a cause of peace, Rauchhaus also found that nuclear asymmetry correlates with a willingness to engage in crises and open conventional conflicts, while nuclear symmetry reduces the likelihood of engaging in conventional conflict and reduces the overall intensity of crises.

Several authors have examined specific case studies in nuclear brinkmanship. Kim (1995) analyzed the North Korean crisis of 1994 and showed how possessing nuclear weapons significantly impacts a nation’s ability to negotiate. Kim concluded that even a nation which is vastly inferior in both conventional and nuclear capability can make up for this deficit with resolve. Kapur (2008) examined the effect of India’s and Pakistan’s nuclear arsenals on their interstate conflict and found that nuclear weapons have had a destabilizing effect on the region. He found that Pakistan has been willing to act recklessly because its nuclear arsenal can neutralize India’s conventional advantages. However, this dynamic intensifies escalation, as Pakistan’s conventional inferiority leaves nuclear escalation as its primary recourse. Kapur concluded that a nation committed to challenging a status quo it finds objectionable will be much more willing to do so openly if it has a nuclear arsenal backing its interests.

Trachtenberg (1985) reached a different conclusion from Powell and Kroenig on the Cuban Missile Crisis. Specifically, Trachtenberg concluded that the Cuban Missile Crisis was not a contest of resolve, since there was no clear winner and neither side desired to escalate the crisis to demonstrate resolve.

Data and Methods

Before analyzing nuclear brinkmanship as a diplomatic tool, it is important to understand the concept itself. Brinkmanship is often likened to a game of “chicken” in which two nuclear powers continuously escalate a threat until the one with greater resolve wins. While this analogy makes the concept easy to explain, it is excessively simplistic, and a more thorough understanding is necessary. Brinkmanship occurs when one nuclear power considers the status quo to be so unacceptable that it must be challenged. Only if another nuclear power defends this status quo does a crisis arise and brinkmanship ensue. A crisis occurs whether or not either adversary threatens a nuclear strike, since the existence of nuclear



weapons and the possibility of their use alone is enough to have a decisive effect on negotiations (Kroenig, 2013).

With this understanding, it is also necessary to define “crisis,” given its central importance to this study. Matthew Kroenig used the International Crisis Behavior Project’s definition of a crisis as an “interstate dispute that threatens at least one state’s values, has a heightened probability of military escalation, and has a finite time frame for resolution” (2013, p. 152). While this definition is sufficient for the purposes of this analysis, to avoid confusion regarding terminology, this study will replace “values” with “interests,” since “values” could mean anything from abstract national principles of freedom, order, or honor to something the state values, such as a certain piece of territory or another asset.

Powell and Kroenig have explained the dynamics of brinkmanship and the variables that affect how it plays out. Brinkmanship is best examined as a sequential crisis, with each sequence defined as a crisis dyad. A dyad occurs when one of two nations engaged in a crisis directs a hostile action at the other. With each successive dyad, an entirely new set of variables may arise, and thus international crises and their central variables can only be reliably measured at the dyadic level (Kroenig, 2013; Powell, 1987, 1988). At each dyad a different status quo is being negotiated between the two powers, there are varying levels of misperception, and there are varying levels of resolve on each side depending on the desirability of the status quo being considered.

There are four ways a crisis can resolve for a nation: win, status quo, defeat, and disaster. Winning broadly means gaining a new, more favorable status quo; status quo means the conflict resolves without change (a de facto defeat for the original challenger); defeat means the new status quo is less desirable; and disaster is an accidental nuclear exchange (Kroenig, 2013). The likelihood of disaster increases at varying rates with each passing dyad, depending on the actions of the two parties in a conflict.
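For readers who think algorithmically, the sequential-dyad dynamic described above can be expressed as a toy simulation. The sketch below is illustrative only and is not drawn from Powell’s or Kroenig’s formal models; the resolve thresholds, the per-dyad risk increment, and the decision rule are hypothetical simplifications.

```python
import random

# Toy model (illustrative only, not Powell's or Kroenig's formal model).
# Outcome labels follow the crisis resolutions described above:
# "win", "status quo", and "disaster". A "defeat" (accepting a new,
# less desirable status quo) is not represented in this simple sketch.

def simulate_crisis(challenger_resolve, defender_resolve,
                    disaster_risk_step=0.02, max_dyads=20, rng=None):
    """Run one crisis as a sequence of dyads.

    At each dyad, accumulated tension raises the probability of an
    accidental nuclear exchange, so even a static standoff becomes
    more dangerous over time. Each side stands firm only while its
    (hypothetical) resolve exceeds the accumulated risk.
    """
    rng = rng or random.Random()
    risk = 0.0
    for _ in range(max_dyads):
        risk += disaster_risk_step        # tension rises each dyad
        if rng.random() < risk:           # accidental exchange
            return "disaster"
        challenger_firm = challenger_resolve > risk
        defender_firm = defender_resolve > risk
        if challenger_firm and not defender_firm:
            return "win"                  # new, more favorable status quo
        if not challenger_firm:
            return "status quo"           # challenger backs down
        # both stand firm: the crisis continues to the next dyad
    return "status quo"
```

The point of the sketch is only the structural claim made in the text: outcomes are resolved dyad by dyad, and the chance of disaster accumulates with each passing dyad even when neither side explicitly escalates.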

Research and Analysis

Brinkmanship as Differences in Resolve

The difference between the two sides’ levels of resolve, together with each side’s misperception of the adversary’s resolve, is the driving factor in any escalation of brinkmanship; by its very nature, brinkmanship will progressively threaten more of the status quo (Powell, 1987, 1988). No rational actor can credibly threaten nuclear annihilation, since doing so would compromise that actor’s own existence. Thus, the risk that nations take in the context of brinkmanship is that inadvertent escalation or accidents, not intentional action, could lead to nuclear war (Kroenig, 2013). This risk is compounded by the incomplete information that both sides possess about the other’s capabilities, intentions, and resolve (Powell, 1987). Thus, a crisis is resolved in one of three ways: by



one of the two sides estimating that the risk of disaster is too great and capitulating while the other stands firm; by both sides estimating that the risk is too great, resulting in compromise; or by both sides standing firm to the point of accidental nuclear exchange and disaster (Powell, 1987).

Consider, for example, the 1994 crisis in which North Korea, supported by China, announced its intention to withdraw from the Treaty on the Non-Proliferation of Nuclear Weapons (Kim, 1995). North Korea used its newfound nuclear status to negotiate in a manner greatly outsized relative to its adversary, the United States. By the end of the crisis, North Korea had secured a greatly unequal bargain: North Korea would halt its more blatant nuclear programs, remain a signatory of the Non-Proliferation Treaty, submit to UN inspections, and meet other light requirements. In exchange, the United States would finance and supply two light-water reactors, deliver 500,000 tons of oil annually to the North Koreans during a set period, reduce trade barriers, establish diplomatic relations, and provide a formal assurance that the United States would not use nuclear weapons against North Korea without prior nuclear provocation. Essentially, North Korea exchanged “Washington’s maximal quids for Pyongyang’s minimal quo” (Kim, 1995, p. 20).

This conclusion was only achievable through a disparity in resolve. North Korea was conventionally much weaker than the United States, and its nuclear program was still in its infancy. Indeed, some nations disputed that North Korea even had nuclear weapons, as it did not conduct a widely-known test until 2006. However, it is extremely likely that North Korea had at least rudimentary nuclear weapons as early as 1992, and possibly earlier (Kim, 1995). In this crisis, North Korea accurately estimated the United States’ resolve in the matter and perceived that the United States was unwilling to risk the nuclear annihilation that might have followed a strike on North Korea and a total nuclear response by the Chinese. North Korea, on the other hand, had very little to lose and thus had great resolve: any forceful response from the United States would have at the very least destabilized the regime’s power, and the regime was willing to go to extreme lengths to protect that power (Kim, 1995). Because of its greater resolve, North Korea was willing to enter into and escalate the crisis to a successful conclusion. As the conflict escalated, so too did the risk of accidental nuclear exchange, and the lesser resolve of the United States resulted in lopsided concessions to the North Koreans.

Some scholars, however, disagree with the fundamental characterization of brinkmanship as a competition of resolve. Trachtenberg wrote that, “in practice, the more subtle official theories about nuclear war-fighting did not have much of an effect on American policy” during the Cuban Missile Crisis (1985, pp. 162-163). However, Trachtenberg’s objection was based on his assertion that “the balance [of resolve] was unequal, but not so unequal that it makes sense to view the crisis



as a simple ‘contest’ with a clear victor” (1985, p. 162). He backed his claim with historical documents showing that “no one wanted to keep upping the ante, to keep outbidding the Soviets in ‘resolve,’ as the way of triumphing in the confrontation” (Trachtenberg, 1985, p. 162). While the historical evidence that Trachtenberg examined certainly supports this claim, his claim and his conclusion were inadequately linked. A contest of resolve, as established by Powell, Kroenig, and others, does not require explicit escalation or “outbidding” of any sort. Powell in particular pointed out that, while brinkmanship is indeed sequential, simply remaining at a certain level of escalation from one stage of a crisis to the next increases tension and the possibility of accidental nuclear exchange (Powell, 1987, 1988). The resolve needed before backing down does not stay constant in the absence of explicit escalation; rather, it increases steadily as the crisis continues. Escalation would, of course, raise the resolve needed to continue by more than this gradual increase, but it is not strictly necessary. Furthermore, both Powell and Kroenig established that a contest of resolve need not produce a “clear victor,” as Trachtenberg asserted. An entire spectrum of outcomes, ranging from total capitulation or compromise to global annihilation, may result from a contest of resolve (Kroenig, 2013; Powell, 1987, 1988).

Powell’s model, the standard for brinkmanship theory, yielded several conclusions. First, crises involving a significant conflict of state interests are less stable and more prone to escalation, while crises involving more peripheral issues are more stable. Second, there is no guarantee that the state with the greater resolve will win the crisis, or that one side is less likely to escalate because its adversary’s resolve is greater. In his model, Powell showed that less resolute nations are less likely to challenge the status quo, but if they do challenge it, they are more likely to meet resistance with escalation (Powell, 1987, 1988). While no immediate motives can be drawn from Powell’s model, it is plausible that an irresolute nation finding itself in an unpleasant bout of brinkmanship over a peripheral issue may be more prone to bluffing its way out of the situation in order to resolve it quickly. Third, describing brinkmanship solely as a contest of true resolve is misleading and may “obscure as much as it clarifies” (Powell, 1988, p. 171). States with lower resolve may win a crisis due to misperception: what matters is not how resolute a nation is but how resolute it appears to its opponent. Fourth, misperception is fundamental to any study of brinkmanship; if misperception could be eliminated, all parties would have perfect knowledge of each other’s resolve and capabilities, and as a result no crises would occur. Fifth, while a nation with a high stake in the status quo might seem less likely to challenge it than one with a lower stake, that is not necessarily the case. A challenger may use its greater stake in the status quo to give a defender the impression that it has greater resolve; if its bluff is called, however, such a challenger is more likely to capitulate (Powell, 1988). Finally, challengers are more



likely to be irresolute than resolute and are generally unlikely to challenge the status quo if they are resolute. Thus, crises that do occur are more likely to be severe than not (Powell, 1988).

Brinkmanship and Limited Retaliation

The doctrine of limited retaliation is another factor that must be taken into account when considering brinkmanship. The doctrine holds that a threat of limited nuclear action is actually credible, since carrying it out does not necessitate total annihilation. If such a threat is ignored, a nation could theoretically launch a limited strike with some hope that the response would not be a massive nuclear strike (Powell, 1989). Limited strike threats must satisfy two criteria. First, they must impose a high enough cost to encourage the adversary to back down. Second, they must impose a low enough cost that, if the strike were actually carried out, the adversary would retain something it is unwilling to lose and thus would not risk total annihilation (Powell, 1989).

Limited retaliation as a doctrine originated as an attempt to solve a credibility problem in the doctrine of massive retaliation and second-strike capabilities (Powell, 1989). Massive retaliation required that nuclear powers protect their entire spectrum of interests, from the most peripheral to the most vital, by threatening total nuclear attack in response to any challenge to those interests. This doctrine was deeply flawed, however, because it was not credible that a nation would risk nuclear annihilation over peripheral interests (Powell, 1989). As an alternative, states began to practice brinkmanship and the principle of limited retaliation.

Nuclear crisis scholars rightly identify key flaws in the strategy of limited retaliation. If each state in a crisis were to destroy one city or other significant target at a time, the end result might very well be the same as a massive nuclear launch (Powell, 1989). Indeed, limited retaliation may make this result more palatable, as it entails a steady escalation of destruction rather than an “all or nothing” scenario. Limited retaliation thus raises the risk of accidental massive nuclear exchange, and Powell modeled it as a game of sequential bargaining in the same manner as his previous brinkmanship models (1989). Powell’s analysis found that the use of limited retaliation as a form of brinkmanship is only effective against a defender with limited options for retaliation. States become less likely to escalate the longer and more extreme a crisis becomes.

Powell’s most significant new finding, however, concerned how counterforce strategies play into brinkmanship. In a counterforce strategy, one nation targets the offensive capabilities of another nation with nuclear strikes in order to reduce that nation’s ability to launch retaliatory strikes. While such strikes are incapable of stopping major nuclear powers from inflicting retaliatory damage, they may be successful against less capable nuclear powers. Powell’s model



showed that a large reduction in a nation’s retaliatory capabilities due to counterforce strikes makes the probability of nuclear exchange smaller. However, smaller and less effective counterforce strikes increase the possibility of nuclear exchange (Powell, 1989). A nation will likely use its retaliatory capabilities rather than wait for them to be destroyed if they are being targeted, and less effective counterforce strikes will leave enough capabilities in place to make this retaliation significant. Additional Factors that Influence a Nation’s Resolve Powell also examined the effects that a missile defense system has on brinkmanship, specifically in the context of the United States. The findings of this study are significant in that they not only track the probability of a nuclear exchange but also track the likelihood that the United States would be struck by a nuclear weapon. Powell (2003) found that nuclear missile defense systems proportionally increase the chances of being struck by a nuclear missile by increasing the chances of a massive nuclear exchange, at least until the effectiveness of the missile defense system nears one hundred percent. Powell's study concluded that missile defenses create higher resolve within the United States due to the perception of less risk. This increased resolve means that the United States is more willing to initiate and escalate crises the more effective its defense system is, which in turn increases the likelihood that the United States will experience brinkmanship that spirals out of control (Powell, 2003). While the United States would be more likely to achieve its ends due to its increased willingness to escalate crises, the tradeoff is a greatly increased likelihood of a massive nuclear exchange. 
Powell’s research thus supports the conclusion that an increased willingness to initiate and escalate brinkmanship correlates with a greater chance of success, with the caveat that this escalation is dangerous when crises are entered into with higher frequency. Additional work by James Fearon established a relationship between how democratic a government is and its level of resolve. Fearon operated under the assumption that crises are “public events carried out in front of domestic political audiences,” similar to the Cuban Missile Crisis (1994, p. 577). This is an important caveat since the variable relationships Fearon established are linked to this assumption, but models may incorporate Fearon’s variables while adequately controlling for non-public confrontations as well (Fearon, 1994; Kroenig, 2013). Fearon justified this assumption on the grounds that leaders of nations in a nuclear crisis have a clear, private picture of their own resolve. However, they also have a strong incentive to misrepresent their level of resolve in order to influence the resolution of the crisis in their favor. According to Fearon, the only way to accomplish this misdirection is by “going public” with threats, mobilization, and other actions which “focus the attention of relevant political audiences and create costs that leaders would suffer if they backed down” (1994, p. 586). Fearon found that domestic audiences historically punish leaders more for escalating and then capitulating than for not escalating in the first place. The variable representing this domestic backlash, or “audience cost,” directly influences the resolve of a nation’s leaders in a nuclear crisis. Democratic states have a more powerful domestic audience than autocratic states, and thus the leaders of democratic states have more to lose from audience costs than autocratic leaders. Therefore, democratic states require a lower level of escalation to broadcast their intentions and by extension increase their resolve (Fearon, 1994). While democracy limits a leader’s options, it lends additional credibility to that leader’s statements and actions, potentially exposing a resolve imbalance that decides the crisis. Autocratic leaders, on the other hand, need not fear the reaction of the public as much and have many more options available to them. This has the effect of diluting their credibility, as they often have many alternatives to the actions they threaten (Fearon, 1994).

Quantitative Analysis of Nuclear Asymmetry

As Kroenig pointed out in his 2013 study, most research on the subject of brinkmanship is “dominated by formal theoretical models and qualitative studies of a few high-profile cases,” and thus his work provides novel and crucial quantitative research for this study’s analysis of the effectiveness of brinkmanship in achieving desired outcomes (p. 143). In his research, Kroenig modified Powell’s brinkmanship model and Fearon’s political variables to include strategic nuclear dynamics. Prior research on nuclear asymmetry had treated it as a peripheral issue; the best example is probably Robert Rauchhaus’ study of the nuclear peace hypothesis.
While Rauchhaus was primarily concerned with questions beyond the scope of this study’s hypothesis, his work produced some helpful conclusions which were explored in greater detail by Kroenig. Rauchhaus established that symmetrical nuclear powers are less likely to enter into open conflict with one another but are more likely to enter nuclear crises at lower levels. This is likely due to their reluctance to use conventional forces, whereas non-nuclear or asymmetric nuclear powers are more willing to use conventional forces to deal with low-level crises before escalating to the nuclear level. Rauchhaus confirmed this when he established that nuclear asymmetry correlates with a higher chance of “crises, uses of force, fatalities, and war” (Rauchhaus, 2009, p. 260). Nuclear weapons do not affect how frequently conflicts occur between nations, but they do affect the nature of those conflicts. Symmetric nuclear powers more readily resort to brinkmanship rather than conventional violence but generally keep the intensity of crises as low as possible due to the principle of mutually assured destruction. Asymmetric nuclear powers are more willing to ignore brinkmanship and engage in conventional conflict but are also more willing to engage in brinkmanship crises in general (Rauchhaus, 2009).

Formal models of brinkmanship prior to Kroenig’s study assumed that all parties have secure second-strike capabilities and thus that the strategic balance is irrelevant, but many nuclear states do not have such capabilities (Kroenig, 2013). To account for this balance and its significance in brinkmanship, Kroenig drew on two facts established in previous scholarship. First, not all nuclear wars are equally devastating. Kroenig (2013) cited the testimony of former Secretary of the Air Force Harold Brown, who stated that

even 25 percent casualties [on the Soviet side] might not be enough for deterrence if US casualties were disproportionately higher – if the Soviets thought they would be able to recover in some period of time while the US would take three or four times as long, or would never recover, then the Soviets might not be deterred. (p. 149)

If the stakes are high enough, there is an amount of nuclear devastation that one or both sides of a crisis may deem acceptable. Since states do not threaten deliberate nuclear war in instances of brinkmanship but rather the risk of accidental nuclear war, a nation which deems nuclear war acceptable will certainly see less risk in accidental nuclear war. It would be preferable to accomplish the state’s objectives without nuclear catastrophe, but the catastrophe itself is not an intolerable conclusion to the crisis. The second fact is that nuclear superiority proportionally reduces the costs of a nuclear war (Kroenig, 2013). Using nuclear weapons in counterforce strikes to destroy the nuclear weapons or delivery systems of an opponent limits the damage the opponent can do in a nuclear exchange. States with nuclear superiority have more firepower at their disposal and thus a greater ability to target the nuclear capabilities of their opponents.
States without nuclear superiority may be forced to choose between targeting the nuclear capabilities of an opponent and other targets such as population centers, in essence choosing between causing damage to one’s opponent and reducing the damage one’s opponent can inflict in return. There is strong evidence that the question of nuclear superiority influenced significant historical brinkmanship events. During the Cuban Missile Crisis, US officials including the Chairman of the Joint Chiefs and the Secretary of State cited the United States’ nuclear superiority as a reason not to back down, while the Soviet Deputy Minister of Foreign Affairs implied that the United States would not be able to repeat the incident once the Soviets achieved parity (Kroenig, 2013). There is similar evidence pertaining to the 1999 Kargil Crisis between India and Pakistan, where India’s nuclear superiority likely had a significant effect on the outcome. As former Indian Defense Minister George Fernandes noted, in the event of a nuclear exchange, India “may have lost part of [its] population,” but “Pakistan may have been completely wiped out” (Kroenig, 2013, p. 151). This effect does not deter inferior states from challenging the status quo; indeed, conventionally inferior states seek nuclear weapons precisely as a means to engage in crises on the nuclear level. As seen in the earlier example of North Korea in 1994 and in over a decade of Indo-Pakistani brinkmanship, inferior powers have asserted themselves against superior ones. In North Korea’s case the strategy succeeded, but Pakistan has not met with such success (Kapur, 2008; Kim, 1995). Kapur found that small nations opposing nations with nuclear superiority seek to become dangerous and to destabilize the regions in which they reside. Despite how unlikely it is that Pakistan will defeat India in an instance of brinkmanship, Pakistan is willing and eager to enter brinkmanship situations, not because it seeks to win a crisis in the traditional sense, but because it seeks to destabilize the region, extend its influence, and undermine the status quo (Kapur, 2008). A nuclear Iran would likely take a similar stance, using its nuclear arsenal to shield itself from conventional reprisal while seeking to destabilize the region. It is likely that inferior actors who are aware of their inferiority enter nuclear crises not with the intention of winning them but with the intention of creating general destabilization (Kapur, 2008). While Kroenig did not take the possibility of an indirect and non-explicit set of objectives into account and no quantitative study of this possibility has been conducted, it may be an important factor to consider in post-Soviet brinkmanship situations.
Kroenig’s study of nuclear crises created an original data set from the International Crisis Behavior Project’s list of crises. This data set yielded 20 nuclear crises from 1945 to 2001: Korean War (1950), Suez Crisis (1956), Berlin Deadline (1958), Berlin Wall Crisis (1961), Cuban Missile Crisis (1962), Congo Crisis (1964), Six-Day War (1967), Sino-Soviet Border War (1969), War of Attrition (1970), Cienfuegos Submarine Base Crisis (1970), Yom Kippur War (1973), War in Angola (1975), Afghanistan Invasion (1979), Able Archer (1983), Nicaragua/MIG21S (1984), Kashmir (1990), Taiwan Strait Crisis (1995), India/Pakistan Nuclear Tests (1998), Kargil Crisis (1999), and India Parliament Attack (2001) (Kroenig, 2013). Kroenig divided victory and defeat in binary fashion, with victory being a state achieving its basic goals, like the United States in the Cuban Missile Crisis, and defeat being a failure to achieve those goals due to compromise, stalemate, or clear defeat. Within the twenty crises, Kroenig identified 52 crisis dyads. Cross-tabulating nuclear superiority against crisis outcomes yields the data shown in Table 1.

Table 1: Nuclear Crisis Dyad Outcomes. Superiority (Yes or No) vs. Percentage Outcome (Victory or Defeat).

Superiority   Victory     Defeat
Yes           14 (54%)    12 (46%)
No             4 (15%)    22 (85%)

Of the 52 crisis dyads, 18 (35%) resulted in victory while 34 (65%) resulted in defeat. In each dyad there was one side with nuclear superiority and one side with nuclear inferiority, yielding 26 dyads each for the superior and inferior sides of the various crises (Kroenig, 2013). When a nation had nuclear superiority, it won 14 (54%) and lost 12 (46%) of its 26 dyads. When a nation had nuclear inferiority, it won 4 (15%) and lost 22 (85%) of its 26 dyads. When examining crises as a whole, the superior power holds the upper hand in victories, while the inferior power’s share of victorious dyads is even smaller than the number of stalemate dyads (Table 2). Finally, the overall totals of victory and defeat rates for superior and inferior powers show a clear advantage for the superior power (Table 3).

Table 2: Outcome by Crisis. Victor (Superior, Inferior, or None) vs. Percentage of Victories.

Table 3: Dyadic Victory and Defeat Rates Overall. Victory or Defeat Rate vs. Percentage of Dyads.

Out of the overall 52 dyads, nations with nuclear superiority won 27 percent and lost 23 percent of the dyads, while nations with nuclear inferiority won 8 percent and lost 42 percent. This means that strategically superior nations achieve victory at a rate 19 percentage points greater than inferior nations across all dyads. Two conclusions can be drawn from this data. First, nuclear brinkmanship has a low overall success rate of 35 percent. Inferior nuclear powers have a victory rate of just 15 percent, while superior powers achieve a positive dyadic victory rate of 54 percent. Second, nuclear superiority increases the chances of victory by 39 percentage points. In all, this means that strategically inferior powers should rarely engage in brinkmanship due to their abysmal success rate of 15 percent, and that achieving nuclear superiority greatly increases the chance of victory. This is clearly seen in Table 3, where the victory rate of strategically superior powers exceeds that of strategically inferior powers by 19 percentage points. Regression analysis conducted by Kroenig shows nuclear superiority to be a highly significant predictor of success (Table 4).

Table 4: Chance of Success for Strategically Inferior and Superior Nations. Superior and Inferior Nations vs. Chance of Success per Dyad.

A nation with nuclear inferiority has a victory probability of 6 percent, while a superior nation has a probability of 64 percent. All other variables held constant, nuclear superiority provides for a 57 percent increase in the chances of victory and has a substantive effect on nuclear crisis outcomes. Kroenig specifically examined the US-USSR crises of the Cold War and found that, when the United States had nuclear superiority over the Soviet Union, it achieved victory in 6 of 10 crises, and its success rate increased as its superiority increased. When the United States had at least 10,000 more nuclear warheads than the Soviet Union, it won at least 5 of 6 crises, and possibly all 6 depending on which historians one believes (Kroenig, 2013). After the Soviet Union achieved nuclear superiority in the late 1970s, the United States lost all 3 of its crises with the USSR. A final point of interest arising from Kroenig’s data set is that states are more likely to achieve victory in crises that occur closer to their own territory, and that the “gravity of the crisis” and the stakes involved do not appear to influence the outcome (Kroenig, 2013). This could be because the stakes or gravity of a crisis weigh on a nation’s decision to get involved but have no further impact once the crisis is underway. The final step in adequately testing the hypothesis is determining whether the challenger in each crisis is more often the victor. Using Kroenig’s list of crises, the data set is shown in Table 5.

Table 5: Nuclear Crises, Challengers, and Victors.

Making the distinction between aggressor and defender is exceedingly difficult when research on brinkmanship defines those terms so broadly. For example, in the India/Pakistan nuclear tests, did India challenge the status quo because it tested its nuclear weapons first? Or did Pakistan challenge the status quo, since India was already known to be a nuclear power when Pakistan was not, and Pakistan’s response thus changed that status quo? Similarly, in the Cuban Missile Crisis, did the United States first challenge the status quo by basing nuclear weapons in Turkey and undertaking the failed Bay of Pigs Invasion? Or was this state of affairs already the status quo, which was then challenged by the Soviets basing nuclear missiles in Cuba? Where it is possible to discern them, aggressors and defenders of the status quo are noted, but more robust research is needed to explore the subject fully. With the limited data available, however, it is possible to draw some conclusions. Of the 20 nuclear crises, 14 have a discernible aggressor. Of these 14 crises, only 2, the Afghanistan Invasion and the Berlin Wall Crisis, resulted in an aggressor victory, a 14 percent success rate for the aggressor.

Conclusion

The available scholarship demonstrates that the hypothesis is incorrect, at least for the time being. Given the low success rate of 35 percent across all crisis dyads and the abysmal 14 percent success rate of aggressors against the status quo, it seems incredibly dangerous for a nation to engage in brinkmanship as anything other than a last resort. Brinkmanship is mostly ineffective. With that said, there are some important factors to consider. First, in many of the crises examined, the defender was simply the nuclear power that intervened first, so the status quo was determined by that nation’s intervention. A willingness to intervene in crisis situations, therefore, is not inherently risky and may in fact increase the chances of success if a nation can set the status quo. However, without more research and likely a broader set of data, this cannot be determined from the research available. Second, changing technology and nuclear superiority will increase how effective brinkmanship can be for certain nations. If a nation has an effective missile defense system, its resolve and therefore its chances of success increase, as Powell has shown. If a nation has nuclear superiority, its chances of success are likewise much greater, as Kroenig has shown. It can therefore be increasingly effective for a nation to engage in brinkmanship if it has an accurate understanding of its own capabilities and those of its adversary. Nations that lack strategic superiority should avoid brinkmanship at all costs, as the data show that, out of all crisis dyads, the inferior nation has won only 8 percent and has a 6 percent overall chance of success. At best, a strategically inferior nation will be humiliated and return to the status quo. At worst, its recklessness will result in an accidental nuclear exchange. With this in mind, the best answer to the research question appears to be that brinkmanship is somewhat ineffective as a tool of diplomacy, with the caveats that this assessment may change with technological advancement and that particular nations may find brinkmanship effective provided they are prudent in its application.

It is also critical to remember that this field of study is not nearly as well-developed as other policy areas. There are still a number of factors for scholars to consider in future research. Kroenig’s 2013 study is the pinnacle of available research and the first to quantitatively consider not only nuclear superiority but also nuclear crisis outcomes. The fact that crucial variables are still being discovered and implemented, and that the data set is so small, means that much more research must be done in this field to truly establish a robust theory of nuclear crises and nuclear brinkmanship. More study must also be conducted on modern brinkmanship, particularly regarding non-rational actors like religious terrorists. How do the principles of resolve apply to individuals or organizations such as the Islamic State or al-Qaeda, for which martyrdom is the ultimate goal? If escalation carries no risk but is rather something to be desired, how would this affect a nuclear crisis in which one actor theoretically possesses unlimited resolve? While nuclear weapons thankfully remain in the hands of rational actors for now, in the future this may not be the case. Future research should also consider the possibility that states may use nuclear brinkmanship to achieve destabilization. For example, Pakistan has been extremely eager to enter into brinkmanship situations despite its apparent nuclear inferiority.
What generally qualifies as success or failure in brinkmanship may not apply in such cases, where the side effect of destabilization is more important than the actual capitulation of an adversary. The issue of nuclear weapons and statecraft is more relevant today than it has been since the Cold War. With the escalation and sabre-rattling of North Korea and the perpetual questions surrounding Iran’s nuclear program, policymakers must be informed about the utility and effects of nuclear brinkmanship on diplomacy. The study of nuclear crises is in dire need of more work such as Kroenig’s so that exhaustive data sets can be created, precise and authoritative definitions established, and relevant variables discovered and given due consideration before being applied to practical policy questions.

Reference List

Fearon, J. (1994). Domestic political audiences and the escalation of international disputes. American Political Science Review, 88(3), 577-592.

Kapur, S. (2008). Ten years of instability in a nuclear South Asia. International Security, 33(2), 71-94.

Kim, S. (1995). North Korea in 1994: Brinkmanship, breakdown, and breakthrough. Asian Survey, 35(1), 13-27. Retrieved from http://www.jstor.org/stable/2645127

Kroenig, M. (2013). Nuclear superiority and the balance of resolve: Explaining nuclear crisis outcomes. International Organization, 67(1), 141-171.

Powell, R. (1987). Crisis bargaining, escalation, and MAD. American Political Science Review, 81(3), 717-736.

Powell, R. (1988). Nuclear brinkmanship with two-sided incomplete information. American Political Science Review, 82(1), 155-178.

Powell, R. (1989). Nuclear deterrence and the strategy of limited retaliation. American Political Science Review, 83(2), 503-519.

Powell, R. (2003). Nuclear deterrence theory, nuclear proliferation, and national missile defense. International Security, 27(4), 86-118.

Rauchhaus, R. (2009). Evaluating the nuclear peace hypothesis: A quantitative approach. Journal of Conflict Resolution, 53(2), 258-277.

Trachtenberg, M. (1985). The influence of nuclear weapons in the Cuban Missile Crisis. International Security, 10(1), 137-163.

DEFENDING OUR DEFENDERS: THE SEARCH FOR LEGISLATION TO DECREASE MILITARY SEXUAL ASSAULT

J. Michael Patton

Abstract

Sexual assault is a pervasive problem in the United States military. It is estimated that, in the vast majority of cases, perpetrators of sexual assault are not court-martialed for their offense. In the current military justice system, the commanding officer of a military unit is given discretion about whether a case should go to trial. In order to fix this allegedly biased system, Senator Kirsten Gillibrand (D-NY) introduced the Military Justice Improvement Act (MJIA) in 2013. This act would give prosecutorial discretion to an independent Judge Advocate General’s Corps (JAG) prosecutor. Much debate exists as to whether this action would more effectively combat sexual assault. This study examines sexual assault reporting rates in the Canadian military after legislation similar to the MJIA was implemented. The study also compares recent rates of reporting in the Canadian military and the United States military. Based on past Canadian precedent, the study ultimately finds no proof that the implementation of the MJIA would result in increased reports of sexual assault.

___________________________________

Introduction

My career was looking very promising – a “Mustang Officer” who had served in combat, qualified in two combat arms branches, and now served as a Military Intelligence Officer and Counterintelligence Special Agent and Antiterrorism Officer. I had always dreamed of having a long military career, and after spending seven years as an Enlisted Soldier, I loved taking care of troops and their families… Well, those dreams came to an end in November 2005 when a Senior Officer (Lieutenant Colonel) sexually harassed and then sexually assaulted me. At first, I thought this couldn’t be happening. I thought, I’m a professional and have served my country in combat. At that time, I was 30 years old and had been in the Army for almost 14 years so this shouldn’t be happening to me. (“Mike’s Story,” n.d., para. 1-2)

This is Mike’s story. His last name has been removed to protect his identity. Mike went on to experience months of sexual and physical assault after attempting to report what had happened to him. Mike’s life was saved when the Inspector General of his base, who had attempted to cover up what was occurring, sent him to a mental screening examination in order to dismiss him as insane; a medical doctor saw the evidence of assault and reported outside of Mike’s chain of command (“Mike’s Story,” n.d.). Unfortunately, Mike’s story is not isolated. At the beginning of 2014, the Department of Defense requested that the RAND National Defense Research Institute conduct a study on sexual assault in the military. The survey produced shocking results. In that year, approximately 116,600 active-component service members were sexually harassed, representing 22 percent of all women and 7 percent of all men in the military (Morral et al., 2015). Currently, an estimated 75% of men and women in uniform who have been sexually assaulted do not report the crimes due to a lack of confidence in the military justice system (Department of Defense, 2016).
Oftentimes, the 25% of victims who do attempt to report an assault experience retaliation from superiors, and perpetrators are rarely held responsible (“Embattled,” 2015). Many attribute these shocking statistics to a biased reporting and prosecution protocol in the military justice system. Currently, the commanding officer decides whether a sexual assault case will be prosecuted at trial (“Comprehensive Resource Center,” 2015). Since an estimated 60% of military sexual harassments are perpetrated by individuals in the victim’s chain of command, victims rarely report the sexual assault (Morral et al., 2015). Experts who oppose the current system claim that sexual assaults would be reported more frequently if commanding officers were not given prosecutorial discretion (Christensen, 2015). In the past three years, Senator Kirsten Gillibrand (D-NY) has presented a bill before Congress known as the Military Justice Improvement Act (MJIA). The MJIA seeks to place prosecutorial discretion in the hands of an independent Judge Advocate General’s Corps (JAG) prosecutor instead of the commanding officer of a unit (“Comprehensive Resource Center,” 2015). Other experts, however, have concluded that reforms such as the MJIA would decrease the military’s effectiveness in combatting sexual assault (Stimson, 2014). This presents a clear research question: Does placing prosecutorial discretion in the hands of an independent military investigator increase the rate at which sexual assaults are reported? Several western nations have instituted reforms similar to the MJIA in order to address sexual assault in their militaries. Canada is one such nation (Ahmad, Buchanan, Palmer, Levush, & Feikert-Ahalt, 2013). Using Canada as an example, this study will investigate whether reforms similar to the MJIA would increase sexual assault reporting in the military. This study hypothesizes that giving prosecutorial discretion to independent attorneys will more effectively combat sexual assault by increasing reporting.

Literature Review

The Military Justice Improvement Act (MJIA) has been hotly contested. It has failed to overcome a Senate filibuster three times in a row, with the third failure occurring in 2016 (Tumulty, 2016). Yet despite the Senate’s rejection of the MJIA, it has been supported by multiple experts and prestigious organizations (“Comprehensive Resource Center,” 2015). Some researchers have stated that the MJIA is not needed because the military has fixed many of the problems that formerly led to sexual assault. Stimson (2014) stated that there is no “evidentiary basis at this time supporting a conclusion that removing senior commanders as convening authorities will reduce the incidence of sexual assault or increase sexual assault reporting” (para. 52). He added that “numerous channels [exist] outside the chain of command to report incidents of sexual assault” and that “[u]nder current law and practice, sexual assault allegations must be referred to, and investigated by, military criminal investigative organizations that are independent of the chain of command” (Stimson, 2014, para. 52). It is estimated that between 2012 and 2014 there was a 27% decrease in sexual assault (Holmes, 2015). Yet Senator Kirsten Gillibrand has stated that the current system is not enough and that the rate of sexual assault has not decreased since 2012. In 2015, she estimated that 52 sexual assaults were occurring per day in the US military and that 75% of them were not reported (“Comprehensive Resource Center,” 2015). According to Christensen, Petersen, and Tsliker (2016), the Pentagon has claimed that the current system is successful at bringing perpetrators to justice because commanders are beginning to regularly insist that cases be prosecuted. However, they reached a different conclusion:

[T]he Pentagon was unable to provide a single example of a commander “insisting” a case be prosecuted. Instead, in every case for which such information was provided, either military investigators or military attorneys were the ones to request jurisdiction over the case. Crucially, the military did not identify a single case where a commander sent a case to trial after a military prosecutor refused to prosecute. The facts behind the Pentagon’s claims reveal the great lengths they went to in order to distort the data to counter momentum and prevent reform. (Christensen, Petersen, & Tsliker, 2016, p. 2)

If commanders were not prosecuting cases as the military claimed, Christensen, Petersen, and Tsliker (2016) suggested that only a reform like the MJIA would truly reduce sexual assault rates by giving military members a justice system they can rely on. However, other experts doubt whether the act would effectively reduce military sexual assault. Navy Admiral James Winnefeld, who formerly served as the vice chairman of the Joint Chiefs of Staff, argued that fewer sexual assault cases would be brought to trial if Gillibrand’s bill were enacted (Tumulty, 2016). Hanson (2015) stated that the reforms would be unsuccessful because military courts are not supposed to operate like civilian courts. While he agreed that sexual assault must be stopped, he stated that justice will be ensured when military commanders, those who know the unit best, are allowed to decide whether a case goes to trial. He concluded that commanders are not nearly as biased as many would believe (Hanson, 2015). Stimson (2014) commented that

[R]emoving authority to convene courts-martial from senior commanders will: Not reduce the incidence of sexual assault or increase reporting of sexual assaults in the armed forces, … [n]ot increase confidence among victims of sexual assault about the fairness of the military justice system, and [n]ot reduce victim’s concerns about possible reprisals for making reports of sexual assault. (para. 51)

Opponents of the MJIA have stated that the bill would actually leave more victims behind. Senator McCaskill cited 93 cases in which JAG prosecutors decided not to pursue charges yet military commanders insisted that the cases be prosecuted (“McCaskill Proposal,” 2013). Supposedly, these 93 alleged victims would never have been given justice if prosecutorial discretion had been taken outside the chain of command. However, Christensen (2015) countered that, while researchers requested the data and files surrounding these 93 cases, they were never provided. He concluded that this argument was little more than speculation. Experts disagree about whether the MJIA would improve the current system. However, since the bill has not been passed, these opinions are largely conjecture. To minimize speculation, researchers have begun to study similar legislative acts in countries that are US allies. Notably, the United Kingdom, Canada, Germany, Israel, and Australia have implemented reforms in their militaries similar to the MJIA (Ahmad et al., 2013). Both sides point to precedent in an attempt to demonstrate the success or failure that the act might have. To help Congress determine whether there is a precedent of successful reforms similar to the MJIA, multiple foreign officials testified before the Role of the Commander Subcommittee. The UK, Canada, and Australia have chosen to try all felonies in criminal courts, and this appears to have increased the transparency of criminal trials while allowing commanders to maintain military discipline (Joyner & Weirick, 2015). The British military saw a drastic increase in sexual assault reporting after it passed a reform similar to the MJIA. In fact, only a single year after the policy was implemented, the British military saw the number of reports of female sexual assault increase by a factor of six (Ahmad et al., 2013). The more cases that are successfully reported, the more perpetrators will be brought to justice.
The Advocate General of the Canadian Armed Forces also testified:

The 1999 changes to the military justice system were battle tested in the theater of active operations and, in my view were a key contributor to the combat effectiveness of the Canadian armed forces. The current military justice system contributed substantially to the fielding and sustainment of a disciplined and efficient force with high morale. (Cathcart, Noonan, Cronan, & Spence, n.d., para. 1)

Initial reports on precedent suggest that sexual assault is effectively combatted through reforms similar to the MJIA. However, multiple observers believe that precedent demonstrates that the Act will eventually fail. Hanson (2015) noted that when Canada and other countries passed reforms similar to the MJIA, their intent was not to reduce sexual assault. Furthermore, some evidence suggests that these reforms only reduced justice. Stimson (2013) observed:



Some proponents of the removal of command authority have identified as “success” stories similar policies in Canada, New Zealand, Australia, and the United Kingdom and urge the United States to follow suit. But these countries’ removal of prosecutions from the chain of command can hardly be touted as a success for victims. In fact, most of our allies reported that removing the authority to prosecute from the chain of command has slowed prosecutions, and they saw no increase in the number of convictions under the new system. (para. 9)

Furthermore, the Role of the Commander Subcommittee concluded that there was no evidence of change in the levels of reporting for sexual assault cases (Jones, 2013). Much research has investigated increases or decreases in prosecution rates, reporting, and investigation of sexual assault cases. However, to this point, most research has been conducted by experts gathering data to lobby for one side or the other. This study works to objectively analyze the effectiveness of MJIA-type legislation by examining records created without any consideration of the MJIA or its passage.

Data and Methods

This study performs secondary qualitative and quantitative analysis of legislation similar to the Military Justice Improvement Act to determine whether the MJIA would increase reporting rates for sexual assault cases. The study examines precedent from similar action taken by the Canadian military in the form of a case study. It also compares the reporting rates of sexual assault in the United States and Canadian militaries to determine whether sexual assault reporting rates increased due to the implementation of MJIA-type legislation.

The Canadian military was selected as a case study for several reasons. First, Canada implemented reforms similar to the MJIA relatively recently (Ahmad et al., 2013). This allows more reliable application of the study than would a case study of nations such as Israel, which implemented legislation similar to the MJIA in 1955 (Ahmad et al., 2013). Second, reforms similar to the MJIA are still in place in Canada. This allows a more accurate application of the case study than instances such as Australia, which abolished its legislation similar to the MJIA following a 2009 High Court of Australia case (Ahmad et al., 2013). Third, the Canadian version of the MJIA fits the prescribed definition of a bill similar to the MJIA. This makes it more applicable to the American system than nations such as Great Britain, which gave prosecutorial discretion for sexual



assault cases to civilian attorneys and not just military attorneys (Ahmad et al., 2013).

The independent variable is the implementation of reforms similar to the MJIA. Based on his testimony, the Advocate General of the Canadian Armed Forces appears to consider 1999 the first year that the Canadian military officially operated under an MJIA-type system (Cathcart et al., n.d.). For the purposes of this study, 1999 will be considered the year that legislation similar to the MJIA was implemented and enforced.

The dependent variable in this study is the rate of reporting of sexual assaults. The more sexual assaults that are reported, the more likely it is that perpetrators will be brought to justice. When analyzing Canadian precedent, the level of sexual assault reporting will be represented as a rate comparing the number of reported sexual assaults with the overall size of the military during the year the assaults were reported. The rate will be expressed in units of reported assaults per 1,000 soldiers. Analyzing reporting rates in light of the size of the military leads to a fairer analysis. For example, sexual assault reporting could increase by two cases, yet, if the military doubled in size, it would be inaccurate to say that reporting increased relative to previous years. As a result, a rate system is more effective for analyzing reporting.

If the success of legislation similar to the MJIA is judged on an increase in reporting, it could be argued that the study is inaccurately measuring success: an increase in reporting could be due to an increase in sexual assault rates rather than effective legislation. In order to account for this intervening variable, this study also examines overall sexual assault rates in the Canadian military. By examining the overall rate in comparison to the reporting rate, the intervening variable should be accounted for and the results should be valid.
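The size adjustment described above can be sketched in a few lines of Python. The numbers here are invented purely for illustration and are not drawn from the data analyzed in this study:

```python
# Toy illustration (hypothetical numbers): raw report counts can rise
# even while the reporting *rate* falls, if the force grows faster.

def rate_per_1000(reports: int, troops: int) -> float:
    """Reported assaults per 1,000 troops."""
    return reports / troops * 1000

year1 = rate_per_1000(100, 100_000)  # 1.0 reports per 1,000 troops
year2 = rate_per_1000(102, 200_000)  # 0.51 per 1,000: the rate fell
                                     # despite two additional reports
```

This is why the study normalizes report counts by force size rather than comparing raw totals across years.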
This study also compares the modern Canadian military and the United States military to determine whether sexual assault reporting rates are better or worse in Canada because of the bill. This study analyzes Canada’s sexual assault reporting rates from the years 1996-1998 and 2013-2015. Analyzing the reporting rates three years prior to the implementation of reforms provides a pre-test, while data from 2013-2015 serves as a post-test and gives an idea of the bill’s long-term effects.

Research

Canada: Background

Canada spent years implementing legislation similar to the MJIA. The process included a case that went before the Canadian Supreme Court as well as the passage of multiple laws.



Canada began developing its current system in the 1990s with R. v. Généreux, a case that went before the Canadian Supreme Court. The defendant, Généreux, was tried before a military court and was convicted of possession of illegal drugs, trafficking of illegal drugs, and being absent without leave (R. v. Généreux, 1992). Généreux was sentenced to fifteen months in prison and was dishonorably discharged from the Canadian army. After he appealed through the Canadian federal courts, the case reached the Supreme Court. The Canadian Supreme Court made multiple changes to the procedures of Canadian military courts. Among these changes were multiple invalidations of prior statutes that outlined a commander’s role in military justice (Hanson, 2013). As a result of this ruling, Canada rewrote many portions of its code related to military justice. Hanson (2013) summarized the most important portion of this rewriting:

Changes following Généreux altered the traditional role of the military commander so that commanders could no longer conduct a summary action on a case which they have personally investigated. While a commander still has the authority to bring charges, the military police also has independent authority to investigate serious and sensitive cases, and it too can bring charges independent of the military commander. (p. 237)

R. v. Généreux marked the first time that the Canadian military justice system allowed individuals outside the chain of command to have prosecutorial discretion over select cases. Commanders still had authority to proceed with a case, but they no longer had authority to dismiss a case if they had investigated it and military police believed it needed to be prosecuted (Hanson, 2013).
Bill C-25, which was implemented in December of 1998, further reduced the role of commanders in the military justice system and “institutionally separate[d] the functions and responsibilities of the main actors in the military justice system” (Ahmad et al., 2013, p. 21). The bill solidified multiple regulations governing how cases are prosecuted in the Canadian military justice system (“Bill C-25,” 1998). As previously mentioned, many consider 1999 to be the first year that sexual assault cases were handled in a way similar to the proposed reforms of the MJIA (Cathcart et al., n.d.). Finally, it is important to note that these reforms were not made with the intention of preventing sexual assault in the Canadian military. Rather, they addressed criminal offenses in the Canadian military more broadly. Since sexual assault is classified as a criminal offense, these legislative actions and judicial rulings affected sexual assault cases (Hanson, 2015).



Canada: Statistics Before and After Implementation of Legislation Similar to the MJIA

The following section documents how the statistics in this study were gathered and computed, as well as the statistics themselves. Statistics were primarily computed from yearly records of sexual assault reports. This was done to ensure objectivity: since the data was not originally collected with the MJIA in mind, it cannot be skewed by bias related to the MJIA.

First, rates of reporting for military sexual assault from 1996-1998 were calculated. These rates represent the time period before reforms similar to the MJIA were fully implemented in the Canadian military. There were 145 reported sexual assaults in 1996, 140 in 1997, and 200 in 1998 (Canadian Forces Provost Marshal, 2000). The Canadian military had roughly 97,000 troops in 1996, 98,000 in 1997, and 96,000 in 1998 (Park, 2008). This yields a sexual assault reporting rate of 1.495 reports per 1,000 troops in 1996, 1.429 in 1997, and 2.083 in 1998.

Next, the rates of reported sexual assaults were calculated for the post-reform environment in Canadian military justice. There were 72 cases of reported sexual assault in 2013, 101 in 2014, and 130 in 2015 (Canadian Forces Provost Marshal, 2016). During this period, the Canadian military had 92,209 troops in 2013, 90,973 troops in 2014, and 87,837 troops in 2015 (Department of National Defence and the Canadian Armed Forces, 2013; Department of National Defence, 2014; Department of National Defence and the Canadian Armed Forces, 2015). This yields a sexual assault reporting rate of 0.781 reports per 1,000 troops in 2013, 1.110 in 2014, and 1.480 in 2015.
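The reporting rates above follow directly from the cited counts and force sizes. A short Python sketch of the computation (the figures are those reported in the sources cited in the text):

```python
# Report counts and force sizes cited above (Canadian Forces Provost
# Marshal, 2000, 2016; Park, 2008; Department of National Defence,
# 2013-2015).
reports = {1996: 145, 1997: 140, 1998: 200,
           2013: 72, 2014: 101, 2015: 130}
troops = {1996: 97_000, 1997: 98_000, 1998: 96_000,
          2013: 92_209, 2014: 90_973, 2015: 87_837}

# Reports per 1,000 troops, rounded to three decimal places.
rates = {year: round(reports[year] / troops[year] * 1000, 3)
         for year in reports}
```

Evaluating the dictionary reproduces the rates quoted in the text, e.g. 1.495 for 1996 and 0.781 for 2013.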
The rate of sexual assault reporting in these two eras of Canadian military justice is represented in Graph A. The graph demonstrates that the rate of reporting for sexual assault was actually higher prior to the implementation of MJIA-type legislation. While the rate of sexual assault reporting has been increasing in recent years, it remains generally lower than the rate seen prior to the passage of legislation similar to the MJIA. These are only initial results, however. As noted earlier, the intervening variable of a decrease in the rate of sexual assault must be accounted for. In order to conclude that legislation similar to the MJIA is not effective at increasing reporting rates of sexual assault, it must be shown that reporting did not occur at a lower rate simply because there were fewer sexual assaults.



Graph A: Rate of Sexual Assault Reports Per 1,000 Troops

Accounting For the Intervening Variable

In order to account for the intervening variable of a change in the underlying rate of sexual assault, data was gathered about the overall rate of sexual assault in the Canadian military. Recent research demonstrates that sexual assault is a severe problem. In the current Canadian military, an estimated 27.3% of women and 3.8% of men have been sexually assaulted at some point during their military career (Cotter, 2016). Cotter (2016) further commented, “Four in five (79%) members of the Regular Force saw, heard, or were personally targeted by sexualized behaviour in the military workplace or involving other military members, Department of National Defence employees, or contractors, within the past 12 months” (Highlights section 2, para. 1). Furthermore, in a single year, 31% of women and 15% of men reported being targeted by sexualized or discriminatory behavior (Cotter, 2016).

Unfortunately, sexual assault in the Canadian military has only recently been researched. Prior to Cotter’s 2016 report, almost no statistics existed on the overall rate of sexual assault in the Canadian military. As a result, it is difficult to verify that the decrease in reporting was due to a lower rate of sexual assault. However, given the reasons the report was initiated in the first place, it appears unlikely that sexual assault was worse prior to the implementation of legislation similar to the MJIA. Cotter’s 2016 report was commissioned because military members had expressed concern that sexual assault was becoming a rampant problem in the Canadian military. Ultimately, the Canadian government was forced to take action following a 2015 qualitative study



that was conducted by Marie Deschamps. The report found that the military culture was hostile to women and LGBTQ individuals and conducive to serious cases of sexual assault (Austen, 2016). Despite the passage of legislation similar to the MJIA, it became clear that sexual assault was incredibly common. Deschamps (2015) documented:

One of the key findings of the External Review Authority (the ERA) is that there is an underlying sexualized culture in the CAF that is hostile to women and LGTBQ members, and conducive to more serious incidents of sexual harassment and assault...The ERA found a disjunction, however, between the high professional standards established by the CAF’s policies on inappropriate sexual conduct, including sexual assault and sexual harassment, and the reality experienced by many members day-to-day...Some participants further reported instances of sexual assault, including instances of dubious relationships between lower rank women and higher rank men, and date rape. At the most serious extreme, these reports of sexual violence highlighted the use of sex to enforce power relationships and to punish and ostracize a member of a unit. (para. 2)

While there are no statistics to compare current rates of sexual assault with the rates before MJIA-type legislation was passed, it is important to note that no similar studies were commissioned by the Canadian government until 2016. Furthermore, if sexual assault has indeed become worse since the 1990s, the decrease in reporting rates would not be due to the supposed intervening variable. Even if sexual assault has decreased and was simply poorly documented in the 1990s, the share of sexual assaults reported today remains small: Cotter (2016) estimated that only 23% of sexual assaults that occur in the Canadian military are reported in any form.
Additionally, based on his survey, Cotter also estimated that only 7% of Canadian soldiers who were sexually assaulted reported the assault to individuals in the Canadian military justice system. Despite the implementation of MJIA-type legislation, sexual assault remains severely underreported. Even if today’s sexual assault epidemic is less severe than that of the 1990s, the decline in reporting rates means the improvement cannot be attributed to reforms similar to the MJIA.

A Comparison of the Canadian Military and the United States Military

Even with MJIA-type legislation, the culture surrounding sexual assault is worse in the Canadian military than in the United States military. Previously, this study determined that the primary way to measure the success of MJIA-type legislation



is reporting rates. Legislation similar to the MJIA works to achieve justice in more cases by increasing reporting of sexual assault. The Department of Defense (2016) estimated that 75% of men and women in the US military who are sexually assaulted choose not to report the assault. Similarly, in the Canadian military, 77% of sexual assaults are not reported (Cotter, 2016). Many soldiers in the United States military have said that they choose not to report out of fear of repercussions (“Comprehensive Resource Center,” 2015). Surprisingly, even though the Canadian military has implemented legislation similar to the MJIA, many Canadian soldiers who are sexually assaulted express the same fear. Deschamps (2015) wrote:

It was readily apparent throughout the consultations that a large percentage of incidents of sexual harassment and sexual assault are not reported. First and foremost, interviewees stated that fear of negative repercussions for career progression, including being removed from the unit, is one of the most important reasons why members do not report such incidents. Victims expressed concern about not being believed, being stigmatized as weak, labeled as a trouble-maker, subjected to retaliation by peers and supervisors, or diagnosed as unfit for work. There is also a strong perception that the complaint process lacks confidentiality. Underlying all of these concerns is a deep mistrust that the chain of command will take such complaints seriously. (para. 12)

Even with the implementation of legislation similar to the MJIA, victims of sexual assault still do not report for fear of negative repercussions and for lack of confidence that the Canadian chain of command will do anything to fix the problem. These concerns persist despite the fact that prosecutors in the Canadian military have full discretion to determine whether or not a sexual assault case goes to trial.
Furthermore, the Canadian military has higher rates of sexual harassment than the United States military. In the US, an estimated 22% of women and 7% of men in the military were sexually harassed in a single year (Morral et al., 2015). In the Canadian military, 31% of women and 15% of men reported being targeted by sexualized or discriminatory behavior on the basis of their gender or sexual orientation (Cotter, 2016). Additionally, overall rates of sexual assault are similar in both militaries even though one has implemented MJIA-type legislation and the other has not. In a single year in the United States military, an estimated 1.0% of men and 4.9% of women were sexually assaulted. Likewise, in the Canadian military, an estimated 4.8% of women and 1.2% of men were sexually assaulted in a single year. Graph B provides a



comparison of these statistics. For reference, SH refers to “Sexual Harassment” while SA refers to “Sexual Assault.”

Graph B: Sexual Assault in United States and Canadian Militaries

Even though Canada has implemented legislation similar to the MJIA, its rates of sexual assault and related problems are similar to those of the United States. It cannot be concluded that sexual assault rates and reporting rates are better in the Canadian military than in the United States military.

Conclusion

The fact that sexual assault is rampant in the United States military is morally sickening and tactically dangerous to the strength of our military. Sexual assault must be addressed if we want our troops to successfully defend this nation. However, based on the research conducted, it is clear that the hypothesis was not supported by the case study of the Canadian military. Despite the probable increase of sexual assault in the Canadian military, reporting rates for sexual assault have actually decreased. If reporting rates decreased, then the MJIA-type legislation that Canada implemented failed with respect to sexual assault. Furthermore, a comparison of the Canadian military and the United States military shows that problems with reporting were prominent regardless of who was given prosecutorial discretion. Canadian military members were still afraid to report because of fear of backlash, lack of confidence in the system, and lack of confidence in the chain of command. This suggests that the remedy to the current problem of sexual assault cannot come simply from transferring prosecutorial discretion to a



JAG prosecutor. Additionally, more research needs to be conducted on the topic. In no way does this study claim to be completely comprehensive or the final say on the MJIA. However, at least in the case of Canada, MJIA-style reforms cannot be touted as a successful method of increasing reporting rates.

To be sure, the military needs to find more effective ways to combat its rampant sexual assault problem. Victims need justice. If incidents like Mike’s story continue in the United States military, our troops will not be able to operate effectively as a unit. Deschamps (2015) made two policy recommendations in her report for the Canadian military that could be applicable to the United States military as well:

First, cultural change is key. Without broad-scale cultural reform, policy change is unlikely to be effective. This requires the CAF to address not only more serious incidents of sexual harassment and assault, but also low-level sexual harassment, such as the use of sexualized and demeaning language, which contributes to an environment that is hostile to women and LGTBQ members. Second, strong leadership drives reform. The deep, genuine, and concrete commitment of senior leaders is essential to developing programs that will meaningfully impact the organization, as well as to convey a clear message to CAF members that inappropriate sexual conduct will not be tolerated, and to rebuild trust between CAF members and senior leadership. (para. 34)

While United States military leadership has implemented reforms such as these in order to prevent sexual assault, these policy recommendations must continue to be implemented and expanded. United States military members put their lives on the line daily to keep this nation free. If a young person joins the military, the last thing they should have to worry about is sexual harassment or assault.
Given that the MJIA may prove ineffective, researchers and the United States government ought to continue searching for realistic but effective solutions to prevent sexual assault from occurring.



Reference List

Ahmad, T., Buchanan, K., Palmer, E., Levush, R., & Feikert-Ahalt, C. (2013). Military justice: Adjudication of sexual offenses. Law Library of Congress. Retrieved from Comment_Unrelated/01-Jul-13/ZZ_B_AmyZiering_ReportForCongress_MJ_AdjudSexOffenses_201307.pdf

Austen, I. (2016, November 28). Women in the Canadian military report widespread sexual assault. New York Times. Retrieved from https://www.

Bill C-25: An Act to amend the National Defence Act and to make consequential amendments to other Acts. (1998). 1st Reading Dec. 4, 1997, 36th Parliament, 1st Session. Retrieved from LegislativeSummaries/bills_ls.asp?ls=C25&Parl=36&Ses=1

Canadian Forces Provost Marshal. (2000). Canadian Forces Provost Marshal report: Annual report 2000. Retrieved from collections/Collection/D3-13-2000E.pdf

Canadian Forces Provost Marshal. (2016). Canadian Forces Provost Marshal report - Fiscal year 2015-2016. Retrieved from about-reports-pubs-cfpm-annual-reports/

Cathcart, B., Noonan, S., Cronan, P., & Spence, A. (n.d.). Allied force commanders testified that the MJIA will not disrupt “good order and discipline.”

Christensen, D. (2015, June 6). The Military Justice Improvement Act ensures justice, despite what its critics say. Huffington Post.

Christensen, D., Petersen, M., & Tsliker, Y. (2016). Debunked: Fact checking the Pentagon’s claims regarding military justice. Protect Our Defenders. Retrieved from POD_Debunked_Report.pdf



Comprehensive resource center for the Military Justice Improvement Act. (2015).

Cotter, A. (2016). Sexual misconduct in the Canadian armed forces 2016. Statistics Canada.

Department of Defense. (2016). Annual report on sexual assault in the military fiscal year 2015. Retrieved from Annual/FY15_Annual_Report_on_Sexual_Assault_in_the_Military.pdf

Department of Defense. (2015). Department of Defense annual report on sexual assault in the military. Retrieved from reports/FY14_Annual/DoD_FY14_Annual_Report_on_Sexual_Assault_in_the_Military.pdf

Department of National Defence and the Canadian Armed Forces. (2013). Departmental performance report 2012-2013. Retrieved from http://www.

Department of National Defence. (2014). Departmental performance report 2013-14.

Department of National Defence and the Canadian Armed Forces. (2015). 2014-15 departmental performance report. Retrieved from assets/FORCES_Internet/docs/en/dnd-dpr-2014-2015.pdf

Deschamps, M. (2015). External review into sexual misconduct and sexual harassment in the Canadian Armed Forces. National Defence and the Canadian Armed Forces.

Embattled: Retaliation against sexual assault survivors in the US military. (2015). Human Rights Watch. Retrieved from report/2015/05/18/embattled/retaliation-against-sexual-assault-survivors-us-military



Hanson, V. (2013). The impact of military justice reforms on the law of armed conflict: How to avoid unintended consequences. Michigan State International Law Review, 21(2), 229-272. Retrieved from http://

Hanson, V. (2015). Introduction to Discipline, justice, and command in the U.S. military: Maximizing strengths and minimizing weaknesses in a special society. New England Law Review, 50(1), 13-19.

Holmes, S. (2015, May 1). Sharp decrease of sexual assault in study, military finds. CNN. Retrieved from military-sexual-assault-report/

Jones, B. (2013). Review of allied military justice systems and reporting trends for sexual assault crimes [Memorandum]. Retrieved from http://www.sapr.mil/public/docs/research/RSP%20Finding%20-%20Initial_Assessment_ROC_20131106.pdf

Joyner, J., & Weirick, J. W. (2015, October 7). Sexual assault in the military and the unlawful command influence catch-22. War on the Rocks.

McCaskill proposal. (2013). Response Systems to Adult Sexual Assault Crimes Panel. Retrieved from meetings/Sub_Committee/20131023_ROC/05_RoC_McCaskill_Proposal.pdf

Mike’s Story. (n.d.). Protect Our Defenders. Retrieved from http://www.

Morral, A., Gore, K., Schell, T., Bicksler, B., Farris, C., Dastidar, M., . . . Williams, K. (2015). Sexual assault and sexual harassment in the U.S. military. RAND.

Nance, P., & Barno, A. (2013, December 3). Sexual assault and the chain of command. The Hill. Retrieved from judicial/191775-sexual-assault-and-chain-of-command

Park, J. (2008). A profile of the Canadian Forces. Statistics Canada.



R. v. Généreux. (1992). Supreme Court of Canada. Retrieved from https://scc-csc.

Stimson, C. (2013). Sexual assault in the military: Understanding the problem and how to fix it. Heritage Foundation. Retrieved from http://www.heritage.org/defense/report/sexual-assault-the-military-understanding-the-problem-and-how-fix-it

Stimson, C. (2014). Military sexual assault reform: Real change takes time. Heritage Foundation. Retrieved from report/military-sexual-assault-reform-real-change-takes-time?_ga=1.53958376.1792376637.1489954829

Tumulty, B. (2016, June 14). Senate vote blocked on Gillibrand’s military sexual assault proposal. Lohud. Retrieved from news/2016/06/14/senate-vote-blocked-gillibrands-military-sex-assault-proposal/85869282/



INSTITUTIONS AND ENVIRONMENT: THE ENVIRONMENTAL KUZNETS CURVE, PROSPERITY, AND CIVIL INSTITUTIONS

Johanna Christophel

Abstract

The Environmental Kuznets Curve (EKC), first hypothesized in the early 1990s, theorizes that as a developing nation’s GDP increases, pollution will peak and then fall. Essentially, domestic wealth enables the development of environmental protection. However, as China’s GDP surges ever higher and the nation solidifies its place as a global economic powerhouse, its environmental practices continue to worsen. This anomalous data point, China’s lack of internal environmentalism, demands a nuanced approach to the EKC. This study proposes and tests the hypothesis that good environmental policy requires a combination of economic prosperity, economic freedom, and stable, free civic institutions. The hypothesis is tested primarily through a comparative analysis of China and the United States to determine whether economic prosperity is the sole determinant of environmental protection. The disparity between Chinese and American air quality indicates that economic and civic freedoms may be necessary elements of environmental protection policies. However, similar trends between American and Chinese water quality offer researchers an additional data point.

___________________________________



Introduction

In 2010, the United Nations Environment Programme issued a report warning that current environmental degradation could lead to an eighteen percent reduction in overall global economic output by the year 2050 (“Universal Ownership,” 2010). Current global environmental policy appears insufficient in light of such dire predictions. With this in mind, it is imperative that policymakers begin to understand what conditions are necessary to foster the development of environmentally sustainable government policies and business practices. What can be done to achieve environmental sustainability within both developed and developing nations? How can environmental concern be triggered at the grassroots level rather than relying on states and lawmakers to push top-down environmental policies?

In the early 1990s, Gene Grossman and Alan Krueger (1995) proposed the Environmental Kuznets Curve (EKC) theory as a new application of the original Kuznets Curve theory. The original Kuznets Curve theory, created in the mid-twentieth century, hypothesized that as a nation’s overall per capita income increases, income inequality gradually shrinks. In a similar manner, Grossman and Krueger speculated that environmental protection policies are directly linked to industrialization: as a nation’s per capita income increases to $8,000, the overall frequency of both air and water pollutants decreases. The EKC has proven to be an accurate descriptive theory for many western countries post-industrialization. However, today, policymakers, economists, and environmentalists face an anomalous data point: the economic rise of the People’s Republic of China. When adjusted for inflation, China has far surpassed the per capita income level that Grossman and Krueger originally hypothesized would trigger environmental protection policies. Nonetheless, China has failed to institute substantial environmental protection laws or practices.
Meanwhile, life-threatening smog continues to envelop Beijing, and China’s polluted rivers are drying up (Berlinger, George, & Wang, 2017). This reality raises the question: Are there factors beyond mere industrial capacity and economic prosperity that are necessary for the development of national environmental sustainability? This study seeks to answer the question: Does the Environmental Kuznets Curve hypothesis hold in countries with relatively low economic and civic freedom? This study hypothesizes that economic prosperity is a necessary but insufficient predictor of environmental policies. Further, this study tests the hypothesis that two additional variables, civic and economic freedom, are necessary trigger mechanisms for environmental protection policies. If the hypothesis is correct, environmental protection policy should begin somewhere at the intersection of economic prosperity, economic freedom, and civic liberty.

40 • Fall 2017


Likewise, if the hypothesis is true, wealthy countries with relatively low economic and civic freedom should not reflect the EKC’s predictions, while wealthy countries with relatively high economic and civic freedom should follow the EKC. This will be the case in spite of both countries having met requisite per capita income standards. In order to evaluate the hypothesis, this study examines the case studies of China and the United States. It considers various metrics for evaluating relative economic and civic freedoms in the United States and China. Further, it analyzes secondary data from longitudinal studies of the air and water quality of the two nations. This longitudinal data enables an evaluation of the increase or decrease in environmental quality, relative to changes in economic prosperity.

Literature Review

Much ink has been spilled over the relationship between industrialization and environmental preservation. Grossman and Krueger (1995) proposed the Environmental Kuznets Curve (EKC), which posited that a country’s economic industrialization causes a highly-polluting phase that is followed by a reduction in environmental degradation. As per capita income approaches $8,000 in 1995 dollars, air pollution levels gradually fall. As Harbaugh, Levinson, and Wilson (2002) noted, the EKC has two important policy implications. First, it predicts that developing countries will automatically become cleaner as their economies grow. Second, it asserts that it is natural for countries in the developing world to become more polluted before they improve. Pollution should follow an inverted U-curve: environmental degradation rises as national wealth increases, then falls after a country has achieved an annual per capita income of $8,000 in 1995 dollars (Harbaugh et al., 2002).

In contrast, Anderson and Cavendish (2001) discovered that there may be a two-way incentive structure driving the EKC. While rising per capita GDP causes citizens to shift from a survival mentality to a quality-of-life mentality, it is environmental regulations that stimulate the technical innovation that enables the reduction of pollution. Essentially, regulations are sometimes necessary prerequisites to the technology needed to improve air and water quality. Anderson and Cavendish further noted that technological innovations created by the developed world may now enable developing countries to reduce pollution at an earlier point in development than states before them. Existing technology may allow the EKC to be shifted to the left, as national environments begin to improve at a lower per capita GDP than Grossman and Krueger believed was necessary. While China reports annual GDP increases of 8% to 9%, the World Bank (n.d.)
found that China is nonetheless the world’s largest source of sulfur dioxide (SO2), a



toxic air pollutant. Examining China’s one-party system, Zheng and Kahn (2013) identified a number of factors that play a determinative role in Chinese environmental policy. They hypothesized that middle class demand for quality of life is a more accurate predictor of the rise of environmentally-protective policies than per capita GDP. Zheng and Kahn’s study discovered that the Chinese electorate must demand information transparency and sustainability from local politicians before China will improve environmentally. Zheng and Kahn’s findings corroborated a previous study by Liu (2008) demonstrating that Chinese eco-communities enabled environmental improvement to coexist with economic growth in early stages of industrial development. These targeted eco-communities are concentrated in Eastern China and are specifically government-sponsored. Liu, Zheng, and Kahn’s recent studies fulfilled predictions by Dasgupta, Laplante, Wang, and Wheeler (2002) that institutional capacity is a more determinative indicator of environmental quality than economic benefit. Such findings have also proven true in the United States. As List and Sturm (2006) discovered, single-issue voters in the United States have made electoral incentives one of the most important determinants of environmental policy.
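The inverted-U relationship at the heart of the EKC literature is conventionally estimated as a polynomial in income. The sketch below uses invented coefficients, not Grossman and Krueger's estimates, chosen so that pollution peaks near the $8,000 benchmark discussed above:

```python
# Illustrative reduced-form EKC: pollution as a quadratic in per capita income.
# The coefficients are invented for demonstration; the published specifications
# are richer (cubic terms, lags, site covariates).
def ekc_pollution(income, b0=10.0, b1=4.0, b2=-0.00025):
    """Predicted pollution level at a given per capita income."""
    return b0 + b1 * income + b2 * income ** 2

def turning_point(b1=4.0, b2=-0.00025):
    # Pollution peaks where the derivative b1 + 2*b2*income equals zero.
    return -b1 / (2 * b2)

peak = turning_point()  # about 8000 with these illustrative coefficients
assert ekc_pollution(peak - 100) < ekc_pollution(peak) > ekc_pollution(peak + 100)
```

With a negative quadratic coefficient, pollution rises with income up to the turning point and falls beyond it, which is the shape the EKC hypothesis predicts.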

Data and Methods

This study employs the definition of economic prosperity used in the original Environmental Kuznets Curve study. Grossman and Krueger (1995) found that environmental pollutants generally reached a turning point prior to a per capita income of $8,000. Adjusted for inflation to 2017 dollars, societies should enact substantive environmental policies that significantly improve environmental quality before reaching an annual per capita income of $12,599 (“CPI Inflation Calculator,” n.d.). For ease of use, the figure used as a metric of economic prosperity in this study is rounded to $12,600.

Annual per capita income is measured, compared, and contextualized through the lens of Gross Domestic Product at Purchasing Power Parity (PPP). PPP offers a better assessment of overall quality of life than a mere comparison of raw annual per capita GDP provides. PPP measures per capita GDP based on the rate at which the currency of one country would have to be converted into that of another country to buy the same amount of goods and services (Callen, n.d.). This statistical methodology gives researchers better insight into the comparative day-to-day lives of individuals in two or more separate nations. Given the EKC’s proposed relationship between individual prosperity and environmental quality, this measurement is necessary to more precisely evaluate quality of life on an individual scale. Longitudinal PPP data is drawn from the World Bank’s yearly analysis and contextualization of global per capita income.
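The inflation adjustment behind the $12,600 benchmark can be sketched as follows. The price ratio here is backed out of the study's own figures ($12,599 / $8,000) rather than taken from the underlying BLS series, so the index values are placeholders:

```python
# Sketch of restating Grossman and Krueger's $8,000 (1995 dollars) turning
# point in 2017 dollars. The CPI values are placeholders scaled to reproduce
# the study's reported figure, not the actual BLS index levels.
def adjust_for_inflation(amount, cpi_base, cpi_target):
    """Scale a dollar amount from a base year to a target year by the CPI ratio."""
    return amount * cpi_target / cpi_base

implied_ratio = 12599 / 8000  # ~1.575, the 1995 -> 2017 price growth implied above
benchmark = adjust_for_inflation(8000, cpi_base=100.0, cpi_target=100.0 * implied_ratio)
print(round(benchmark, -2))   # 12600.0 -- the rounded figure the study uses
```

The same formula applies to any base and target year once the two index values are known.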



This study utilizes the Fraser Institute’s definition of economic freedom, drawn from its Economic Freedom of the World Index. Economic freedom exists when markets are coordinated by personal choice, voluntary exchange, and clearly-defined and enforced property rights (Gwartney, Lawson, & Hall, 2016). This study draws from Gwartney, Lawson, and Hall’s surveys to provide statistics for relative economic freedom in China and the United States. Likewise, this study measures stable, free civic institutions by considering data generated by the annual Freedom House Freedom in the World Index. Information about the relative health and freedom of civic institutions is taken from Freedom House’s 2017 Freedom in the World Index. Taken in combination, the relative strength of each of these factors will demonstrate the stability and freedom of political institutions in China and the United States.

When considering the development of environmental policy in China and the United States, this study analyzes air and water quality data over time. Grossman and Krueger (1995) employed two metrics in their original EKC study: air quality and water quality. For the purposes of this study, air quality is measured using a wide variety of longitudinal secondary data compiled by World Bank environmental quality assessment studies. Air quality variables include the relative presence of PM2.5, an industrially-generated air pollutant, and carbon dioxide emissions.

Up-to-date, longitudinal ground, surface, and drinking water quality data is readily available for the United States from a variety of domestic and international sources. This study employs longitudinal data compiled by Lindsey and Rupert (2012) about groundwater quality generated by the United States Geological Survey from 1988 to 2010. This data provides information about chloride, dissolved solids, and nitrates in American groundwater. Unfortunately, similar data is not available for China.
China’s state environmental agencies have released virtually no water quality data since the 1990s (Hsu, Yan, & Cheng, 2017). Perhaps due to overall environmental degradation, the Chinese government has withheld water quality data from the public, leaving researchers with massive gaps in the statistics. The World Bank maintains a limited amount of Chinese river water quality data from the 1990s, and this dataset is employed in this paper as a possible indicator of trends leading into the 2000s. However, it is by no means intended to be indicative of current realities in the country. American data is provided from an overlapping time period in order to allow some amount of contextualized comparison.

Research

Each year, the World Bank releases contextualized per capita GDP statistics for all of the countries of the world. In the most recent data cycle, the World Bank



estimated that the PPP of China was $14,450.17 per capita (“GDP Per Capita,” n.d.). In substantial contrast, the United States’ estimated PPP was $56,115.72 in the same timeframe (“GDP Per Capita,” n.d.). Based on World Bank data, China surpassed the EKC-predicted benchmark for national prosperity around 2013, as per capita PPP rose from $11,351.06 in 2012 to $12,367.97 in 2013. The longitudinal trends of the per capita PPP of both the United States and China from 1990 to 2015 are reflected in Figure 1, and all figures are adjusted for inflation to 2017 dollars.

Figure 1: Longitudinal Purchasing Power Parity

Each year, the Fraser Institute releases its annual economic freedom rankings. Each of the 159 countries covered by the index is evaluated on a number of metrics and assigned points relative to the quality of each variable, with zero being the worst and ten being the best. The Fraser Institute’s report is based on the assumption that economic freedom can be judged through five metrics. The first is the size of government, analyzed by considering expenditures, taxes, and enterprises. The second is legal structures and the security of property rights. The third is access to sound money. The fourth is freedom to trade internationally. The final variable is the regulation of credit, labor, and business (Gwartney et al., 2016). These data points are combined in an attempt to create a holistic picture of the economic realities of each country. Consideration of each of the variables is important, as the presence of one or two factors does not necessarily indicate that citizens of a country truly enjoy economic freedom.

According to the Fraser Institute report conducted by Gwartney et al. (2016), on the first metric, size of government, China’s most recent ranking was a 5.1,



placing it at 134th on the global scale. In contrast, the United States received a 6.4, ranking 78th globally. On the second metric, legal structures, China received a 5.8, or 65th globally, in comparison to the United States’ score of 7.1, for a rank of 27th. For the third metric, access to sound money, China received an 8.2, for the 92nd rank globally. The United States scored a 9.4 on this metric, ranking 40th globally. On the fourth metric, freedom to trade internationally, China received a score of 6.8, ranking 95th globally. Comparatively, the United States received a score of 7.6, for a rank of 60th. China’s score on the final metric, regulation of credit, labor, and business, was 6.3, for a rank of 131st. The United States, in contrast, received a score of 8.3, for a rank of 8th globally. These relative score values are reflected in Figure 2. China received an overall score of 6.45, coming in at 113th globally. The United States received a total score of 7.75, for a rank of 16th around the world (Gwartney et al., 2016).
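If the five area scores reported above are combined with a simple unweighted mean, the result lands within a hundredth of the published summary ratings. Note that this mean is only an approximation: the Fraser Institute's actual index averages finer-grained sub-components within each area.

```python
# Area scores reported above (Gwartney et al., 2016), in order: size of
# government, legal structure, sound money, trade freedom, regulation.
china = [5.1, 5.8, 8.2, 6.8, 6.3]
usa = [6.4, 7.1, 9.4, 7.6, 8.3]

def summary(scores):
    """Unweighted mean of the five area scores, rounded to two decimals."""
    return round(sum(scores) / len(scores), 2)

print(summary(china), summary(usa))  # 6.44 7.76
```

These fall within a hundredth of the published summary ratings of 6.45 for China and 7.75 for the United States, suggesting the five area scores carry roughly equal weight.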

Figure 2: Fraser Index of Economic Freedom

Each year, Freedom House analyzes twenty-five data points to gauge the relative freedom of every nation on the globe. Reported in the Freedom in the World Index, these data points are divided into ten political rights indicators and fifteen civil liberties indicators (Puddington & Roylance, 2017). The political rights indicators fall into three subcategories: Electoral Process, Political Pluralism and Participation, and Functioning of Government. Likewise, the civil liberties indicators are divided into four subcategories: Freedom of Expression and Belief, Associational and Organizational Rights, Rule of Law, and Personal Autonomy and Individual Rights. Variables are measured on a scale of zero to four, with zero being the lowest degree of freedom and four the highest degree



(Puddington & Roylance, 2017). Like the Fraser economic freedom indicators, each of the variables within the Freedom in the World Index must be considered to truly understand the political realities of a given country. Based on Freedom House’s data, each country and territory is given two numerical ratings between one and seven. The first rating is for political rights, and the second is for civil liberties. Within Freedom House’s study, a rating of one is the highest degree of freedom, while a rating of seven is the lowest. An additional aggregate score is then generated from these numbers (Puddington & Roylance, 2017).

The Freedom House survey gathers data by asking a number of complicated, multi-part questions. These include, but are not limited to, inquiries about the status of free and fair elections, the relative fairness of electoral laws, and the freedom of political association. Within the civil liberties category, Freedom House researchers ask about media censorship, public religious expression, and political indoctrination in the classroom (Puddington & Roylance, 2017).

According to the Freedom in the World Index produced by Puddington and Roylance (2017), China received a rating of seven for political rights and six for civil liberties, for an aggregate score of fifteen in 2017. This score has worsened over time; in 2016, China received an aggregate score of sixteen. In contrast, the United States received a rating of one for political rights and one for civil liberties, for an aggregate score of ninety. Within the aggregate scores, zero is the least free, and one hundred is the most free. Based on these Freedom House statistics, the relative freedom of individuals residing in China is substantially lower than the freedom of comparable individuals living in the United States (Puddington & Roylance, 2017).

In assessing environmental quality, the World Bank conducts routine evaluations of the air quality of countries around the globe.
One of the metrics employed is PM2.5, atmospheric particulate matter smaller than 2.5 micrometers in diameter. These tiny particles remain suspended in air and can easily become entrapped within human lungs, causing significant health problems. Also called “fine particles,” these airborne substances are produced by all kinds of combustion, including motor vehicles, power plants, residential wood burning, forest fires, agricultural burning, and certain industrial processes, and their production is closely linked to industrialization (“Particle Pollution,” n.d.). Based on World Bank data, the average Chinese resident’s mean annual exposure to PM2.5 increased by 6.6 micrograms per cubic meter from 2000 to 2005 (“DataBank,” n.d.). In contrast, the average American resident’s mean annual exposure to PM2.5 decreased by 2.2 micrograms per cubic meter over the same period (“DataBank,” n.d.). The overall trend is reflected in Figure 3.



Figure 3: Longitudinal Air Quality

For context, the World Bank also analyzes overall national air quality in relationship to the World Health Organization’s guideline values for acceptable exposure to PM2.5. Given PM2.5’s ability to penetrate human lungs, exposure to PM2.5 in relatively high concentrations has significant implications for both environmental quality and public health. Of the variables considered in this study, PM2.5 concentration has perhaps the greatest direct impact on individual quality of life within a country. Since 2000, one hundred percent of mainland China’s residents have consistently been exposed to annual PM2.5 concentrations exceeding the WHO’s guideline for safe exposure (“DataBank,” n.d.). In contrast, only twenty-three percent of United States residents have been exposed to concentrations exceeding the guideline (“DataBank,” n.d.). These statistics are represented in Figure 4.

Figure 4: WHO Air Quality Contextualized



Since 1990, the World Bank has also tracked carbon dioxide emissions from both the United States and China. From 1990 to 2013, per capita carbon dioxide emissions in China spiked from 2.2 metric tons to 7.6 metric tons annually (“World Development Indicators,” n.d.). In contrast, per capita carbon dioxide emissions in the United States dropped from 19.3 metric tons to 16.4 metric tons annually in the same time period (“World Development Indicators,” n.d.). These statistics are reflected in Figure 5. The United States’ relatively high per capita emissions are partly a function of its much smaller population: total national emissions are divided among far fewer people than in China. Nonetheless, the longitudinal trends of carbon dioxide emissions in each country indicate relative environmental improvement or degradation, and these trends are the most important component in evaluating the strength of the EKC.
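To see why population size matters here, the per capita figures above can be converted back into rough national totals. The population values are approximations assumed for illustration, not figures reported in the World Bank data cited above:

```python
# Rough national CO2 totals implied by the 2013 per capita figures above.
# Population values are assumed approximations for ~2013.
CHINA_PER_CAPITA = 7.6   # metric tons CO2 per person, 2013
US_PER_CAPITA = 16.4
CHINA_POP = 1.36e9       # assumed ~1.36 billion
US_POP = 316e6           # assumed ~316 million

china_total = CHINA_PER_CAPITA * CHINA_POP
us_total = US_PER_CAPITA * US_POP
print(china_total / us_total)  # China's national total vs. the U.S. total
```

Despite emitting less than half as much per person, China's national total comes out to roughly twice that of the United States, which is why per capita trends rather than absolute totals drive the EKC comparison.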

Figure 5: Per Capita Carbon Dioxide Emissions

From 1988 to 2010, the United States Geological Survey (USGS) measured groundwater quality around the country through the National Water-Quality Assessment Program (Lindsey & Rupert, 2012). The USGS evaluated water quality in fifty-one river basins and aquifers. These water networks included the Upper Illinois River Basin, the New England Coastal Basins, the Rio Grande Valley, and the Apalachicola-Chattahoochee-Flint River Basin. Water networks were primarily tested using biennial samples from several wells within each area.

The USGS report produced by Lindsey and Rupert (2012) tracked three specific variables over the twenty-two year period: chloride, dissolved solids, and nitrate concentrations. When measuring chloride concentration, the USGS reported that



43% of water networks increased in chloride concentration from 1988 to 2010. In contrast, only 4% of water networks in the United States decreased in average annual chloride concentration. Over the two decades that the USGS tracked pollutant data, far more water networks in the United States saw increases than decreases in chloride concentration (Lindsey & Rupert, 2012). Similarly, Lindsey and Rupert (2012) reported that, from 1988 to 2010, 41% of water networks increased in the concentration of dissolved solids within the water supply. Only 2% of water networks recorded a decrease in dissolved solid concentration during the time period. Nitrates likewise saw an overall increase in concentration within water networks, though the increase was not as severe as for the other variables: 23% of water networks indicated an increase in nitrate concentration, while 9% reported a decrease, a much higher share of reductions than for the other variables in the study (Lindsey & Rupert, 2012). Across variables, the USGS data indicated that water pollutant concentration increased throughout the United States from 1988 to 2010. This data is reflected in Figure 6.

Figure 6: Change in Water Pollutant Concentration, 1988-2010

From 1990 to 1995, the World Bank tracked the surface water quality of major Chinese rivers using a number of variables. Data was generated from the China Environmental Yearbook, published by China’s National Environmental Protection Agency (“Pollution Indicators,” n.d.). The report tracked the water quality of five rivers: the Songhua River, the Daliao River, the Yellow River, the Yangtze River, and the Huai River. Longitudinal water quality was measured in terms of pH, suspended solids (SS), dissolved oxygen (DO), and oil, among other variables. Aside from



pH levels, all data was reported as annual averages in milligrams per liter (mg/l) (“Pollution Indicators,” n.d.). From 1990 to 1995, the pH of the five rivers tended to fall, indicating the rivers were becoming more acidic and thereby more polluted (“Pollution Indicators,” n.d.). Trends in suspended solids were mixed: during the reporting period, concentrations of suspended solids increased in the Songhua and Yangtze rivers but fell in the Daliao, Yellow, and Huai rivers (“Pollution Indicators,” n.d.).

Within environmental quality standards, dissolved oxygen is an important indicator of organic life and overall water quality; higher levels of dissolved oxygen indicate better water quality. Like the suspended solids data, changes in dissolved oxygen in Chinese rivers were mixed during the five-year period. From 1990 to 1995, the Songhua and Daliao rivers improved, while the Huai and Yellow rivers worsened (“Pollution Indicators,” n.d.). Data for the Yangtze River in 1990 was not reported by the Chinese government, making longitudinal analysis impossible (“Pollution Indicators,” n.d.).

The concentration of oil particles in Chinese rivers generally dropped during the five-year period. The Daliao and Yangtze rivers saw decreases in oil particulates, but the Yellow River saw an increase (“Pollution Indicators,” n.d.). Data for the Songhua and Huai rivers in 1990 was not reported by China, again excluding these rivers from longitudinal analysis. Precise statistics from the China Environmental Yearbook, published by China’s National Environmental Protection Agency (“Pollution Indicators,” n.d.), are listed in Table 1.

Table 1: Pollution Indicators



Conclusion

China’s per capita income, as measured by PPP, surpassed the necessary benchmark for EKC-predicted environmental policy around 2013. However, this relative increase in wealth has not been accompanied by a relative improvement in environmental quality. If the EKC’s original predictions were correct, China’s overall air quality, as measured by PM2.5 concentrations and carbon dioxide emissions, should have begun to improve after 2013. The available data indicates that this is not the case. Various indicators of China’s air quality have only continued to worsen since 2013, and the country demonstrates little to no evidence that these trends will change in the near future. This disparity indicates that the original EKC hypothesis is not sufficient for predicting the evolution of environmental policies in developing nations like China.

Meanwhile, both Chinese and American water quality appeared to decline during the 1990s. While the EKC indicates that the United States’ water quality should only continue to improve, the United States demonstrated significant setbacks in national water quality metrics. Although data is not available for Chinese water quality for the entire period from 1988 to 2010, river water degradation from 1990 to 1995 mimicked American trends. This data calls into question the relevance of water quality to discussions of the EKC. Further research should be conducted on developing nations with more readily-available water quality data to determine whether the EKC can be a useful predictor of both air and water quality within countries.

While both China and the United States have met the necessary PPP metrics for EKC-triggered environmental quality, there are two important differences between the two countries. Various indicators demonstrate that Chinese citizens lack both the economic and civic freedom that American citizens enjoy.
The relative presence of economic and civic freedoms in the United States, where the EKC hypothesis has been successfully applied, may indicate that environmental protection policies are at least partially driven by the presence of personal liberty. However, variables indicating economic and civic freedom cannot be construed as directly causative. Further research is needed to determine whether these variables are merely a chance correlation. Additional longitudinal case studies should be conducted to establish or disprove the relevance of economic and civic freedom to environmental policy. If such a causative relationship can be established, further longitudinal studies may help identify the exact intersection of relative levels of economic freedom, civic freedom, and PPP that triggers environmental policy. Another potential explanation for the failure of the EKC hypothesis in China could be the country’s relatively weak middle class. While China’s PPP has grown, its middle class has not. In contrast to



upper and lower classes, middle classes have historically been the chief champions of environmental policy in developed nations. At the same time, a higher per capita baseline than that established by Grossman and Krueger may be needed.

Given the findings of this study, China provides evidence that the EKC theory does not work perfectly in the absence of economic and civic freedoms. While no direct link is proven, there is a strong possibility of some relationship between economic and civic freedoms and environmental policy. If this is the case, governments in developing nations should first focus on fostering free civic institutions and lowering barriers to individual and economic liberty. Once these are established and baseline economic prosperity has been achieved, grassroots environmental protection should develop organically, offering a promising solution to a global environmental crisis.



Reference List

Anderson, D., & Cavendish, W. (2001). Dynamic simulation and environmental policy analysis: Beyond comparative statics and the Environmental Kuznets Curve. Oxford Economic Papers, 53(4), 721-746.

Bättig, M. B., & Bernauer, T. (2009). National institutions and global public goods: Are democracies more cooperative in climate change policy? International Organization, 63(2), 281-308.

Berlinger, J., George, S., & Wang, S. (2017, January 16). Beijing’s smog: A tale of two cities. CNN. Retrieved from china-beijing-smog-tale-of-two-cities/

Callen, T. (n.d.). Purchasing power parity: Weights matter. Finance & Development. Retrieved from basics/ppp.htm

Copeland, B. R., & Taylor, M. S. (2004). Trade, growth, and the environment. Journal of Economic Literature, 42(1), 7-71.

CPI inflation calculator. (n.d.). Retrieved from

Dasgupta, S., Laplante, B., Wang, H., & Wheeler, D. (2002). Confronting the Environmental Kuznets Curve. Journal of Economic Perspectives, 16(1), 147-168.

DataBank: World development indicators. (n.d.). World Bank. Retrieved March 16, 2017, from aspx?source=2&Topic=6#

Dean, J. M. (2002). Does trade liberalization harm the environment? A new test. Canadian Journal of Economics / Revue Canadienne d’Economique, 35(4), 819-842.

Felkner, J. S., & Townsend, R. M. (2011). The geographic concentration of enterprise in developing countries. Quarterly Journal of Economics, 126(4), 2005-2061.



GDP per capita, PPP (current international $). (n.d.). World Bank. Retrieved March 16, 2017, from PP.CD?view=chart

Grossman, G. M., & Krueger, A. B. (1995). Economic growth and the environment. Quarterly Journal of Economics, 110(2), 353-377.

Gwartney, J., Lawson, R., & Hall, J. (2016). Economic freedom of the world index. Fraser Institute. Retrieved from default/files/economic-freedom-of-the-world-2016.pdf

Gwenhamo, F., Fedderke, J. W., & de Kadt, R. (2012). Measuring institutions: Indicators of political rights, property rights and political instability in Zimbabwe. Journal of Peace Research, 49(4), 593-603.

Harbaugh, W. T., Levinson, A., & Wilson, D. M. (2002). Reexamining the empirical evidence for an Environmental Kuznets Curve. Review of Economics and Statistics, 84(3), 541-551.

Hsu, A., Yan, C., & Cheng, Y. (2017). Addressing gaps in China’s environmental data: The existing landscape. Data-Driven Yale. Retrieved from http:// Analysis_Final.pdf

Li, Q., & Reuveny, R. (2006). Democracy and environmental degradation. International Studies Quarterly, 50(4), 935-956.

Li, Y., Miao, B., & Lang, G. (2011). The local environmental state in China: A study of county-level cities in Suzhou. China Quarterly, 205, 115-132.

Lindsey, B. D., & Rupert, M. G. (2012). Methods for evaluating temporal groundwater quality data and results of decadal-scale changes in chloride, dissolved solids, and nitrate concentrations in groundwater in the United States, 1988–2010. Reston, Virginia: United States Geological Survey.

List, J. A., & Sturm, D. M. (2006). How elections matter: Theory and evidence from environmental policy. Quarterly Journal of Economics, 121(4), 1249-1281.



Liu, L. (2008). Sustainability efforts in China: Reflections on the Environmental Kuznets Curve through a locational evaluation of “eco-communities.” Annals of the Association of American Geographers, 98(3), 604-629.

Particle pollution (PM). (n.d.). AirNow. Retrieved from cfm?action=aqibasics.particle

Pollution indicators and status for China. (n.d.). World Bank. Retrieved from =64214943&theSitePK=469382&contentMDK=20761733

Puddington, A., & Roylance, T. (2017). Populists and autocrats: The dual threat to global democracy. Freedom House. Retrieved from https://freedomhouse.org/sites/default/files/FH_FIW_2017_Report_Final.pdf

Tong, Y. (2007). Bureaucracy meets the environment: Elite perceptions in six Chinese cities. China Quarterly, 189, 100-121.

Universal ownership: Why environmental externalities matter to institutional investors. (2010). UNEP Finance Initiative. Retrieved from http://www.

World development indicators: Energy dependency, efficiency, and carbon dioxide emissions. (n.d.). World Bank. Retrieved April 21, 2017, from http://

Zheng, S., & Kahn, M. E. (2013). Understanding China’s urban pollution dynamics. Journal of Economic Literature, 51(3), 731-772.



THE ETHICAL IMPLICATIONS OF MANDATED VACCINATIONS: A UTILITARIAN AND BIBLICAL ANALYSIS

Kianna Smith

Abstract

Vaccines have become increasingly controversial over the years as more parents have raised concerns regarding their efficacy and safety. This has led some to call for government-mandated vaccines to ensure that the public is protected against infectious diseases. This study considers the ethical implications of government-mandated vaccinations from both utilitarian and Christian perspectives. It examines the evidence on both sides of the debate over vaccines’ safety and efficacy, as well as considers the value of parental rights. This study concludes that, depending on the evidence one appeals to, utilitarianism could support either point of view. A biblical analysis of the issue reveals that, while community is valuable and should be protected, there does not seem to be a scriptural basis for forcing parents to vaccinate their children against their will, regardless of the possible benefits.

___________________________________



Introduction

It was one of the greatest scientific accomplishments of the 20th century. Mankind had finally overcome one of the most stubborn medical obstacles in existence, and this breakthrough was sure to save lives. After its introduction, thousands were spared from diseases that would previously have amounted to a death sentence. Doctors could now go a step beyond treating illnesses; they could actually ensure their prevention. The revolutionary discovery of the vaccine transformed medicine and put humans one step ahead of viruses and bacteria. Or did it?

Though vaccines have been touted as the ultimate solution to many diseases and have been credited with saving thousands or even millions of lives, many people have begun to doubt their perceived benefits. Parents especially have become concerned that vaccines can cause a multitude of adverse side effects, such as autism, and that they are not effective at preventing diseases in the first place. In response to parents’ outcry, others have pushed back, claiming that science demonstrates that vaccines are beneficial and effective. Proponents of vaccines worry that those who choose not to be vaccinated pose a health risk to the general population and could easily reintroduce diseases that have been eradicated for years. Their solution to this health crisis is to mandate vaccinations for all children. Those who oppose vaccines take issue with this proposed solution, asserting their parental right to make the decisions they believe are best for their children’s health. However, there is concern that respecting parents’ rights on this matter could lead to an outbreak of illness with consequences reaching far beyond individual parents and their children. To analyze the competing interests of parental rights and the overall health of a society, this study asks two research questions.
First, from a utilitarian perspective, is it ethical to mandate vaccinations against the wishes of parents in order to protect communities at large? Second, what does the Bible have to say about government-mandated vaccines from an ethical perspective? By examining the scientific facts relating to vaccines and considering the biblical implications of a government-mandated vaccine policy, this study will answer the research questions as they relate to both utilitarianism and Christianity. This study hypothesizes that utilitarianism cannot reach a definitive judgment, while Christianity offers a precise solution by giving more weight to philosophical matters and relying less on facts that can be reinterpreted based on one’s perspective.

Literature Review

A 2013 article by the United States Department of Health and Human Services outlined the importance of vaccines and the concept of herd immunity. It stated
that a critical mass of people must be vaccinated in order for an entire community to be protected from a particular disease (“Community Immunity,” 2013). This indicates that in order for the public to be safe from a particular disease, a large percentage of the population must receive their vaccinations. Anderson and May (1985) estimated that, depending on the disease, this number is upwards of 80% (or 90% in some cases). They claimed that, when sufficient herd immunity levels have been reached, the disease will eventually be eliminated (Anderson & May, 1985). Despite the seemingly high percentages necessary to eliminate disease, the Centers for Disease Control and Prevention estimated that vaccines have prevented 732,000 deaths in the last 20 years (Whitney, Zhou, Singleton, & Schuchat, 2014). In a 2007 study conducted by Roush and Murphy, the rates of illness and death from vaccine-preventable diseases were examined in relation to the times at which their corresponding vaccines became available. This historical analysis showed a decrease in the number of cases of each disease shortly after its vaccine was introduced, indicating a link between lower rates of illness and the administration of vaccines, even when the required herd immunity levels are not fully reached (Roush, Murphy, & Vaccine-Preventable Disease Table Working Group, 2007).

Despite the seemingly strong case for vaccines, multiple studies have revealed contradictory results. In a study by Barclay et al. (2012), the malaria parasite was shown to be capable of mutating and becoming more virulent in order to overcome the immunity provided by a vaccine. This raised the concern that the over-administration of vaccines, much like that of antibiotics in the past, could contribute to pathogen resistance and actually cause more disease than it cures. In another study, Rota et al.
(1995) detected measles virus RNA in the urine of children who had received a measles vaccine. This indicates that those who receive the measles vaccine might actually be at risk of spreading the virus to others through bodily fluids, increasing the risk of illness rather than decreasing it. Researchers have even questioned the value of yearly vaccines against minor illnesses, concluding that they do not offer the protection many believe they do. A study conducted by McLean et al. (2014) showed that vaccine-induced protection against the flu was lower among those who received the flu vaccine every year than among those vaccinated less often. Other researchers have pointed to apparent correlations between the administration of vaccines and other health problems. A study by Kemp et al. (1997) discovered a link between childhood vaccination and an increase in cases of asthma and allergies. For example, Kemp et al. found that children who received the diphtheria, pertussis, and tetanus vaccines showed higher rates of asthma and allergies than those who did not receive them.
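The herd immunity thresholds discussed in this literature can be related to a standard quantity in epidemiology, the basic reproduction number R₀ (the average number of secondary cases each infection produces in a fully susceptible population). As an illustrative sketch (the R₀ value below is a commonly cited round figure for measles, not a number taken from the studies reviewed above):

```latex
% Herd immunity requires each primary case to generate, on average,
% fewer than one secondary case, which yields the critical vaccination
% fraction p_c:
\[
  p_c \;=\; 1 - \frac{1}{R_0}
\]
% For a highly contagious disease such as measles, with R_0 \approx 15:
\[
  p_c \;=\; 1 - \frac{1}{15} \;\approx\; 0.93
\]
% That is, roughly 93% of the population must be immune before
% transmission dies out, consistent with the 92-96% range Anderson
% and May (1985) report for measles.
```

This relationship explains why the required coverage differs by disease: the more contagious the pathogen (the larger R₀), the closer the critical fraction comes to 100%.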



Research

There are two viewpoints a utilitarian could take depending on the facts he appeals to: one in favor of mandating vaccinations and one against such a policy. The former will be examined first.

Utilitarian Perspective: For Mandatory Vaccinations

To determine the morality of mandatory vaccines from a utilitarian point of view, one must examine the facts in order to weigh the costs and benefits. Some utilitarians argue that, based on the preponderance of evidence, it is immoral not to vaccinate oneself or one’s children because of the negative consequences caused by those who are unvaccinated. They claim that vaccines have saved thousands of lives. According to a report by Whitney, Zhou, Singleton, and Schuchat (2014), vaccines have prevented 322 million illnesses, 21 million hospitalizations, and 732,000 deaths among children born between 1994 and 2013.

Medical professionals believe that communities gain the greatest protection when a critical mass of people are vaccinated, a concept referred to as herd immunity. Roy Anderson and Robert May (1985) explained:

The persistence of infectious disease within a population requires the density of susceptible individuals to exceed a critical value such that, on average, each primary case of infection generates at least one secondary case. It is therefore not necessary to vaccinate everyone within a community to eliminate infection; the level of herd immunity must simply be sufficient to reduce the susceptible fraction below the critical point. (p. 323)

The article further stated that, in order to achieve herd immunity, 92-96% must be vaccinated against measles and pertussis, 84-88% against rubella, and 88-92% against mumps (Anderson & May, 1985). Despite the possibility of eradicating disease through herd immunity, many have expressed concern that vaccines can cause illnesses, complications, or severe side-effects.
The Centers for Disease Control and Prevention addressed this concern in an article on their website titled “Possible side-effects from vaccines” (n.d.), acknowledging the possibility of severe side-effects from vaccines, such as allergic reactions, seizures, deafness, and brain damage. However, the CDC clarified that the risk of experiencing a severe side-effect or death due to a vaccine is extremely small, and the risks associated with contracting a vaccine-preventable illness far outweigh the likelihood and severity of vaccine injury. Nonetheless, Anderson and May (1985) explained that, as the number of people who are vaccinated approaches
the critical mass necessary for herd immunity and the risk of contracting a vaccine-preventable illness approaches zero, the risk of severe side-effects from vaccines begins to overtake the dangers of being unvaccinated. Anderson and May (1985) stated:

At the start of a mass-immunization programme, the probability of serious disease arising from vaccination is usually orders of magnitude smaller than the risk of serious disease arising from natural infection. As the point of eradication is approached, the relative magnitude of these two probabilities must inevitably be reversed. The optimum strategy for the individual (not to be vaccinated) therefore becomes at odds with the needs of society (to maintain herd immunity). This issue can be overcome by legislation to enforce vaccination (as in the United States), but its final resolution is only achieved by global eradication of the disease agent (so that routine vaccination can cease). (p. 325)

Anderson and May took a utilitarian approach to solving the vaccine problem, recognizing the interest individuals have in protecting themselves but placing it squarely below the interest of the community in protecting all members from epidemics and eventually eradicating these diseases entirely. Since a utilitarian would argue that the most moral action is the one that does the greatest good, it seems clear that vaccination passes the test for morality in a utilitarian analysis. However, one could argue that the positive utility gained by vaccination is not as great as it seems: the severe side-effects of vaccines can cause permanent damage to an individual’s life, while many of the diseases that vaccines protect against are not likely to be fatal or cause permanent injury.
If the risk of experiencing severe side-effects surpasses the risk of infection once herd immunity is reached, it would appear that there is more utility in protecting the individual from the more immediate risk posed by mass vaccination than in protecting against a disease that herd immunity has made nearly impossible to contract. The problem with this thinking, according to the utilitarian perspective, is that a community must maintain a critical mass of vaccinated members in order to keep the chance of infection as close to zero as possible. If individuals begin to protect their own interests, they will sacrifice the good of the community once vaccination levels fall below the critical mass required for herd immunity. According to this perspective, the only reason the chance of injury or death from vaccine-preventable diseases is so low is the widespread administration of vaccines. Therefore, the utility gained by protecting people from measles or
polio is much greater than that gained by protecting individuals from side-effects of vaccines, which they have a very slim chance of ever experiencing according to the CDC (“Possible side-effects,” n.d.). For injuries due to vaccines to be consequential from a utilitarian perspective, they would have to affect a greater number of individuals than infections due to vaccine-preventable diseases. As this is hardly ever the case, one can argue that there is a net positive utility gained from community-wide vaccination.

Given the evidence in favor of the efficacy and safety of vaccines, a utilitarian who adheres to this perspective would favor government-mandated vaccinations. As explained earlier, if vaccines are effective at preventing disease through herd immunity, it is in the public’s best interest to ensure that as many people as possible are vaccinated. Anderson and May (1985) claimed that the most effective way to reach the required critical mass is to enact public policies requiring all people to receive their vaccinations. The moral imperative to do everything necessary to protect citizens seems to necessitate legislation.

Critics of this perspective may raise the issue of parental rights. However, under a utilitarian framework, this consideration only serves to strengthen the argument for government-mandated vaccinations. If a parent chooses not to vaccinate their child, the child could become infected with a vaccine-preventable illness and suffer terrible consequences as a result. Children have no means of ensuring their own safety against such pathogens and could lose their lives to their parents’ incorrect choices. Furthermore, violating parental rights does not carry any tangible negative implications: while a child could die from a disease because they were not vaccinated, a government mandate to vaccinate does not put the parent’s life at risk.
Instead, all that is lost is the parent’s ability to exercise his personal beliefs and convictions. When a child’s life is on the line, it is not hard to see which outcome is more favorable. Therefore, according to this strand of utilitarianism, there is no reason to oppose legislation requiring all children to be vaccinated.

Utilitarian Perspective: Against Mandatory Vaccinations

While this ethical question might seem settled from a utilitarian assessment, there are more sides to the issue. One of the weaknesses of utilitarianism is that it is beholden to the facts available for determining likely consequences, and often the facts are unclear. Such is the case with the question of vaccines. Some utilitarians might appeal to contrary evidence and make a case against government-mandated vaccinations on the basis of their questionable efficacy and safety. Though vaccines are intended to produce an immune response in the patient’s body that builds up antibodies and creates immunity to a particular disease, there is research showing that some vaccines can cause the recipient to shed the virus in their bodily fluids,
therefore making them contagious to those around them. A study conducted by Rota et al. (1995) in the Journal of Clinical Microbiology demonstrated that measles virus RNA can be found in the urine of children who have been vaccinated against measles. Because the measles virus can be shed in bodily fluids fourteen days or longer after administration of the vaccine, those who receive the vaccine could potentially spread the disease to others even though they themselves are vaccinated and asymptomatic. In another study, McLean et al. (2014) found that those who received flu vaccinations less frequently had higher levels of protection against the flu. The study found that “[c]urrent and previous season vaccination generated similar levels of protection, and vaccine induced protection was greatest for individuals not vaccinated during the prior 5 years” (McLean et al., 2014, p. 1375). If the flu vaccine is most effective for those who have not received it within the previous five years, perhaps frequent vaccinations are in fact causing greater susceptibility to the disease they claim to prevent.

Other studies point to possible adverse side-effects of vaccines. For example, Kemp et al. (1997) found that vaccinated children are more likely to experience asthma or allergies. Vaccines might also cause a decrease in the natural production of antibodies. Leuridan et al. (2010) published a study demonstrating that women who had been vaccinated had much lower antibody levels than those who had not. Furthermore, they found that the children of vaccinated mothers had lower antibody levels than the children of unvaccinated mothers (Leuridan et al., 2010). Marcelo Argüelles and his colleagues found that the antibodies created by vaccination are temporary and provide incomplete protection.
Their study, conducted on Argentinian children vaccinated against measles, showed that while 84% of children ages one through four had what are deemed “protective levels” of antibodies against measles, only 32% of teenagers had antibody levels above what was necessary for protection from the disease (Argüelles et al., 2006). Not only does this suggest that vaccine-acquired immunity against measles is temporary, but it also calls into question the belief that the introduction of vaccines caused a decrease in disease. The measles vaccine was introduced in the United States in 1963 (“Measles History,” n.d.). A 2014 CDC report showed a downward trend in the number of measles cases from 1964-1988, which the CDC attributed to the measles vaccine (“Reported cases,” 2014). However, a utilitarian who opposes mandatory vaccinations would argue that the immunity acquired by the measles vaccine often wears off around one’s teenage years. This means that many of those who received the vaccine in 1963 would have had antibody levels below what was necessary to protect them from the disease starting around 1975. A booster shot for four to six year olds was not
introduced until 1989, and the booster shot given at eleven or twelve years old was not introduced until 1995 (“Age for routine administration,” 1998). This means that for a period of roughly fourteen to twenty years, pre-teenage children were the only segment of the population in which a majority of members had measles antibodies above protective levels. However, instead of an increase in the number of measles cases from 1975-1989, when older children became susceptible, the same downward trend continued (“Reported cases,” 2014). A utilitarian arguing against the utility of widespread vaccination might question whether vaccines were truly the reason for the decrease in disease or whether other factors would have led to such a decrease even if vaccines had never been introduced.

From a utilitarian perspective, if vaccines are not as effective as they are made out to be, it does not make sense to implement a mandatory vaccination policy. Based on the evidence given against the efficacy of vaccines, it does not appear that there is net positive utility produced through vaccinations. However, there is research on both sides of this debate that leads to contradictory conclusions, so a critic of the utilitarian perspective against vaccines might ask why it is not possible to simply mandate vaccines in case they do work and end up preventing disease. In response, one might note that, in addition to the money and time that individuals must devote to comply with a vaccine mandate, there is also the possibility that vaccines are actually contributing to lower immunity and the spread of disease. If this is the case, mandating that all people receive them is wrong, because individuals are required to give up their time and money to receive a vaccine that will cause more people in society to become sick and die.
From this perspective, the negative utility created by vaccines outweighs the negative utility that would be created if no one received a vaccine and disease ran rampant. A utilitarian who holds to this perspective might well argue that the government should prohibit vaccinations unless it can be definitively proven that they prevent disease and do not contribute to other significant illnesses or side-effects. If an action is perceived as a threat to society, a utilitarian will gladly advocate for policy that forces action, even against individual autonomy, to ensure that such consequences never come about, whether that means mandating the administration of vaccines or banning them due to perceived risk.

Utilitarianism relies entirely on the facts it has at hand to determine the consequences of an action. When those facts conflict, as in this case, it becomes difficult for a utilitarian to decide what to do. Each utilitarian will examine the evidence and choose what he believes to be most compelling, but other utilitarians are bound to come to different conclusions about which facts are most convincing. It is extremely difficult to make moral judgments when one’s own method of moral decision-making cannot even agree with itself about the correct course of action to take.
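The dependence of the utilitarian calculus on contested empirical inputs can be made concrete with a toy expected-harm comparison. The probabilities and harm values below are purely hypothetical, chosen only to show how the conclusion flips when the inputs change:

```latex
% Let p_d be the probability of contracting the disease if unvaccinated,
% h_d the harm of the disease, p_s the probability of a severe vaccine
% side-effect, and h_s its harm. The individual compares:
\[
  E[\text{harm} \mid \text{vaccinate}] = p_s h_s
  \qquad\text{vs.}\qquad
  E[\text{harm} \mid \text{decline}] = p_d h_d
\]
% With hypothetical inputs p_s = 10^{-6}, h_s = h_d = 100, and a disease
% still circulating at p_d = 10^{-3}:
\[
  10^{-6} \cdot 100 = 10^{-4} \;<\; 10^{-3} \cdot 100 = 10^{-1}
  \quad\Rightarrow\quad \text{vaccinating minimizes expected harm.}
\]
% Near eradication, with p_d = 10^{-7}:
\[
  10^{-6} \cdot 100 = 10^{-4} \;>\; 10^{-7} \cdot 100 = 10^{-5}
  \quad\Rightarrow\quad \text{declining minimizes the individual's expected harm.}
\]
```

This is the reversal Anderson and May (1985) describe, and it also shows why disputed estimates of the side-effect and infection probabilities can push the calculation to either conclusion.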



Biblical Perspective

While utilitarianism tackles the issue of vaccination mandates from a consequentialist mindset, the Bible offers a different way of looking at the matter. Instead of evaluating the possible outcomes of a policy requiring vaccines, the Bible focuses on the spiritual and social implications of such a policy. The family unit is especially valuable from a biblical perspective. It is seen as the foundation on which societies are built; the community would not and could not exist without the family. If the family is undermined, the entire community is undermined; therefore, the family is more important than the community and must be protected.

Martin Luther explained this idea through his doctrine of the three estates or hierarchies. Price (2015) quoted Luther’s Of the Councils and the Church, in which Luther stated:

The first government is that of the home, from which the people come; the second is that of the city, meaning the country, the people, princes and lords, which we call the secular government… Then follows the third, God’s own home and city, that is, the church, which must obtain people from the home and protection and defense from the city. (p. 379)

Luther recognized that God ordained the home and the family to be an institution separate from the civil government of society, but one related to it and existing in a hierarchy with it. In another of his works, Luther explained the order of this hierarchy, placing the family before the government. He explained that the family, or “household government,” was set up after the Church and realized through the institution of marriage, as seen in Adam’s union with Eve. The civil government, however, was created after the institution of the family and, Luther argued, was not necessary before the Fall (1535-1536/1958). He went on to state, “Therefore, if man had not become evil through sin, there would be no need of civil government” (Luther, 1535-1536/1958, p. 104).
While Luther saw the family as an institution of creation, he believed God ordained government out of necessity due to man’s sin. This means that the family supersedes society in this understanding of the three estates or orders. Not only was secular government lower in Luther’s hierarchy than the family, but he also concluded that the family is the basis upon which the church and society rest. He said, “Marriage should be treated with honor; from it we all originate, because it is a nursery not only for the state but also for the church and the kingdom of Christ until the end of the world” (Luther, 1535-1536/1958, p. 240). Because of the importance this understanding places on the family unit and the detriment that will come to society if it is weakened, a Christian would likely argue that parental rights on this issue are far more important than ensuring the protection of all members of a community through public policy. A biblical perspective would
hold that parents bear the responsibility to protect their children in the ways they see fit. This responsibility gives parents the freedom to act without oversight from the government, or even the church, in many cases. Some might argue that, if parents are not adequately caring for their children, the government and community should step in to ensure that those children are not neglected or harmed. Christians recognize that the Bible does not condone parental mistreatment of children; it instead commands fathers to avoid provoking their children and to bring them up in the instruction of the Lord (Ephesians 6:4, English Standard Version). It would be wrong for parents to neglect or abuse their children to the point of causing them illness or death, and at that point the church and the government would be right to step in and remove the children. However, this does not mean that it is good to remove all of the choices parents are allowed to make for their families in the name of protecting children.

A person who holds to a biblical perspective might raise the concern that allowing the government to take the place of parents sets a dangerous precedent. Every time parents made a decision that could even remotely be construed as harmful to their children, the government could remove parental liberties and force them to take an action they do not believe is best for their family. As underscored by Luther earlier, a biblical perspective would hold that, apart from clear and obvious neglect or abuse, parents have the liberty and the responsibility to raise their children in the way they believe is best. Vaccinating or not vaccinating one’s child does not constitute clear and obvious neglect. A Christian would likely point out that there are children who are vaccinated every day who live long, healthy lives, and there are also children who are not vaccinated who live equally long, healthy lives.
Because of the ambiguity surrounding the issue of vaccines and the biblical precedent of valuing parental liberty as something that strengthens the community as a whole, a Christian looking at the question from this perspective would conclude that there should not be a policy mandating that all children receive vaccinations and that the decision should be left to parents on a family-by-family basis.

Some may not be content with this conclusion. It is possible that thousands of children could die because some parents chose not to vaccinate their families. The choice of whether or not to vaccinate affects not just one’s own family but the entire city, county, state, and country in which a person lives. Vaccination may be the only way to prevent epidemics, and giving parents the ability to make the wrong choice could be dangerous. However, Christians understand that humans were made in God’s image (Genesis 1:26), which means that they have the ability to act apart from instinct according to their reason and to make choices in line with their reasoning. This ability must be respected, even if it leads to consequences that negatively affect an entire population. A biblical perspective holds that the church or community can intervene to stop an individual’s action when they act immorally
or in opposition to Scripture (1 Corinthians 5:12-13), and Luther (1535-1536/1958) held that the government exists to punish sin. Nevertheless, the decision to vaccinate or not vaccinate one’s children is not a moral issue dealt with anywhere in the Bible, so the government has no right to force a parent’s hand in the matter. The government would not be acting to prevent an inherently immoral action; instead, it would simply be acting to obtain the results it desires while working in direct opposition to the foundational unit of the family. Though it is possible that lives could be saved through such a government mandate, the basis of society, the family, would be undermined. In the end, it is much worse to destabilize society in this manner than to allow disease and death to occur as a result of individuals’ choices, especially since Christians believe that the fallen nature of the world makes disease and death inescapable. Christianity does not see physical suffering as the worst possible outcome, so it insists on valuing the family over the risk of widespread disease and reasons that society as a whole is better protected when it has strong families as its basis.

Conclusion

The utilitarian analysis of the question of mandatory vaccines is unsatisfactory. Instead of offering a clear and absolute solution to the controversial issue, it complicates the matter by allowing a person to justify either outcome depending on the evidence he favors. Ethical issues cannot be decided in this manner, as it leaves the door open to ambiguity and ensures that the matter will never be settled. Instead, an ethical framework should provide a clear-cut solution that is applicable regardless of the changing or bias-laden facts of a case. This allows an agreement to be reached, assuming both parties adhere to the same ethical framework.

For this reason, a biblical perspective is superior to the utilitarian method when deciding the morality of a government-mandated vaccine policy. It is internally consistent, and the conclusion one reaches does not shift depending on the information he uses to make his determinations. In addition, it is based on the inerrant and unchanging Word of God, which is perfect in the ethical judgments it makes and will not lead us astray. It can therefore be concluded that a government-mandated vaccination policy is unethical, taking from parents the freedom to do what they believe is best for their child and placing that responsibility in the hands of the government, which was created solely for the purpose of restraining sin, not for maximizing the public good in every conceivable way. The family supersedes the government in the hierarchy of established institutions, and usurping the authority of parents over their children can cause nothing but harm to society, regardless of the effects of vaccination.



Reference List

Age for routine administration of the second dose of measles-mumps-rubella vaccine. (1998). Pediatrics, 101(1), 129-133. Retrieved from http://pediatrics.

Anderson, R. M., & May, R. M. (1985). Vaccination and herd immunity to infectious diseases. Nature, 318(28), 323-329. Retrieved from https:// Vaccination_and_Herd_Immunity_to_Infectious_Diseases/links/0deec52b959bdd4c51000000.pdf

Argüelles, M. H., Orellana, M. L., Castello, A. A., Villegas, G. A., Masini, M., Belizan, A. L.,…Glikmann, G. (2006). Measles-specific antibody levels in individuals in Argentina who received a one-dose vaccine. Journal of Clinical Microbiology, 44(8), 2733-2738. doi:10.1128/JCM.00980-05

Barclay, V. C., Sim, D., Chan, B. H. K., Nell, L. A., Rabaa, M. A., Bell, A. S.,…Read, A. F. (2012). The evolutionary consequences of blood-stage vaccination on the rodent malaria Plasmodium chabaudi. PLOS Biology, 10(7). Retrieved from article?id=10.1371/journal.pbio.1001368

Community immunity (“herd immunity”). (2013). U.S. Department of Health and Human Services. Retrieved from

Kemp, T., Pearce, N., Fitzharris, P., Crane, J., Fergusson, D., St. George, I.,…Beasley, R. (1997). Is infant immunization a risk factor for childhood asthma or allergy? Epidemiology, 8(6), 678-680. Retrieved from http://journals. Factor_for_Childhood.15.aspx

Leuridan, E., Hens, N., Hutse, V., Ieven, M., Aerts, M., & Van Damme, P. (2010). Early waning of maternal measles antibodies in era of measles elimination: Longitudinal study. BMJ, 340, 1-7. Retrieved from bmj.c1626

Luther, M. (1958). Luther’s works: Lectures on Genesis chapters 1-5 (J. Pelikan, Trans.). St. Louis, MO: Concordia Publishing House. (Original works published 1535; 1536).



McLean, H. Q., Thompson, M. G., Sundaram, M. E., Meece, J. K., McClure, D. L., Friedrich, T. C., & Belongia, E. A. (2014). Impact of repeated vaccination on vaccine effectiveness against influenza A(H3N2) and B during 8 seasons. Clinical Infectious Diseases, 59(10), 1375-1385. Retrieved from

Measles history. (n.d.). Centers for Disease Control and Prevention. Retrieved from

Possible side-effects from vaccines. (n.d.). Centers for Disease Control and Prevention. Retrieved from

Price, T. S. (2015). Luther’s use of Aristotle in the three estates and its implications for understanding oeconomia. Journal of Markets & Morality, 18(2), 373-389. Retrieved from php/mandm/article/view/1099/961

Reported cases and deaths from vaccine preventable diseases, United States, 1950-2013. (2014). Centers for Disease Control and Prevention. Retrieved from appendices/e/reported-cases.pdf

Rodrigues, P., Doutor, P., Soares, M., & Chalub, F. (2016). Optimal vaccination strategies and rational behaviour in seasonal epidemics. Retrieved from

Rota, P. A., Khan, A. S., Durigon, E., Yuran, T., Villamarzo, Y. S., & Bellini, W. J. (1995). Detection of measles virus RNA in urine specimens of vaccine recipients. Journal of Clinical Microbiology, 33(9), 2485-2488. Retrieved from

Roush, S. W., Murphy, T. V., & Vaccine-Preventable Disease Table Working Group. (2007). Historical comparisons of morbidity and mortality for vaccine-preventable diseases in the United States. Journal of the American Medical Association, 298(18), 2155-2163. Retrieved from http://jama.

Whitney, C. G., Zhou, F., Singleton, J., & Schuchat, A. (2014). Benefits from immunization during the vaccines for children program era – United
States, 1994-2013. Morbidity and Mortality Weekly Report, 63(16), 352-355. Retrieved from mm6316a4.htm



HOW MUCH IS A LIFE WORTH? AN ANALYSIS OF THE PROBLEM OF VALUING HUMAN LIFE IN PUBLIC POLICY

Thomas Siu

Abstract

While we often say that a price tag cannot be put on human life, at some point public policy must do precisely that. In the contexts of government regulation and wrongful death litigation, monetary values are necessarily assigned to individual lives either saved or lost as the result of government or individual action. As such, policymakers and litigators ought to conduct their valuations of human life in a manner that respects human dignity. This study will first evaluate the utilitarian measures for calculating the value of a statistical life (VSL), including both the willingness-to-pay (WTP) and human capital (HK) approaches, and how these measures play out, particularly when used to evaluate federal regulatory policy. It will then turn to an examination of the casuistry of jury decisions in wrongful death litigation and whether noneconomic damage caps interfere with the proper functioning of that casuistry. Specific examples in both the regulatory and litigation contexts will inform this study’s analysis and allow for a deep examination of the principles underlying valuation decisions in both areas. Finally, this study will provide an overall critique of valuation measures and how they can be employed to provide useful data for making policy decisions while still respecting the dignity of the human person.

___________________________________



Introduction

While we often like to think that no effort is too great to save a life, at some point attempts to preserve life consume resources that could be better allocated to preserve more lives elsewhere. Without some valuation system, however, this type of cost-benefit calculus is impossible to perform. Both regulators and litigators must make these valuation decisions. Jury decisions assessing the value of lives lost must also examine the circumstances of each situation and attempt to provide compensation for the loss of life. The structure of the jury system, and the nature of the decisions it makes possible, closely aligns with the ethical framework of casuistry, as it seeks to apply general principles to specific circumstances. With the introduction of various tort reform measures such as noneconomic damage caps, additional ethical questions arise over whether it is appropriate to interfere with this casuistry. However, the necessity of valuation systems in regulatory and litigation contexts does not insulate those systems from challenges; specifically, public policy must combat the tendency to view human beings solely in economic terms. As a result, the ethical questions surrounding valuation systems include the threshold question of whether values can be assigned to human life without rejecting or diminishing the inherent value of the human person. This study will examine utilitarian measures for assessing the value of a statistical life (VSL) and their use in the federal regulatory context. It will then evaluate the casuistry of jury decision-making in the wrongful death litigation context. Finally, the study will evaluate these measures holistically and set forth a way in which valuation measures can be used while still respecting the dignity inherent in the human person.

Literature Review

Assigning a monetary value to human life has always been problematic. In 1913, Dr. Charles Chapin argued that it was unwise for the public health field to focus on attempting to assign a monetary value to human life. However, the majority of those who evaluate these attempts generally assume that such an inquiry is justified and instead question which methodology is most useful for calculating the benefits from reduced loss of life (Rice & Hodgson, 1982). Indeed, Blomquist specifically argued that public policy must always measure the benefits of life-saving efforts in order to efficiently allocate scarce public resources (Blomquist, 1981). As a result, most evaluations of how public policy assigns values to human life do not question whether such approaches are morally problematic; rather, these evaluations examine the validity of each particular assessment method. Nonetheless, some still question the efficacy
of relying on utilitarian cost-benefit analysis to make major policy decisions, such as William Gorham, the former Assistant Secretary of the Department of Health, Education, and Welfare, who observed, "The 'grand decisions' – how much health, how much education, how much welfare and which groups in the population shall benefit – are questions of value judgments and politics. The analyst cannot make much contribution to their resolution" (Wallace, 2012, p. 18). There are numerous methods that attempt to assign economic values to human life, as Brannon noted in his 2004 overview of each type. Two main categories exist. Rice and Hodgson (1982) labeled these categories the "human capital" and "willingness-to-pay" approaches (p. 536). Gold and van Ravenswaay (1984) applied this same dichotomy, though they recognized the possibility of combining the two in an adjusted approach. The willingness-to-pay approach focuses on the values that individuals reveal through their own behavior, while the human capital approach attempts to provide an external, objective framework. Researchers typically view the willingness-to-pay (WTP) approach as most consistent with standard cost-benefit analysis: while human capital (HK) approaches could in theory bring greater detail to the valuation, they often require extremely difficult comparisons that cut across different spheres of analysis (Abelson, 2003). Most researchers use both WTP and HK approaches to engage in cost-benefit analysis. Landefeld and Seskin (1982) provided an excellent example of both methods of analysis, including comparisons between the two. Each methodology seeks to provide the value of a statistical life (VSL), which is essentially the single best value to place on a human life (Brannon, 2004). Researchers refer to statistical lives because doing so allows them to break down risk and evaluate percentage changes in the risks of specific harms (typically death) rather than examining specific, identifiable deaths (Abelson, 2003).
Based on the VSL, researchers can apply a discount rate to calculate the net present value of any particular individual's statistically expected future earnings (Landefeld & Seskin, 1982). This in turn can produce a more precise figure than the VSL alone because it accounts for a person's age. Once a given method has been selected and operationalized, the data it provides can be used to inform decision-making processes. Utilitarian cost-benefit analysis is often applied in regulatory environments to allow state and federal regulatory systems to engage in some form of calculus about whether a given regulation should be adopted, rejected, or modified. Wallace (2012) noted that such utilitarian calculus can inform decisions ranging from environmental regulations enforced by the EPA to workplace safety requirements issued by the Occupational Safety and Health Administration (OSHA). While the WTP and HK models are often relied upon for public policy analysis, they do not apply in all contexts. For example, methodologies for evaluating federal
regulatory systems are rarely the same as those applied in the context of wrongful death litigation (Peeples & Harris, 2015). Instead, jury awards in wrongful death cases are fact-specific rather than generalized, often because of statutory requirements (Peeples & Harris, 2015). When applied to catastrophic (but non-fatal) injuries, jury decisions can also account for remaining quality of life, which itself requires some understanding of the value of a whole life (Torpy, 2004). In the end, the literature examining jury awards seems to suggest that this system relies more heavily on casuistry than on the utilitarian cost-benefit analysis that is commonly associated with the valuation of human life in regulatory actions. The September 11th Victim Compensation Fund is an example of a program that bridges the gap between the statistical measures applied to regulatory policy and the fact-specific measures applied to wrongful death litigation (Peeples & Harris, 2015). This fund created a compensation grid based on factors about the deceased individual that could result in payouts ranging from $250,000 to $7 million (Torpy, 2004). In particular, the Victim Compensation Fund considered the economic losses of the victims (Peeples & Harris, 2015) rather than assigning a single value to each person based on the VSL adjusted for age. Because of its specific application of general guiding principles, the Victim Compensation Fund appears to fit within the ethical framework of casuistry.
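As reviewed earlier, several of these methodologies (e.g., Landefeld & Seskin, 1982) refine the VSL by discounting an individual's expected future earnings to net present value. A minimal sketch of that discounting arithmetic follows; the earnings figure, discount rate, and remaining-year counts are illustrative assumptions, not values drawn from the cited studies:

```python
# Net present value of expected future earnings, discounted annually.
# All inputs are illustrative assumptions, not figures from the literature.
def npv_future_earnings(annual_earnings, years_remaining, discount_rate):
    """Sum earnings / (1 + r)**t for t = 1 .. years_remaining."""
    return sum(
        annual_earnings / (1 + discount_rate) ** t
        for t in range(1, years_remaining + 1)
    )

# The age adjustment falls out naturally: a younger worker has more
# earning years remaining, so the discounted total is larger.
young = npv_future_earnings(50_000, 40, 0.03)  # e.g., 40 working years left
older = npv_future_earnings(50_000, 10, 0.03)  # e.g., 10 working years left

print(f"40 years remaining: ${young:,.0f}")
print(f"10 years remaining: ${older:,.0f}")
```

Because the discount rate compounds, doubling the remaining years far less than doubles the result, which is one reason age-adjusted figures diverge from a single flat VSL.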

Data and Methods

This study utilizes a primarily qualitative evaluation of the competing methodologies used to assess the financial value of human life. Since some of these methodologies are quantitative in nature, quantitative material will inevitably come under examination. However, this study seeks to evaluate these competing methodologies based on factors other than the specific values that they produce. As such, the root questions will be qualitative, as they will delve into the potential problems with each methodology, the impact of each methodology on real-world decision-making, and the ethics of utilizing each valuation system. This study therefore assumes that quantitative measures are inapplicable to answering at least some ethical dilemmas; in other words, this study begins with the premise that ethical analysis based on quantitative methodology is logically subsequent to a qualitative determination that quantitative methodology is applicable in the first place. While this is a key assumption, it is justified by the principle that no ethical theory should be able to 'pull itself up by its own bootstraps.' Rather, a method for making moral decisions should be justified independently of its own ethical framework. Therefore, this study will examine the various utilitarian models (such as the WTP and HK models) and the literature critiquing them. As previously noted, some
quantitative analysis will be necessary in order to comprehend these methods and conduct a meaningful examination of their strengths and weaknesses. By evaluating scholars who advocate for the WTP and HK models, the study will be able to conduct a primary analysis of the advantages and potential pitfalls of each. Such an examination will allow this study to draw conclusions about the strengths and weaknesses of the WTP and HK models based on both the primary and secondary analysis. Many scholars have studied jury decision-making dynamics, both in terms of outcomes and methods. Since outcome-based studies are particularly prevalent in the context of valuation systems, this study will largely focus on conducting a qualitative analysis of these outcomes. Nonetheless, when possible, the quantitative decisions of juries will be subjected to examination to paint the fullest possible picture of jury valuation decisions. These decisions seem to fit most clearly within the ethical framework of casuistry because of their relative flexibility compared to the utilitarian calculus employed in regulatory valuation systems. However, tort reform measures (especially damage caps) may interfere with this casuistry. Accordingly, this study will examine whether tort reform measures interfere with the casuistry of jury decisions in any meaningful way and, if so, whether they defeat the ethical judgments made under the casuistry of jury decision-making. In the special issues section, this study will examine one particular case study – the September 11th Victim Compensation Fund – and determine how its compensation grid best fits into the varying ethical frameworks that could be relied upon to make valuation decisions about human life. Once again, this will be primarily qualitative analysis of quantitative data. This study will then examine the question with which any valuation system must wrestle: does attempting to place a monetary value on human life undercut human dignity? 
This is a qualitative ethical question, and while such an examination could stray far from public policy into the realms of philosophy or theology, this study will attempt to introduce other disciplines only where necessary and maintain its focus on the ethical considerations in the public policy realm. Finally, if valuation systems are determined not to be inherently unethical due to a devaluation of the dignity of the human person, this study will conduct a similar analysis of which factors may legitimately be used in such valuation decisions. While this analysis will necessarily incorporate other disciplines, it will continue to focus on the application of these other disciplines to the public policy realm.

Historical Usage

Systems for assigning values to human life have existed for a long time. In the United Kingdom, for example, the 1934 Law Reform Act ensured that most causes of
action could be raised after a person's wrongful death by his or her estate (Symmons, 1938). This had the net effect of allowing juries to consider how much should be awarded in damages to 'replace' deceased persons, such as the 21-year-old worker whose family was awarded about £1,900 in Walton v. Jacob (Symmons, 1938). Notably, in Walton, the jury was instructed that "it was not merely that no sum was large enough to compensate a man of position for submitting to a violent end, but that the mind recoiled from such a problem," which illustrates that misgivings about valuing human life have been present for at least as long as the attempts themselves (Symmons, 1938). The American history of assessing the monetary value of human life in wrongful death litigation is about as old as the nation itself. One of the earliest reported cases is the 1794 case of Cross v. Guthery, from a Connecticut intermediate appellate court, in which the jury awarded £1,000 for the wrongful death of a woman in what may be the first recorded medical malpractice case in the independent United States. The precise moment when utilitarian calculus entered the regulatory environment is less clear. Nevertheless, Chapin's 1913 article referenced undated (and apparently unavailable) works by Farr, Fisher, and Leighton, who each created their own methodologies for assessing the kind of generalized financial values that would be useful for regulatory analysis (Chapin, 1913). Clearly, at least some scholars were performing a strand of utilitarian analysis in the early 1900s, though the prevalence of such judgments cannot be accurately assessed without access to additional data. Some scholars trace the start of the use of VSL data in the regulatory context (and particularly the HK approach) to Fein's 1956 book Economics of Mental Illness (Gold & van Ravenswaay, 1984).
Perhaps the increase in the scope of government during the New Deal had time-lagged effects that increased the need to evaluate the effectiveness of regulatory programs, or perhaps some other factor was responsible; in any event, earlier analysis of this kind does not appear to have been preserved in other works. In the late 1970s and early 1980s, the use of VSL analysis in the regulatory context exploded, driven in large part by the independent work of Viscusi and Blomquist. This boom set the stage for other scholars to challenge the preconceptions of WTP and HK analysis and to attempt to improve the models, both to more accurately reflect human decision-making processes and to promote greater regulatory efficiency.

Modern Valuation in the Regulatory Context

Utilitarian Methodology

Cost-benefit analysis can take multiple forms depending on what precisely a regulator is attempting to measure, whether the costs are known or not, and
whether the costs are financial in nature. Even when costs are not financial in nature, such as the loss of human life or the risk of serious physical harm, financial values can nonetheless be assigned to enable a consistent cost-benefit calculus. However, seemingly straightforward risk-risk analysis – the most logical system to use when comparing risks of harm to comparable entities – is actually not quite as simple (or useful) as it seems. Stone (1982) explained the problem by looking at the nonfinancial risk balancing equation for risk-risk analysis:

D_a = P(R_a ± M_a)

In the above equation, D represents the expected incidence of a certain unfavorable outcome (in Stone's example, death), P represents a given population that is assumed to be stable, R is the risk of death from the particular course of action, and M is the margin of error (MOE) of the analysis (Stone, 1982, p. 263). Thus, in order to perform utilitarian cost-benefit analysis from a pure risk-risk (nonfinancial) paradigm, the value of D must be compared for two or more potential policies:

1) D_a = P(R_a ± M_a)
2) D_b = P(R_b ± M_b)

Whichever policy (A or B) yields the lower D value provides the lower risk and therefore should be adopted. While this is quintessential utilitarian calculus, it does not present enough information to adequately inform public policy. First and foremost, the equation assumes that the R and M values can be known with a scientifically acceptable level of reliability, which is unclear at best (Stone, 1982). Second, the value of P is assumed to be stable and held constant, which Stone notes is often unrealistic (Stone, 1982). Third, the risk-risk equation provides no way to compare risks of different potential harms; it can only compare the probabilities of the same harm. Fourth, even if the R and M values could be known with precision, the M value could still be large enough to deny certainty as to which is the proper course of action (Stone, 1982).
To illustrate this, assume that D represents the expected number of deaths in a certain population if one of two possible policy alternatives is adopted. Further assume that P is held constant at 1,000, R_a at 0.05, M_a at ±0.01, R_b at 0.04, and M_b at ±0.03:

1) D_a = P(R_a ± M_a)
   D_a = 1,000(0.05 ± 0.01)
   40 < D_a < 60

2) D_b = P(R_b ± M_b)
   D_b = 1,000(0.04 ± 0.03)
   10 < D_b < 70

As the above example illustrates, even in some cases where the exact risk and MOE are known with a high degree of precision, the risk balancing equation is still unable to indicate which policy will produce better results: either policy could produce fewer (or more) deaths than the other. As such, something that permits cross-cutting comparisons between different possible risks is necessary. Financially based risk analysis fills this need. Of course, this does not solve all the problems associated with pure risk analysis – any utilitarian calculus requires knowledge of all consequences of an action in order to perform accurate ethical calculus, an informational problem that is not solved by merely adding dollar values to the assessments. In the end, this informational problem is a significant factor limiting the usefulness and accuracy of utilitarian methodology for calculating the value of human life.

As a result of the shortfalls of risk-risk analysis, scholars and agencies have engaged in the now-traditional task of assigning a monetary value to human life. In the regulatory context, these figures change with inflation and can shift by regulatory agency or even by the context of a given regulation within the same agency. For example, the Office of Management and Budget (OMB) observed that the Food and Drug Administration (FDA) simultaneously used $2.5 million as the VSL for tobacco regulations and $5 million for regulations governing mammograms (Wallace, 2012). Researchers, however, attempt to provide more consistent values for use in scholarly research. In spite of these attempts at consistency, researchers typically produce numbers that "vary wildly between studies," ranging from $2 million to $7 million (Brannon, 2004, p. 62).
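The indeterminate comparison worked through earlier is easy to reproduce. The following sketch simply re-runs Stone's risk balancing equation with the same hypothetical values and checks whether the two outcome intervals overlap:

```python
# Risk-risk comparison following Stone's equation D = P(R ± M), using the
# hypothetical values from the worked example above.
def death_interval(population, risk, margin):
    """Bounds on expected deaths: (P*(R - M), P*(R + M))."""
    return population * (risk - margin), population * (risk + margin)

P = 1_000
a_low, a_high = death_interval(P, 0.05, 0.01)  # policy A: roughly 40-60 deaths
b_low, b_high = death_interval(P, 0.04, 0.03)  # policy B: roughly 10-70 deaths

# Overlapping intervals mean the equation cannot say which policy
# produces fewer deaths.
if a_low < b_high and b_low < a_high:
    print("Indeterminate: either policy could produce fewer deaths")
```

The overlap test makes the limitation concrete: unless one policy's entire interval sits below the other's, the pure risk-risk paradigm cannot choose between them.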
The financial models can then be used to determine whether a given regulation imposes greater costs than society is willing to bear for the increase in safety it produces. This analysis must consider the impact of the regulatory system on productivity within private enterprises, since private enterprises can be harmed by unexpected or excessive shifts in the regulatory structure (Viscusi, 1983). It is vitally important to perform this analysis carefully, because the impacts can be substantial; one scholar suggests that approximately 30% of the decline in domestic manufacturing growth can be attributed to OSHA and EPA regulations (Gray, 1987). Financially based utilitarian analysis seeks to introduce a measure that can be adjusted based on a variety of factors to account for additional variables that could impact the moral calculus of regulatory actions. While there are many different methods for performing this calculus, this study will examine the two most prominent: the willingness-to-pay (WTP) model and the human capital (HK) model.



Willingness-to-Pay Model

The willingness-to-pay (WTP) model is sometimes also referred to as the revealed preferences method (Brannon, 2004). This method presents individuals with the choice of how much they would be willing to pay in exchange for a small percentage decrease in their overall risk of death. Notably, this value does not indicate the total value of the individual's life but rather the value of a given reduction in probability (Gold & van Ravenswaay, 1984). As a result, valuation metrics must then make a determination based on this probability and the valuation, which at first seems fairly simple. Landefeld and Seskin (1982) used the following simplified equation to illustrate how total WTP produces a VSL:

(AN)/(RN) = V

In this equation, A is the WTP of each member of a population for a given reduction in risk (so that AN is the aggregate WTP), N is a constant population, R is the risk unit, and V is the VSL produced by the equation (Landefeld & Seskin, 1982). Landefeld and Seskin then referenced Jan Paul Acton's 1973 study, which found that a particular population was willing to pay $76 (A) for a 0.002 reduction in the risk of death by heart attack (R):

(AN)/(RN) = V
($76N)/(0.002N) = V
($76)/(0.002) = V
$38,000 = V

In theory, this is a comprehensive indication of the value that individuals place on their lives, so the VSL for this population would be just $38,000 (Landefeld & Seskin, 1982). However, there are some indications that the WTP model may not provide an accurate measurement of the value that individuals place on their lives. Most significantly, the WTP increases in a nonlinear fashion as the risk increases: as study participants are asked to move closer and closer to certain death, eventually no compensation system will provide sufficient payment for the participant to accept the risk (Brannon, 2004; Gold & van Ravenswaay, 1984).
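Since the population term N cancels, the equation above reduces to dividing per-person WTP by the per-person risk reduction. A minimal sketch using the Acton figures quoted above:

```python
# VSL implied by a WTP survey: V = (A*N)/(R*N) = A/R, where A is the
# per-person WTP and R is the risk reduction purchased.
def wtp_vsl(wtp_per_person, risk_reduction):
    """Value of a statistical life implied by willingness to pay."""
    return wtp_per_person / risk_reduction

# Acton (1973): $76 WTP for a 0.002 reduction in risk of heart-attack death.
vsl = wtp_vsl(76, 0.002)
print(f"VSL = ${vsl:,.0f}")  # VSL = $38,000
```

Note that this division presupposes the linearity the surrounding text criticizes: it assumes a participant would pay exactly ten times as much for ten times the risk reduction.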
Unfortunately, the WTP-VSL equation assumes that the risk acceptance structure behaves linearly, so a person would be predicted to pay 10 times as much to avoid a 10% risk of death as to avoid a 1% risk of death; because of the nonlinear, exponential increase in the WTP to avoid a risk of death, the WTP-VSL equation cannot provide an accurate VSL. Additionally, because these discrepancies are present in multiple different subsets of the WTP model, it appears that the nature of the human risk acceptance structure is the cause of the nonlinear exponential
increase in the WTP to avoid a risk of death, rather than some other factor such as self-sorting (Brannon, 2004). This nonlinear increase may help explain why VSL assessments can range so widely. Anecdotally, Landefeld and Seskin (1982) reported that three studies they reviewed assessed VSLs measured through the WTP approach at $38,000, $1.2 million, and $8.4 million. Others likewise noted that WTP measurements can produce VSLs that range from zero or even negative values to over $100 million (Brannon, 2004). Another significant problem with the WTP-VSL equation is that smaller risks may not be easily understood or visualized by the general public when individuals are asked to assign a value to the avoidance of a particular risk. As a result, individuals report different values than can be observed from their actual behavior outside the survey setting (Landefeld & Seskin, 1982). If participants do not understand their risk, they cannot be expected to accurately determine their preferences (Gold & van Ravenswaay, 1984). Studies also indicate that it is particularly hard for individuals to consistently evaluate small changes in risk, which further complicates the analysis and may prevent researchers from developing an accurate understanding of the functioning of the risk acceptance curve (Gold & van Ravenswaay, 1984). Further, perceptions of risk may impact valuations more than the actual risk reported to respondents. Gayer, Hamilton, and Viscusi (2000) performed a WTP analysis based on housing prices near EPA Superfund sites in Michigan, which found that the release of EPA reports on the environmental and health risks of a site had as much as a $700,000 impact on the VSL obtained through WTP metrics. If perceptions of risk are more important than the actual risk itself, it would be nearly impossible to measure the VSL using the WTP model. Attempts to evaluate individual WTP through surveys or other metrics also pose measurement problems.
Individuals may report one value but act as if another value is controlling. Participants are likely to understate their WTP if they believe that the information they provide will be used to create a payment structure, and likely to overstate their WTP if they believe the information will be used to create a benefit structure (Gold & van Ravenswaay, 1984). Essentially, people want to obtain the greatest personal benefit for the least personal cost, so they will (consciously or subconsciously) attempt to manipulate the process into providing what they want. Studies evaluating ways to control for this bias suggest that some methods might be effective, but that others more commonly relied upon clearly are not (Blumenschein, Blomquist, Johannesson, Horn, & Freeman, 2008). Thus, not every cost evaluation used in WTP analysis will rely on accurate data. These problems with WTP analysis suggest that something more objective is needed, whether it is adding some element of third-party analysis to WTP or
transitioning completely from WTP to HK measurement systems. Some researchers have attempted to add criteria to make these determinations more accurate and consistent. Aldy and Viscusi (2008) proposed one such modification, which seeks to account for the effects of age on VSL based on the assumption that "[a]s a worker ages, there are fewer years of remaining life expectancy, implying lower benefits for a given risk reduction, which should reduce the worker's willingness to pay to reduce that risk" (p. 574). While this modification does provide data that better accounts for factors that pure risk analysis cannot consider, and can supply a more objective metric for WTP analysis, it is ultimately unsatisfying because it rests on a key assumption about human behavior without any persuasive justification. Others raise potential factors relevant to both the WTP and HK models. Shogren and Stamland (2002), for example, argued that unequal skill in coping with risks results in different individuals having values that are misrepresented by the generalized VSL; they specifically argued that VSLs are typically skewed upward because of the failure of other measures to properly account for this skill. Another potential factor is a given worker's relative position within the wage bracket for his industry and job, as Kniesner and Viscusi (2005) proposed. However, this can be problematic, as it can be very difficult to determine what a worker's economic reference group is in order to determine what his or her relative position actually is (Kniesner & Viscusi, 2005).

Human Capital Model

When operating under the HK model, VSL is typically calculated based on future productive capacity, which is measured by the net present value of an individual's lifetime expected earnings (Landefeld & Seskin, 1982).
Put even more simply, it pegs an individual's value to his or her financial earnings over a lifetime (Wallace, 2012). The HK approach assumes that the value individuals add to society can be evaluated solely by their contribution toward gross national product through production (Gold & van Ravenswaay, 1984). As a result, HK analyses do not account for noneconomic activities such as recreation and leisure and can even ignore pain and suffering (which are often factors in compensation for wrongful death litigation, as will be evaluated later) (Rice & Hodgson, 1982). HK can take future consumption of the deceased individual into consideration by subtracting total forgone future expenses from total lost future wages to produce the total net loss of the individual (Gold & van Ravenswaay, 1984). HK can also account for differences between demographic groups (Wallace, 2012). HK analysis does have a substantial advantage over the WTP approach: it can provide estimates of the net present value of lost future wages with a greater degree of reliability and consistency, as it avoids
the methodological problems identified previously in the WTP approach (Rice & Hodgson, 1982). At least in theory, this results in a more workable metric that can produce more consistent results and thus enable analysis of regulatory costs across different contexts. The WTP approach appears more attractive than the HK approach to some researchers because of its theoretical ability to account for the nonfinancial motivations that HK analysis ignores (despite the fact that WTP seems unable to operationalize this measurement). This is one of the key drawbacks of the HK approach: by ignoring intangible motivations, it fails to provide an accurate assessment of human behavior and thus cannot produce a model that accurately assigns financial values to individuals (Gold & van Ravenswaay, 1984). As a result, HK assessments may underestimate the total impact of death (Rice & Hodgson, 1982). For example, it would seem extremely difficult to calculate the financial impact that the loss of a parent would have on a young child – an assessment that HK does not even attempt to make. In addition to failing to account for noneconomic activity, the HK model is also not ideal because it does not recognize that different individuals have different attitudes toward risk – something the WTP approach assumes can be controlled for by evaluating the aggregate willingness to pay of a particular group. The same methodology that allows HK to produce more exact economic calculations also means that it must standardize more factors, even though many individuals will not behave within the assumptions the model makes. The ability of the HK model to engage in demographic breakdowns is another point of ethical contention.
If individuals of a certain demographic are likely to have higher incomes than members of other demographics (such as white, middle-aged males compared to young minority females), that seems to suggest that some demographics are worth more than others (Gold & van Ravenswaay, 1984). To break demographics down even further, do certain consumptive activities that tend to increase income later in life (such as undergraduate or graduate education) impact an individual's worth? By extension, this could mean that an individual's value is determined at least in part by the skills he or she possesses (Shogren & Stamland, 2002). While the HK model solves some of the methodological problems posed by the WTP model, it opens the possibility of many other problems because of the unique limitations of the factors it considers. Because it fails to account for noneconomic factors which may influence the value individuals place on themselves and their families, HK is likely to underestimate the true impact of mortality (Landefeld & Seskin, 1982). However, some argue that because both WTP and HK have flaws in different areas, the best solution is to view them as complementary rather than competing measurement systems, as they measure different elements of value (Rice
& Hodgson, 1982). The key problem, as some scholars put it, is to balance accuracy and administrability (Posner & Sunstein, 2005). Having completed the analysis of the various methodologies used for calculating the financial value of human life, this study now turns to an examination of valuation methods in a different context: wrongful death litigation.

Modern Valuation in the Litigation Context

Casuistry of Jury Decisions

Jury decision-making dynamics are known for being inconsistent from case to case. One scholar found that jury awards for injuries involving quadriplegia and other serious injuries requiring lifelong care ranged from a low of $147,000 to a high of over $18 million (Bovbjerg, Sloan, & Blumstein, 1989). In part, this may be because the tort system focuses on individual differences between cases and attempts to provide specifically tailored compensation on a level that would be unthinkable in the regulatory context, which relies on one uniform value being assigned to each life saved by a given policy (Posner & Sunstein, 2005). Some argue that this inconsistency is a harmful result of each jury being isolated from the compensation decisions that other juries have made and thus suggest that some concepts from the regulatory system be brought into litigation to create more uniform results (Lahav, 2012). Others argue that the regulatory system should attempt to provide more individualized assessments such as those that occur in the realm of civil litigation (Posner & Sunstein, 2005). As a whole, juries have wide discretion in determining how much to award a successful plaintiff, though they may receive more substantive guidance from the litigants and their expert witnesses when dealing with economic damages. When juries consider how much to award a given victim, both parties can provide expert testimony and arguments for and against any particular method of calculating economic damages (Bovbjerg, Sloan, & Blumstein, 1989). However, when evaluating noneconomic damages, juries receive little guidance other than the subjective approaches that attorneys may present for evaluating the severity of pain and suffering and thus the damages for it (Bovbjerg, Sloan, & Blumstein, 1989).
The problem is that it is particularly difficult to assign a monetary value to a non-monetary injury when there is no objective system for measuring the severity of noneconomic damages (Geistfeld, 1995). Some jurisdictions permit juries to consider a variant of WTP analysis in wrongful death cases, called hedonic damages, which attempts to calculate the lost pleasure of life, but not all jurisdictions accept this methodology (Geistfeld, 1995). Some commentators argue that the lack of an objective standard also hampers judicial review of jury decisions as part of
an attempt to ensure greater uniformity of jury decisions (Bovbjerg, Sloan, & Blumstein, 1989; Geistfeld, 1995).

Criticisms of the jury system of compensation aside, the tort system serves a different purpose than the regulatory system, so there may be compelling reasons for the two systems to value human life differently. Posner and Sunstein (2005) argued that the tort system exists to provide both deterrence and compensation, whereas the administrative regulatory system focuses on maintaining optimal risk levels and deterring actions that push risk above those levels. The tort system thus has goals beyond regulatory efficiency: it seeks to account for the noneconomic factors that HK and WTP analysis often ignore, and its punitive damages serve to dissuade defendants from repeating the same tortious conduct. The case-focused orientation of the jury award system is therefore more compatible with the ethical framework of casuistry than with the utilitarian framework that suits the regulatory system. Although both attempt to value individual human lives, if the regulatory system and the jury award system have fundamentally different goals, it is logical that they would operate in fundamentally different ways. Perhaps, then, the different strengths of the various methods of calculating economic value are useful precisely because they offer a diversity of ways to determine appropriate compensation in a given set of factual circumstances, rather than a single universal concept applied across the board regardless of the facts.
Thus, those who argue that the systems should be combined may be missing a fundamental point: the systems are different for a reason, and attempting to combine the two (or, in some cases, to subordinate one to the other) risks depriving decision-makers of the methodology most useful for valuing individuals in the circumstances they face. Perhaps the best solution is not to pick one valuation system to use all the time but to pick the valuation system best suited to the task at hand. When compensating the survivors of a wrongful death victim, forcing each case into a one-size-fits-all compensation scheme would risk removing the nonfinancial elements from the equation and depriving the law of its humanity. Such a system, while perhaps suited to a regulatory environment that makes broad decisions about statistical probabilities, is unfit for a civil justice system focused on providing the right compensation for each particular set of victims. As such, any attempt to forcibly insert more utilitarian principles into the civil jury system
should be viewed with skepticism because they threaten to destroy the casuistry upon which it relies.

Special Issues Analysis

The analysis of these two systems leaves several important questions outstanding, most importantly whether either is compatible with respecting the inherent value of human life. This section explores that question and evaluates a case study that illustrates the interplay between the two different applications of valuation methodologies.

September 11th Victim Compensation Fund

Valuation systems need not always fit neatly into the bifurcated scheme of a utilitarian system suited to the regulatory context set against the casuistry of the civil jury award system. Sometimes special circumstances produce unique systems designed to compensate those who have suffered loss on a massive scale, such as the congressional compensation scheme for victims of the September 11th terrorist attacks: the September 11th Victim Compensation Fund. The Fund gave substantial discretion to Special Master Kenneth Feinberg, who created its compensation grid (Lascher Jr. & Martin, 2008). In doing so, he combined the utilitarian approach of regulatory policy with the casuistry of jury awards, bringing greater uniformity and consistency to the casuistry-based system: the compensation grid replaced lost future wages with a calculation that projected adjusted income through the expected date of retirement (Posner & Sunstein, 2005). Noneconomic damages were set at fixed levels for survivors, and total compensation for economic and noneconomic damages combined ranged from a low of $250,000 to a high of $7.1 million (Posner & Sunstein, 2005). Because this system has neither the deterrent intent of the federal regulatory system nor the tort system's extreme focus on particular cases, it can take parts of each and combine them into a coherent whole that has elements of both yet is distinct from each.
It shares the regulatory system's emphasis on uniform values and structure while retaining some of the jury award system's casuistry by allowing different outcomes based on the circumstances of the victim. The Fund's methodology would be unlikely to serve as effectively in either the regulatory role or the litigation role, but because the circumstances leading to the Fund's creation mixed elements of both areas, its blend of the two methodologies operates effectively to provide just levels of compensation to victims.
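The grid-style logic described above can be illustrated with a short, purely hypothetical calculation: a present value of lost income through retirement, plus a fixed noneconomic component, clamped to the Fund's reported overall range. Only the $250,000–$7.1 million range comes from the sources cited here; the function name, discount rate, fixed noneconomic figure, and input values are all illustrative assumptions, not the Fund's actual tables or methodology.

```python
def sketch_award(annual_income, age, retirement_age=65, discount_rate=0.03,
                 noneconomic_fixed=250_000,
                 floor=250_000, cap=7_100_000):
    """Hypothetical grid-style award: present value of projected income
    through retirement, plus a fixed noneconomic component, clamped to the
    Fund's reported overall range. The real grid was far more detailed."""
    years = max(0, retirement_age - age)
    # Present value of a level income stream over the remaining work years.
    pv_income = sum(annual_income / (1 + discount_rate) ** t
                    for t in range(1, years + 1))
    total = pv_income + noneconomic_fixed
    # Clamp to the Fund's reported low and high totals.
    return min(max(total, floor), cap)

# Illustrative inputs: a 40-year-old earning $60,000 per year.
print(round(sketch_award(annual_income=60_000, age=40)))  # about $1.29 million
```

Note how the clamp embodies the hybrid character discussed above: the fixed floor and ceiling impose regulatory-style uniformity, while the income projection preserves case-by-case variation.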

Respecting the Inherent Value of Life

Attempts to assign an economic value to individuals run an inherent risk of reducing human worth to merely economic terms and devaluing the noneconomic dignity possessed by the human person. As a result, we must ask the threshold question of whether it is ethical to engage in these sorts of valuation decisions in the first place. Traditional Judeo-Christian ethics places a strong emphasis on the value of each human being. As one author puts it, "A person is more than a demographic statistic... more even than an economic and political being... To treat a person as a person, to respect her rights as a person, therefore respects both God's handiwork and God himself" (Holmes, 2007, p. 89). Thus, Christian ethicists may understandably shy away from performing economic valuations of human life. Indeed, even many non-Christian scholars argue that attempting to assign economic values to human life is not only dangerous but also entirely unnecessary. Ever since Chapin argued that an economic focus on the value of human life would be harmful (Chapin, 1913), a number of scholars have been critical of assigning financial values to human life. Broome (1985) argued that such a system is unnecessary and undesirable:

If we fix no definite economic value on life, the decisions will still get made as they always have. Like many other hard decisions, they have to be made without the guidance of clear criteria. If they are to be made well, what we most need to improve is the process by which they are made. We need sensitive and humanitarian decision-makers, who will face up to the full difficulty of life-and-death decisions. But putting a money-value on life helps to make the decisions seem mechanical and easy. We do not want our rulers to be sheltered by their experts from a full appreciation of their responsibilities. (p. 292)

Other scholars adopt the opposite extreme and argue that public policy must always examine economic values for life in order to allocate resources efficiently (Blomquist, 1981). These claims seem to rest on the idea that the allocation of economic resources must be based on economic measurements rather than on noneconomic value judgments. To be sure, in some circumstances (such as wrongful death litigation), a value must be assigned. However, while these economic measurements may be useful (and sometimes even necessary), they tread a dangerous path, one that leads too close to the devaluation of the inherent dignity of the human person. While they may not necessarily be ethically problematic on their face, they create the possibility of reducing humans to mere economic creatures. Further, given the problems these valuation systems have in the operationalization of
the measurement metrics, they may be of only limited use in the first place. As such, it seems that skepticism of economic valuation decisions may be warranted.

Conclusion

Attempts to assign a financial value to human life have grown increasingly common over the past few decades with the expansion of the regulatory state. With more federal regulation comes a greater need to evaluate regulatory programs designed to preserve life, to ensure that they are as effective and efficient as possible. Outside that context, juries also make valuation decisions in wrongful death litigation. These two contexts rely on different valuation methods because they are intended to produce different results, reflecting the very different natures of the regulatory system and civil litigation. When elements of both contexts are present, a combination of the two systems can promote the interests of all relevant parties, as the September 11th Victim Compensation Fund demonstrates. In the end, though, attempts to assign financial values to human life must take great care not to stray into dehumanizing persons by viewing them solely in economic terms, without recognizing the noneconomic dignity that all human beings possess. When this extreme is avoided, these valuations can be useful for regulatory analysis and can provide the victims in wrongful death claims some measure of financial security. When misused, though, they threaten the humanity of the law by dehumanizing the very people the law is supposed to protect.

Reference List

Abelson, P. (2003). The value of life and health for public policy. Economic Record. Retrieved from pa03_health.htm

Acton, J. P. (1973). Evaluating public programs to save lives: The case of heart attacks. Santa Monica, CA: Rand Corporation. Retrieved from https://www.

Aldy, J. E., & Viscusi, W. K. (2008). Adjusting the value of a statistical life for age and cohort effects. Review of Economics and Statistics, 90(3), 573-581.

Blomquist, G. (1979). Value of life saving: Implications of consumption activity. Journal of Political Economy, 87(3), 540-558.

Blomquist, G. (1981). The value of human life: An empirical perspective. Economic Inquiry, 19(1), 157-164.

Blumenschein, K., Blomquist, G., Johannesson, M., Horn, N., & Freeman, P. (2008). Eliciting willingness to pay without bias: Evidence from a field experiment. Economic Journal, 118(525), 114-137.

Bovbjerg, R. R., Sloan, F. A., & Blumstein, J. F. (1989). Public policy: Valuing life and limb in tort: Scheduling "pain and suffering." Northwestern University Law Review, 83, 938-952.

Brannon, I. (2004). What is a life worth? Regulation, 27(4), 60-63.

Broome, J. (1985). The economic value of life. Economica, 52(207), 281-294.

Chapin, C. V. (1913). The value of human life. American Journal of Public Health, 3(2), 101-105.

Cross v. Guthery, Conn. App. LEXIS 20 (1794).

Gayer, T., Hamilton, J. T., & Viscusi, W. K. (2000). Private values of risk tradeoffs at superfund sites: Housing market evidence on learning about risk. Review of Economics and Statistics, 82(3), 439-451.
Geistfeld, M. (1995). Placing a price on pain and suffering: A method for helping juries determine tort damages for nonmonetary injuries. California Law Review, 83(3), 773-852.

Gold, M. S., & van Ravenswaay, E. O. (1984). Methods for assessing the economic benefits of food safety regulations: A case study of PCBs in fish (Agricultural Economics Report No. 460). Michigan State University, Department of Agricultural Economics. Retrieved from bitstream/201339/2/agecon-msu-460.pdf

Gray, W. B. (1987). The cost of regulation: OSHA, EPA, and the productivity slowdown. American Economic Review, 77(5), 998-1006.

Hall, R. E., & Jones, C. I. (2007). The value of life and the rise in health spending. Quarterly Journal of Economics, 122(1), 39-72.

Holmes, A. F. (2007). Ethics: Approaching moral decisions. Downers Grove, IL: InterVarsity Press.

Hyman, D. A., Black, B. S., Zeiler, K., Silver, C., & Sage, W. M. (2007). Do defendants pay what juries award? Post-verdict haircuts in Texas medical malpractice cases, 1988-2003. Journal of Empirical Legal Studies, 4(1), 3-68.

Kniesner, T. J., & Viscusi, W. K. (2005). Value of a statistical life: Relative position vs. relative age. American Economic Review, 95(2), 142-146.

Lahav, A. D. (2012). The case for "trial by formula." Texas Law Review, 90, 571-634.

Landefeld, J. S., & Seskin, E. P. (1982). The economic value of life: Linking theory to practice. American Journal of Public Health, 72(6), 555-566.

Lascher, E. L., Jr., & Martin, E. E. (2008). Beyond the September 11 victim compensation fund: Support for any future American terror casualties. PS: Political Science and Politics, 41(1), 147-152.

McMahan, J. (1988). Death and the value of life. Ethics, 99(1), 32-61.

Peeples, R., & Harris, C. T. (2015). What is a life worth in North Carolina? A look at wrongful-death awards. Campbell Law Review, 37, 497-518.
Posner, E. A., & Sunstein, C. R. (2005). Dollars and death. University of Chicago Law Review, 72, 537-598.

Raymond, R. (1999). The use, or abuse, of hedonic value-of-life estimates in personal injury and death cases. Journal of Legal Economics, 9(3), 69-96.

Rice, D. P., & Hodgson, T. A. (1982). The value of human life revisited. American Journal of Public Health, 72(6), 536-538.

Shogren, J. F., & Stamland, T. (2002). Skill and the value of life. Journal of Political Economy, 110(5), 1168-1173.

Stone, A. (1982). Regulation and its alternatives. Washington, D.C.: Congressional Quarterly.

Symmons, J. M. (1938). The value of life. Economic Journal, 48(192), 744-748.

Torpy, B. (2004). Life - hard to know what price is right. Global Development and Environment Institute. Retrieved from us/Frank/ackerman_ajc_3-04.html

Viscusi, W. K. (1979). The impact of occupational safety and health regulation. Bell Journal of Economics, 10(1), 117-140.

Viscusi, W. K. (1983). Frameworks for analyzing the effects of risk and environmental regulations on productivity. American Economic Review, 73(4), 793-801.

Viscusi, W. K., & Moore, M. J. (1987). Workers' compensation: Wage effects, benefit inadequacies, and the value of health losses. Review of Economics and Statistics, 69(2), 249-261.

Viscusi, W. K. (1988). Product liability and regulation: Establishing the appropriate institutional division of labor. American Economic Review, 78(2), 300-304.

Viscusi, W. K. (1994). Mortality effects of regulatory costs and policy evaluation criteria. RAND Journal of Economics, 25(1), 94-109.

Wallace, S. J. (2012). How much are you worth? An examination of the value of human life in public policy (Master's thesis, Georgetown
University). Retrieved from bitstream/handle/10822/557656/Wallace_georgetown_0076M_11582.pdf?sequence=1&isAllowed=y

George Wythe was one of the premier scholars in early American history. The first law professor in America, he was a strong supporter of the War for Independence and a zealous patriot. Many of America’s most influential leaders, such as Thomas Jefferson, studied under his guidance. Furthermore, as a framer of the Constitution, Wythe left a legacy that the United States still honors to this day. The George Wythe Review adopted his name in memory of his brilliant scholarship and in hopes that this journal might emulate Wythe’s dedication to our country.

The George Wythe Review gratefully recognizes the Collegiate Network for its contribution to the success of our publication.
