The Toxics Reader

Collected Chapters on Environmental Health

Island Press Washington • Covelo • London


Contents

Toxic Overload, from Evolution in a Toxic World: How Life Responds to Chemical Threats by Emily Monosson
The Underside of High Tech, from High Tech Trash: Digital Devices, Hidden Toxics, and Human Health by Elizabeth Grossman
The Breath of Our Fathers, from Lives Per Gallon: The True Cost of Our Oil Addiction by Terry Tamminen
Pricing the Priceless, from Poisoned for Pennies: The Economics of Toxics and Precaution by Frank Ackerman
Diagnosis Mercury, from Diagnosis: Mercury: Money, Politics, and Poison by Jane Hightower
Material Consequences: Toward a Greening of Chemistry, from Chasing Molecules: Poisonous Products, Human Health, and the Promise of Green Chemistry by Elizabeth Grossman


Introduction

More than 100,000 synthetic chemicals are currently used in industrial and consumer products, and only about 3,000 have been studied extensively. No wonder their health effects are the subject of such intense public debate, everywhere from parenting blogs to federal agencies. Consumers, public officials, industrial chemists, academic researchers, and countless other professionals ask: Does BPA cause cancer? What is a safe level of mercury in fish? Is cost-benefit analysis appropriate in cases of life and death? Can chemicals become “benign by design”?

Publishing authoritative books on environmental health, Island Press offers clear, evidence-based analysis of these complex issues. This reader provides a sampling from important books in the field. The first excerpt, from Evolution in a Toxic World by Emily Monosson, explores how the science of toxicology must evolve if we are to gain a better understanding of our increasingly complex chemical landscape. The second excerpt, from Elizabeth Grossman’s High Tech Trash, reveals the health consequences of now-ubiquitous electronic gadgets. Next, a selection from Lives Per Gallon by Terry Tamminen shows why car exhaust may prove just as deadly as cigarette smoke.

In the second half of the reader, we turn from questions of science to those of policy and business. Frank Ackerman questions the value of cost-benefit analysis when it comes to toxics and public health in an excerpt from Poisoned for Pennies. In Diagnosis: Mercury, Jane Hightower advocates reforms that can protect seafood consumers from mercury. And with a selection from Chasing Molecules, Elizabeth Grossman shows how industrial chemistry can become safer and greener.

Each of these excerpts represents just a slice of the information to be found in the full books. And those books represent a much smaller slice of the information needed to make sense of our toxic world. But they also represent a growing understanding.
As Island Press continues to build this body of knowledge, we hope that you will continue reading and contributing to the dialogue.




Evolution in a Toxic World
How Life Responds to Chemical Threats
Emily Monosson

Contents

Hardcover $35.00 ISBN: 9781597269766

Preface
Acknowledgments
Chapter 1: An Introduction

PART 1: ELEMENT
Chapter 2: Shining a Light on Earth’s Oldest Toxic Threat?
Chapter 3: When Life Gives You Oxygen, Respire
Chapter 4: Metal Planet

PART 2: PLANT AND ANIMAL
Chapter 5: It Takes Two (or More) for the Cancer Tango
Chapter 6: Chemical Warfare
Chapter 7: Sensing Chemicals
Chapter 8: Coordinated Defense

PART 3: HUMAN
Chapter 9: Toxic Evolution
Chapter 10: Toxic Overload?

Appendix: Five Recent Additions to the Chemical Handbook of Life
Notes
Selected Bibliography
Index


Chapter 10

Toxic Overload?

Complex morphological or life history traits that depend on many generations are less likely to evolve quickly in small-population, long-generation species. These are the species, including most of the large animals and plants, that are most at risk of extinction due to poor ability to adapt evolutionarily to global change.
—Stephen Palumbi

The final chapter of this book is truly a work in progress. How will life’s toxic defense mechanisms respond to industrial-age chemicals? We are the products of an ancestral line that survived the sun’s intense ultraviolet light, the earth’s metals, the poisons of plants and animals, and even life’s own renegade cells. Whether one takes the long view of evolution or a contemporary view, naturally occurring chemicals have profoundly influenced evolution. As a result, our lives depend on redundant detoxification systems, membranes studded with protein pumps, filter-like organs, and layers of protective skin. Yet in today’s industrialized world, these defenses are challenged in ways unlike any in the past. The opportunity for an evolutionary mismatch is not a matter of if, but when. Life has existed on Earth for over three billion years; modern humans have been here for less than three hundred thousand years. Yet within our incredibly brief tenure on this planet, we have transformed
it to the extent that our impact may join other major events in life’s history, like oxygenation, asteroids, and ice ages, as a cause of major extinctions. Extinction is common; in fact, without ongoing extinction and speciation, we wouldn’t be here. According to Anthony Barnosky and colleagues, major extinctions of the past share several features, including rapid rates of loss and the eradication of over 75% of existing species: “Hypotheses to explain the general phenomenon of mass extinctions have emphasized synergies between unusual events. Common features of the Big Five suggest that key synergies may involve unusual climate dynamics, atmospheric composition and abnormally high-intensity ecological stressors that negatively affect many different lineages. This does not imply that random accidents like a Cretaceous asteroid impact would not cause devastating extinction on their own, only that extinction magnitude would be lower if synergistic stressors had not already ‘primed the pump.’”1

If all else were in balance, perhaps our use of industrial chemicals would create nothing more than a blip, affecting only a few localized populations. But combined with climate changes, habitat degradation, the spread of invasive species, and the overuse of resources, our alteration of the world’s chemistry could contribute to (and perhaps cause) the sixth major extinction. Barnosky and colleagues conclude, “The huge difference between where we are now, and where we could easily be within a few generations, reveals the urgency of relieving the pressures that are pushing today’s species towards extinction.”2

A dire picture, yet we also know that life is resilient. Resiliency is in our genes, and much of it results from the challenge of maintaining balance amid unrelenting environmental stressors, from physical to chemical.
Perhaps the metazoan strategy of multiple lines of defense— in addition to compartmentalization of defenses (protective skin, the filtering role of the liver, and in certain mammals, a multilayered placenta surrounding the fetus) and the capacity for evolution in contemporary time—is adequate. But what does it mean to be adequate? How much of a “hit” can a single cell, an animal, or a population take before it becomes irreparably altered? The DNA repair in our skin cells stems the damage caused by UV rays leaking through the atmospheric ozone layer. This system is and has always been fallible—but is there less room for error with increased need for repair? Then there is ground-level ozone. How vulnerable are our lung cells to ozone emanating from our cities and nearby regions? And as ozone attacks our lung
cells, will their response to increased loads of microscopic particulates—which contribute to the production of reactive oxygen species—compound the damage? At what point does life become overwhelmed? Global atmospheric changes have certainly been associated with major extinctions, and oxygen provides one of the oldest documented examples. Ironically, after most living things had not only developed protections against oxygen toxicity but also become dependent on the gaseous molecule, reductions in oxygen may have sparked one of the first widespread extinctions of complex animal life-forms around five hundred million years ago.3 Our activities may pale in comparison to the rise and fall of oxygen, but we are, for now, the single most important purveyor of change to the earth’s environments. We have made these changes in a blink in time. Though sudden climate change or habitat degradation may be more obviously detrimental, chemical contaminants could set the most vulnerable species on the path to extinction— and they could rapidly and directly affect evolutionary change in others. Over a decade ago, a prominent group of ecologists and geochemists published an article in the journal Science titled “Human Domination of Earth’s Ecosystems.” In their opening paragraph, they wrote, “All organisms modify their environment. . . . Many ecosystems are dominated directly by humanity, and no ecosystem on Earth’s surface is free of pervasive human influence.”4 Since its publication, this article has been cited more than three thousand times. Though focused on dramatic changes—including land transformation, degradation of marine ecosystems, and changes in biogeochemical cycles—the authors also highlighted the importance of synthetic organic chemicals. 
They noted that in the late 1990s, we produced more than “100 million tons of organic chemicals representing some 70,000 different compounds.”5 Since then, the manufacturing of all chemicals has increased by roughly 15% in the United States alone. In the 1950s, industrial plastic resin production was a mere 3.3 billion pounds; today, the world’s plastics production is over 500 billion pounds, with 300 billion additional pounds added over the past two decades.6 And the persistence of some of these chemicals (specifically, halogenated organics like PCBs and DDT, and metals like mercury; for details, see the selected chemical profiles in the appendix) means that despite efforts to curtail their release, they continue cycling in the environment. In 2009, the Chemical Abstracts Service, which catalogs and tracks all
known chemicals, announced the registration of its fifty-millionth “novel” chemical—the last ten million chemicals having been registered over the preceding nine months.7 There are plenty more novel chemicals to be found. According to some estimates, the universe of stable organic compounds ranges anywhere from 10¹⁸ to 10²⁰⁰.8 Of the industrial and consumer-use chemicals, the toxicologist Thomas Hartung estimates that two thousand to three thousand have been studied extensively; three thousand to five thousand have been tested using rapid but limited testing protocols; eleven thousand are prioritized for testing; and upward of a hundred thousand are in use, in millions of synthesized substances and in an unlimited number of combinations.9 It is a daunting situation. Many of these chemicals have already made their way into living things, from deep-sea fish to polar bears to humans.10 And although there is growing evidence of contemporary evolution in response to some chemicals in some species, the broader potential for these new chemicals to contribute to environmental selection pressures (whether acting alone or in combination with others, whether naturally occurring or human induced) is unclear.

In addition to age-old metals like mercury and cadmium, chemicals that are now a part of life include the “legacy chemicals,” like PCBs and dioxins, and current-use chemicals, like the perfluorinated compounds we depend on for our waterproof and nonstick products, the polybrominated chemicals that act as flame-retardants in plastics and foams, and the bisphenol A in plastics and other products. All of these are new to the earth’s environments and life’s chemistry. Predicting and quantifying the risks from long-term exposures to humans and other species, particularly intergenerational impacts, is one of the greatest challenges facing toxicologists, reproductive biologists, ecologists, managers, and regulators. Prevention is at the heart of chemical regulation.
Yet it is a process that has been experiencing its own chemical overload for decades, as regulatory agencies struggle to incorporate cutting-edge analytical techniques while facing a backlog of untested or inadequately tested chemicals.11 In an unprecedented letter published in Science, members of eight different scientific and clinical societies—from the American Society of Human Genetics to the Society for the Study of Reproduction—representing more than forty thousand members, expressed their growing concern over the inability of federal chemical regulation to safeguard Americans from the impacts of chemical exposure: “Most, if not virtually all Americans, are exposed to contaminants in the environment that cause serious health effects in animal models. Direct links to humans remain uncertain, but there is sufficient experimental evidence to raise concern. Furthermore, there is growing evidence that some chemicals once thought to be safe and allowed into common and, in some cases, abundant commercial use may not be as benign as previously assumed.”12

Some commercial chemicals will come and go, leaving little if any trace—even as they cause toxicity to individual members of a species. Others will leave their mark buried deep within the earth’s soils and sediments. And some will leave their mark on life in the form of altered allele frequencies, the result of selective processes. Chemical testing and regulation have no doubt improved when it comes to protecting humans and wildlife from acutely toxic chemicals. Yet the more problematic chemicals are those that slip through unnoticed, causing subtle impacts on biological systems. We overlook the dangers of many chemicals because we fail to understand the biological relevance of the system with which they interact; or we are unable to predict how very small amounts might behave in the presence of other chemicals; or we focus on one response and neglect the networked nature of life’s response to chemicals and other stressors. While predicting toxicity is an ongoing challenge, our ability to detect chemicals has greatly improved—to the extent that an ever-growing list of industrial-use and consumer-use chemicals is routinely measured in both human and wildlife populations.13 Some of these chemicals have been banned for years, while others remain a large part of our chemical culture. In the appendix, I’ve included very brief profiles of a few select chemicals: PCBs, mercury, CFCs, endocrine-disrupting chemicals in general, and nanoparticles.
Most have been discussed in earlier chapters and all are important—not only because of their presence in the environment and therefore in life itself, but also because they serve as an example of the risks we take by allowing the large-scale release of chemicals. These chemicals are, for now and the foreseeable future, a part of life on Earth. Of course, these few examples are just the tip of the chemical iceberg. Consider atrazine, one of the most commonly used pesticides in the world (although banned by the European Union). There are a growing number of papers citing atrazine’s subtle and not so subtle reproductive and developmental effects on frogs, and now fish, at environmentally relevant concentrations. A recent panel convened by the EPA concluded that “atrazine may affect hormonal milieu [in
women], and possibly reproductive health outcomes.”14 Yet it remains the herbicide of choice in the United States and around the globe.

Then there are the flame-retardants known as polybrominated diphenyl ethers (PBDEs). These chemicals are structurally related to PCBs, yet the use of PBDEs rose after the banning of PCBs. Like PCBs, these chemicals are now found throughout the world, in living things as far removed from industrial processes and consumer products as Arctic waterfowl, seals, and polar bears. A recent study suggests that PBDEs interfere with women’s ability to become pregnant (i.e., their “fecundability”).15 To date, only certain members of the PBDE chemical family have been removed from the market. Perfluorinated chemicals, used in nonstick and waterproof coatings, have also recently caused concern. These chemicals are long-lived, cycling around the environment in unexpected ways. High concentrations cause reproductive and developmental impacts in animal studies, yet it is unknown how the small amounts measured in human blood might affect health and development. Like PBDEs, their pervasiveness in the environment and in humans is enough to cause concern.16

Placing all these chemicals into an evolutionary context, there are many questions we might ask: What are the long-term effects? How will small amounts of these chemicals interact when they mingle in our blood, or within a liver cell or an embryo? How might combinations of these chemicals affect a fish or frog embryo as it also copes with temperature changes? Or alterations in food sources? Or a degraded habitat? Will underlying genetic changes provide these populations with greater resiliency to different threats in the future? Or will they cause a reduction in genetic variability, leaving them more vulnerable? And how might they affect longer-lived species with few offspring—like humans?
Advances in toxicology, chemistry, and related fields, combined with aggressive regulation, have led to a dramatic reduction in the indiscriminate release of industrial chemicals, at least in the United States and Europe. Now the challenge has shifted from curtailing the obvious to ferreting out the insidious. Toxicology is no longer a “kill ’em and count ’em” science focused solely on gross effects—it hasn’t been for decades, and lethality testing is on the decline. It is now a science charged with identifying subtle, multigenerational, long-term impacts of chemicals.

The science of toxicology covers a broad spectrum. On one end, toxicologists seek to understand how chemicals interact at the cellular and molecular level. At the other end are those who strive to understand and predict the effects of chronic exposures to complex chemical mixtures on individuals and populations, from microbes to beluga whales. The two ends meet where receptors transport chemicals that turn on DNA, where enzymes are produced in response, and where networks in cell membranes respond in a coordinated manner: the mechanisms of the toxic response. It is here that incorporating evolutionary concepts may be most useful.

(R)evolution at Both Ends of the Spectrum

Throughout this book I have referred to the role of omics as a collection of methodologies providing toxicologists, molecular biologists, geneticists, and evolutionary biologists with the tools to dig deeper into the phylogenetic and evolutionary history of genes, proteins, and enzymes. How might toxicologists harness the power of these tools to sift through the overwhelming number of chemicals and identify those presenting the greatest threat, at least to humans? One method has gained a great deal of attention and momentum since it was first proposed in 2007 by the National Research Council’s Committee on Toxicity Testing and Assessment of Environmental Agents and reported in Toxicity Testing in the 21st Century. The committee recommended a transition from whole-animal testing, combined with a patchwork of in vitro tests for specific enzyme or receptor function, to an approach that reveals the network of responses by identifying and analyzing the mechanisms and pathways of toxicity (PoT).17 It would be as if one could peer into a human cell and visualize the workings of its biological machinery in response to a chemical intruder. It is a vision requiring a paradigm shift from defined end points to “identification of critical perturbations of toxicity pathways that may lead to adverse health outcomes.”18

According to committee member Melvin Anderson and committee chair Daniel Krewski, the report has sparked “a healthy and necessary discussion within the scientific community about the opportunity and challenges provided by the NRC vision for the future of toxicity testing.”19 The reactions, they observe, range from “extremely cautious, even pessimistic” to “guardedly optimistic.” While some voiced concerns that the new technique must truly fit the needs of the field and that toxicologists and regulators must not place “overly high expectations” on new methodology, others emphasized the importance of
appropriately defining adverse effects and the difficulties of translating in vitro effects to whole animals.20 Whatever the outcome of this particular effort to modernize the field, toxicology is at a crossroads. “The path forward,” write Anderson and Krewski, “will not be easy. It will require hard work, commitment to improving our current test methods, and an ability to make midcourse changes as scientific advances in toxicity testing are realized and the interpretive tools needed to evaluate new toxicity test data mature. The larger question is whether the effort is worthwhile. Our opinion on this remains unchanged. Toxicity test methods need to make better use of human biology and mode of action information to adequately assess risks posed to humans at relevant exposure levels.”21

One approach to improving mode of action will involve genomics combined with proteomics, or toxicogenomics. It is a methodology with the potential to combine the advantages of both reductionist and holistic approaches to chemical toxicity—like a living impressionist painting. Yet making the approach useful, write Daniel Krewski and colleagues in a summary of their NRC report, requires not only selecting key pathways but also a refined sense of the “patterns and magnitudes of perturbations” indicative of adverse effects.22 Taking the approach one step further, Thomas Hartung and Mary McBride suggest aiming for a comprehensive catalog of these paths, including those key responses that are most sensitive and those that might provide some capacity to buffer and prevent toxic effects—in essence, the complete human toxome.23 These pathways likely include some of those discussed throughout this book, which, when in response mode or overwhelmed, may be stepping-stones along the path to toxicity. Throughout its existence, life has fended off toxic chemicals—so how many pathways have evolved over billions of years? “At the moment,” observe Hartung and McBride, “any number is pure speculation.
. . . Evolution cannot have left too many Achilles heels given the number of chemicals surrounding us and the astonishingly large number of healthy years we enjoy on average. How many PoT there are depends very much on the definition of PoT—what is a PoT on its own, what are variants, what are groups, etc.? Most likely we still focus too much on linear pathways. As we increasingly learn that processes in living organisms are networked we will likely learn that PoT are mostly perturbations of the network, not a one way chain of events.”24 Acknowledging that a toxic response is the outcome of an overloaded network of “normal” responses, seeking out signals of these
perturbations at the molecular or mechanistic level would, at the very least, provide insights into the complexities of life’s response to chemicals. It would also present an opportunity to learn about the interactions between critical chemical combinations and cells of different type, age, and sex, and from different human populations. One way to understand the role of PoT, and perhaps to identify those which are key identifiers of chemical perturbations, is to seek out the evolutionary history of these pathways and networks, from original functions to the building and diverging of gene families to their current distribution across species. It is a tall order, yet one that is within closer grasp than ever before. The discussion above belies the continuing tradition of toxicology as an applied science that has served regulators and managers for over a century. Since its origins, toxicology has (like life itself) diverged, duplicated, and evolved. Some of the earliest and most traditional branches included those devoted to the protection of human health, and so, by definition, the individual. This is the goal for the twenty-first century: to better protect individuals. It is a relatively straightforward charge in contrast to that of ecotoxicology. This field emerged in the tumult of the late 1960s and 1970s, following the publication of Rachel Carson’s groundbreaking Silent Spring25 and the advent of the Environmental Protection Agency. By definition, it must confront the complexity of life because it focuses both on individuals and on populations, communities, and ecosystems. One of the more important insights that population studies have afforded us is, as discussed in chapter 9, the influence of toxic chemicals on the evolution of exposed populations. As observations of contemporary evolution become more frequent, we must consider the most insidious consequences of our activities, including the large-scale release of chemical contaminants. 
The chemicals we have unleashed on the earth are causing genetic change, possibly on a large scale—and we are only just now grasping this reality. As the geneticist John Bickham writes, “Of fundamental importance, it must be emphasized that these genetic impacts are emergent effects not necessarily predictable by study of contaminant exposures or even the understanding of the mechanisms of toxicity, even though contaminant exposure is the root cause of the effects.”26 It is a humbling thought, particularly as toxicologists spar over the definition of adverse effect and shape the way for a new, bottom-up, mechanistic approach to toxicology. To borrow Richard Feynman’s famous phrase, just as “there is plenty of room at the bottom,”27 there is also plenty of room at the top.
We might do well to keep both ends in mind if we are to protect future generations of humans and wildlife. Before concluding, I must acknowledge that there are many aspects of toxic defense and its evolution that I have either purposely or inadvertently neglected. As stated in the preface, my aim was to present a different context for thinking about toxicology, rather than to provide a complete literature review of all related ideas. Yet there is one very important emerging discipline that demands mention: green chemistry. If successful, this approach would reduce our chemical impact and perhaps make much of the above discussion moot. And it might benefit, if only in subtle ways, by looking to the evolution of life’s responses to toxic chemicals. As a relatively new field popularized over the past two decades, green chemistry aims to design safer chemicals. (Those interested in learning more might begin with the review by the green chemistry pioneers Paul Anastas and Nicolas Eghbali, “Green Chemistry: Principles and Practice.”)28 Recent advances in toxicology, particularly the focus on how living things interact with chemicals at the molecular and genetic level, in addition to the depth provided by studying the evolutionary history of these responses, will likely benefit practitioners of green chemistry.

A Conclusion Evolves?

As we have seen throughout this book, much of the research into the evolutionary roots of life’s responses to toxic chemicals has emerged only over the past decade or two. This melding of toxicology and evolution reflects, in part, the revolution in genomics, the increasing curiosity of biochemical or molecular toxicologists about the origins of interspecies similarities and differences in the biochemical systems and defensive systems to which they’ve devoted their careers, and the growing awareness of the role of toxicants in contemporary evolution.

Given the history of life’s relationship with chemicals, one wonders how defenses that evolved for three billion years on a prehuman and preindustrial Earth will fare in this modern world. The easy answer is not well—particularly when we consider the impacts of large amounts of toxic chemicals on individuals, whether mollusks, fish, birds, or humans. The more difficult answer must address what happens when individuals are chronically exposed to small concentrations of the complex mix of chemicals we have released into this world. When DNA
photolyase kicks into action in amphibians living at high altitude, what happens if these animals are exposed to the chlorinated pesticides and mercury that rain down from the atmosphere? If our CYP enzymes are increasingly metabolizing a variety of pharmaceuticals, what happens when we add one more, or change our diet, or breathe in chemicals like polyaromatics bound to micron-sized air pollution particulates? Or how might a fish exposed to estrogenic chemicals respond to subtle temperature changes? More important in the end, how might populations of marine worms, fish, birds, or humans respond? For nearly one hundred years toxicology has addressed these questions by taking a relatively top-down and piecemeal approach, moving from observations on the whole animal, to organs, to biochemical changes, and now, more recently, to genes. As toxicologists begin looking from the bottom up, their understanding of how these systems responded to chemicals and developed over time may allow them to integrate traditional techniques with newer approaches—and make them more effective. It has often been written that we live in a sea of toxic chemicals. While this is true today, it was also true eons ago—with one glaring difference. Life on Earth is now subject to a virtual onslaught of chemicals associated in one way or another with human activity. To understand how life might respond to this unprecedented chemical milieu, we must explore how it evolved in the past. This book is meant only as a beginning. My hope is that all branches and levels of toxicology, from the classroom to predictive models, will eventually incorporate broad thinking about evolution. We are a society built on chemicals, and there is no turning back. Yet we can certainly improve how we produce, use, and release chemicals by striving for a better understanding of how they affect wildlife and human health. We have to do so. 
There is no higher ground, no corner of the earth where life can escape the influence of toxic chemicals. The choice must not be to “evolve or die.”




ADVANCE PRAISE FOR HIGH TECH TRASH “Lizzie Grossman is among our most intrepid environmental sleuths—here she uncovers the answer to one of the more toxic questions of our time, namely—what does happen to that cell phone/laptop/iPod when it breaks and you toss it away? The answer isn’t pretty, but the power with which she delivers it should be the spur to real change.” —Bill McKibben, author of The End of Nature “In lyrical and compelling language, High Tech Trash exposes the ecological underside of the sleek, clean world of electronic communication. Who knew that miniature semi-conductors required such vast amounts of toxic chemicals for their creation? Who knew that these chemicals have now become as globalized as the digital messages their products deliver? From Arctic ice caps to dumps in southern China, Grossman takes readers on an amazing world tour as she reveals the hidden costs of our digital age. This is a story for our times.” —Sandra Steingraber, author of Living Downstream: An Ecologist Looks at Cancer and the Environment

ELIZABETH GROSSMAN is the author of Watershed: The Undamming of America and Adventuring Along the Lewis and Clark Trail and co-editor of Shadow Cat: Encountering the American Mountain Lion. Her writing has appeared in The Nation, Orion, The Seattle Times, The Washington Post, and other publications. She lives in Portland, Oregon.

“[High Tech Trash] will change the way you shop, the way you invest your money, maybe change the way you vote. It will certainly change the way you think about the high tech products in your life.” —Kathleen Dean Moore, author of The Pine Island Paradox

ISBN 1-55963-554-1

JACKET DESIGN BY KAI L. CHU JACKET PHOTOGRAPH © BASEL ACTION NETWORK AUTHOR PHOTOGRAPH BY JOHN CAREY

www.hightechtrash.com


“In this astonishingly wide-ranging investigation, Elizabeth Grossman exposes the toxic fallout from manufacturing and discarding high-tech gadgetry. A sleuth of silicon, Grossman is endlessly curious, and never content with easy answers. Her news is grim—for our electronic footprint is global—but High Tech Trash offers hope: computers can be less toxic, and they can be designed for reuse and recycling.” —Elizabeth Royte, author of Garbage Land: On the Secret Trail of Trash

HIGH TECH TRASH

The remedies lie in changing how we design, manufacture, and dispose of high tech electronics. It can be done. Europe has already led the way in regulating materials used in digital devices and in e-waste recycling. But in the United States, many have yet to recognize the persistent damage hidden contaminants can wreak on our health and the environment. In High Tech Trash, Elizabeth Grossman deftly guides us along the trail of toxics and points the way to a smarter, cleaner, and healthier Digital Age.

ENVIRONMENTAL SCIENCE / POLICY

HIGH TECH TRASH Digital Devices, Hidden Toxics, and Human Health

ELIZABETH GROSSMAN

WHAT HAPPENS TO YOUR COMPUTER WHEN YOU FINALLY REPLACE IT? To your cell phone, your iPod, your television set? Most likely, they—and untold millions of digital devices like them—will join the global stream of hazardous high tech trash. The Digital Age was expected to usher in an era of clean production, an alternative to smokestack industries and their pollutants. But as environmental journalist Elizabeth Grossman reveals, high tech may be sleek, but it’s anything but clean. From the mine to the lab to the dump, the manufacture and disposal of electronics generates an astonishing volume of toxic materials. Producing a single two-gram microchip can create dozens of pounds of waste, including chemicals that are showing up in our food and bodies. Americans alone discard five to seven million tons of high tech electronics each year, which now make up much of the heavy metals found in U.S. landfills. But the problem goes far beyond American shores, most tragically to China, India, and other developing countries, where shiploads of discarded electronics arrive daily. There, they are “recycled”—picked apart by hand, exposing thousands of workers and residents to toxics. “This is a story in which we all play a part,” Grossman points out. “If you sit at a desk in an office, talk to friends on your cell phone, watch television, listen to music on headphones, are a child in Guangdong, or a native of the Arctic, you are part of this story.”



High Tech Trash Digital Devices, Hidden Toxics, and Human Health Elizabeth Grossman

Contents

Paperback $22.50 ISBN: 9781597261906

Preface Chapter 1: The Underside of High Tech Chapter 2: Raw Materials: Where Bits, Bytes, and the Earth’s Crust Coincide Chapter 3: Producing High Tech: Environmental Impact Chapter 4: High-Tech Manufacture and Human Health Chapter 5: Flame Retardants: A Tale of Toxics Chapter 6: When High Tech Electronics Become Trash Chapter 7: Not In Our Backyard: Exporting Electronic Waste Chapter 8: The Politics of Recycling Chapter 9: A Land Ethic for the Digital Age Appendix: How to Recycle a Computer, Cell Phone, TV, or Other Digital Devices Notes Selected Bibliography Index



CHAPTER ONE

The Underside of High Tech The rapidity of change and the speed with which new situations are created follow the impetuous and heedless pace of man rather than the deliberate pace of nature.1 —Rachel Carson, Silent Spring, 1962 If future generations are to remember us with gratitude rather than with sorrow, we must achieve more than just the miracles of technology. We must leave them a glimpse of the world as God really made it, not just as it looked after we got through with it.2 —President Lyndon B. Johnson, 1965

A harbor seal arches her back and dives, a graceful comma of brown on the steel blue water of San Francisco Bay. A school of herring darts through the saltwater off the coast of Holland. A polar bear settles down to sleep in a den carved out of Arctic ice. A whale cruises the depths of the North Sea and a chinook salmon noses her way into the Columbia River on her way home to spawn. In the Gulf of Mexico, a bottlenose dolphin leaps above the waves. A seagoing tern lays an egg. A mother in Sweden nurses her baby, as does a mother in Oakland, California. Tissue samples taken from these animals and from these women’s breasts contain synthetic chemicals used to make the plastics used in


computers, televisions, cell phones, and other electronics resist fire. Americans have the highest levels of these compounds in their blood of any people yet tested, and the same chemicals have been found in food purchased in grocery stores throughout the United States.

On the shores of the Lianjiang River in southern China, a woman squats in front of an open flame. In the pan she holds over the fire is a smoky stew of plastic and metal—a melting circuit board. With unprotected hands she plucks out the microchips. Another woman wields a hammer and cracks the glass of an old computer monitor to remove the copper yoke. The lead-laden glass screen is tossed onto a riverside pile. Nearby, a man sluices a pan of acid over a pile of computer chips, releasing a puff of toxic steam. When the vapor clears a small fleck of gold will emerge. Up and down the riverbanks are enormous hillocks of plastic and metal, the discarded remains of electronic appliances—monitors, keyboards, wires, printers, cartridges, fax machines, motors, disks, and cell phones—that have all been exported here for inexpensive, labor-intensive recycling. A bare-legged child stands on one of the mounds, eating an apple. At night, thick black smoke rises from a mountain of burning wires. In the southern Chinese city of Guiyu—one of the places in Asia where this primitive recycling takes place—an estimated 80 percent of the city’s 150,000 residents are engaged in processing the million or more tons of electronic waste that have been arriving there each year since the mid-1990s.3

Mines that stretch for miles across the Arizona desert, that tunnel deep under the boreal forests of northern Sweden, and others on nearly every continent produce ore and metals that end up in electronic gadgets on desktops, in pockets, purses, and briefcases, and pressed close to ears all around the world.
In a region of the Democratic Republic of the Congo wracked by horrific civil war, farmers have left their land to work in lucrative but dangerous, landslide-prone coltan mines. Sales of this ore, which is used in the manufacture of cell phones and other devices, have helped finance that war as well as the fighting between Uganda and Rwanda in this mineral-rich region of Africa. Although they are mostly hidden, metals make up over half the material in the world’s approximately one billion computers. A typical desktop computer can contain nearly thirty



pounds of metal, and metals are used in all electronics that contain semiconductors and circuit boards (which are themselves 30 to 50 percent metal)—from big plasma screen TVs to tiny cell phones. Extracted and refined at great cost, about 90 percent of the metal that goes into electronics eventually ends up in landfills, incinerators, or some other kind of dump. Traffic on the highway that runs between San Francisco and San Jose is bumper to bumper. Haze rises from the vehicle-clogged road. Office plazas, strip malls, and housing developments stretch out against the backdrop of hills that frame the valley. Pooled beneath the communities of Santa Clara, Cupertino, and Mountain View, California—to name but a few—are thousands of gallons of poisonous volatile organic compounds left by the manufacture of semiconductors. California’s Silicon Valley now has more toxic waste sites subject to cleanup requirements under the federal government’s Superfund program than any other region of comparable size in the United States. In parts of Mountain View, the U.S. Environmental Protection Agency (EPA) has found in groundwater levels of trichloroethylene (TCE)—a solvent used in semiconductor production that the EPA recognizes as a carcinogen—that may be sixty-five times more toxic than previously thought.4 Official estimates say it will take decades, if not a century or more, to complete the cleanup. Families in Endicott and other communities in Broome and Dutchess counties in upstate New York are grappling with the same problem, living above a groundwater plume contaminated for over twenty years with TCE and other solvents used in microchip manufacture. In the high desert country of New Mexico, the ochre and mustard colored cliffs of the Sandia Mountains rise above the Rio Grande valley. Globe mallow and prickly pear sprout from the sandy soil. This is the third most arid state in the nation, and the past decade has been marked by drought. 
Yet one of the handful of semiconductor manufacturers located near Albuquerque has used about four million gallons of water a day—over thirty times the water an average American household uses annually5—while sending large quantities of toxics into the local waste stream. Similar scenarios have emerged in other parts of the country where semiconductor manufacture has taken place—among them, the Texas hill country around Austin, the Boston area landscape that gave rise to the American Revolution, and the suburban sprawl that surrounds Phoenix. Residents of Endicott, New York, and Rio Rancho, New Mexico, have asked the Agency for Toxic Substances and Disease Registry (part of the U.S. Department of Health and Human Services) to assess the health impacts of hazardous air pollutants—including trichloroethylene, methanol, ethylene chloride, and several perfluorocarbons—emitted by high-tech manufacturers located in their communities.

Semiconductors come off the assembly line in numbers that dwarf other manufactured products, but because microchips are so tiny, we’re less inclined to think about their environmental footprint. One of Intel’s Pentium 4 chips is smaller than a pinky fingernail and the circuit lines on the company’s new Itanium 2 chips are smaller than a virus—too small to reflect a beam of light.6 Producing something of this complexity involves many steps, each of which uses numerous chemicals and other materials and a great deal of energy. Research undertaken by scientists at United Nations University and the National Science Foundation found that at least sixteen hundred grams of fossil fuel and chemicals were needed to produce one two-gram microchip. Further, the secondary material used to produce such a chip amounts to 630 times the mass of the final product, a proportion far larger than for traditional low-tech items.7 In 2004 some 433 billion semiconductors were produced worldwide, and the number continues to grow.8
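The ratios quoted above can be checked with quick arithmetic. The sketch below simply re-derives them from the figures as stated in the text; the average-household water figure is an assumed round number chosen for illustration, not one given in the book.

```python
# Back-of-envelope check of figures quoted in the text.
# All inputs are as stated in the chapter; the average-household
# water figure is an assumption, not a number from the book.

# One two-gram microchip takes "at least sixteen hundred grams
# of fossil fuel and chemicals" to produce.
chip_mass_g = 2
inputs_g = 1600
print(inputs_g / chip_mass_g)   # 800.0 -> inputs outweigh the chip about 800 to 1

# "Secondary material ... amounts to 630 times the mass of the final product."
secondary_g = 630 * chip_mass_g
print(secondary_g)              # 1260 grams of secondary material per two-gram chip

# A fab using about 4 million gallons of water a day, compared with an
# assumed average U.S. household use of roughly 130,000 gallons per year.
fab_gal_per_day = 4_000_000
household_gal_per_year = 130_000  # assumption for illustration only
print(fab_gal_per_day / household_gal_per_year)  # ~30.8 -> "over thirty times"
```

Even granting generous uncertainty in the household figure, the fab-to-household ratio stays in the dozens, which is the point the comparison is making.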

The Information Age. Cyberspace. The images are clean and lean. They offer a vision of business streamlined by smart machines and high-speed telecommunications and suggest that the proliferation of e-commerce and dot-coms will make the belching smokestacks, filthy effluent, and slag heaps of the Industrial Revolution relics of the past. With this in mind communities everywhere have welcomed high technology under the banner of “clean industry,” and as an alternative to traditional manufacturing and traditional exploitation of natural resources. But the high-tech industry is far from clean.



Sitting at my desk in Portland, Oregon, I can tap a few keys on my laptop to send a message to Hong Kong, retrieve articles filed in Brussels, see pictures of my nieces in New York, and play the song of a wood stork recorded in Florida. Traveling with my laptop and cell phone, I have access to a whole world of information and personal communication—a world that exists with increasingly little regard to geography, as electricity grids, phone towers, and wireless networks proliferate. This universe of instant information, conversation, and entertainment is so powerful and absorbing—and its currency so physically ephemeral—that it’s hard to remember that the technology that makes it possible has anything to do with the natural world. But this digital wizardry relies on a complex array of materials: metals, elements, plastics, and chemical compounds. Each tidy piece of equipment has a story that begins in mines, refineries, factories, rivers, and aquifers and ends on pallets, in dumpsters, and in landfills all around the world.

Over the past two decades or more, rapid technological advances have doubled the computing capacity of semiconductor chips almost every eighteen months, bringing us faster computers, smaller cell phones, more efficient machinery and appliances, and an increasing demand for new products. Yet this rushing stream of amazing electronics leaves in its wake environmental degradation and a large volume of hazardous waste—waste created in the collection of the raw materials that go into these products, by the manufacturing process, and by the disposal of these products at the end of their remarkably short lives.
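That eighteen-month doubling rate compounds quickly. As a rough illustration (the clean eighteen-month period and twenty-year span are round numbers the text states only approximately):

```python
# Compounding effect of capacity doubling roughly every eighteen months.
# Both the 18-month period and the 20-year span are illustrative round numbers.
years = 20
doublings = years * 12 / 18     # one doubling per 18 months
growth = 2 ** doublings
print(round(doublings, 1))      # ~13.3 doublings in two decades
print(f"{growth:,.0f}")         # roughly a 10,000-fold increase in capacity
```

A ten-thousand-fold jump in capacity in twenty years is what drives both the flood of new products and the matching flood of obsolete ones.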
Thanks to our appetite for gadgets, convenience, and innovation—and the current system of world commerce that makes them relatively affordable—Americans, who number about 300 million, own over two billion pieces of high-tech consumer electronics: computers, cell phones, televisions, printers, fax machines, microwaves, personal data devices, and entertainment systems among them.9 Americans own over 200 million computers, well over 200 million televisions, and about 200 million cell phones (world cell phone sales topped 1 billion in 2006).10 With some five to seven million tons of this stuff becoming obsolete each year,11 high-tech electronics are now the



fastest growing part of the municipal waste stream, both in the United States and in Europe.12 In Europe, where discarded electronics create about six million tons of solid waste each year, the volume of e-waste—as this trash has come to be called—is growing three times faster than the rest of the European Union’s municipal solid waste combined.13 Domestic e-waste (as opposed to e-waste imported for processing and recycling) is accumulating rapidly virtually everywhere in the world that PCs and cell phones are used, especially in populous countries with active high-tech industries like China—which discards about four million PCs a year14—and India. The United Nations Environment Programme estimates that the world generates some twenty to fifty million metric tons of e-waste each year.15 The Wall Street Journal, not known for making rash statements about environmental protection, has called e-waste “the world’s fastest growing and potentially most dangerous waste problem.”16 Yet for the most part we have been so bedazzled by high tech, adopted its products with such alacrity, been so busy thriving on its success and figuring out how to use the new PC, PDA, TV, DVD player, or cell phone, that until recently we haven’t given this waste—or the environmental impacts of manufacturing such products—much thought.

Compared to waste from other manufactured products, particularly the kind we are used to recycling (cans, bottles, paper), high-tech electronics—essentially any appliance containing semiconductors and circuit boards—are a particularly complex kind of trash. Soda cans, bottles, and newspapers are made of one or a few materials. High-tech electronics contain dozens of materials—all tightly packed—many of which are harmful to the environment and human health when discarded improperly. For the most part these substances do not pose health hazards while the equipment is intact.
But when electronics are physically damaged, dismantled, or improperly disposed of, their toxics emerge. The cathode ray tubes (CRTs) in computer and television monitors contain lead—which is poisonous to the nervous system—as do circuit boards. Mercury—like lead—a neurotoxin, is used in flat-panel display screens. Some batteries and circuit boards contain cadmium, known to



be a carcinogen. Electronics contain a virtual alphabet soup of different plastics, among them high-impact polystyrene (HIPS), acrylonitrile butadiene styrene (ABS), and polyvinyl chloride (PVC). A typical desktop computer uses about fourteen pounds of plastic, most of which is never recycled. PVC, which insulates wires and is used in other electronic parts and in packing materials, poses a particular waste hazard because when burned it generates dioxins and furans—both persistent organic pollutants.17 Brominated flame retardants, some of which disrupt thyroid hormone function and act as neurotoxins in animals, are used in plastics that house electronics and in circuit boards. Copper, antimony, beryllium, barium, zinc, chromium, silver, nickel, and chlorinated and phosphorus-based compounds, as well as polychlorinated biphenyls (PCBs), nonylphenols, and phthalates, are some of the other hazardous and toxic substances used in high-tech electronics. A 2001 EPA report estimated that discarded electronics account for approximately 70 percent of the heavy metals and 40 percent of the lead now found in U.S. landfills.18

In many places, solvents that have been used in semiconductor manufacture—trichloroethylene, ammonia, methanol, and glycol ethers among them—all of which adversely affect human health and the environment, have ended up in local rivers, streams, and aquifers, often in great volume. Semiconductor production also involves volatile organic compounds and other hazardous chemicals—including methylene chloride, Freon, and various perfluorocarbons—that contribute to air pollution and can potentially adversely affect the health of those who work with them. Numerous lawsuits have already been brought by high-tech workers who believe their health or their children’s has been harmed by chemicals they were exposed to in high-tech fabrication plants.
Manufacturing processes and materials change continually and at a pace that far outstrips the rate at which we assess their environmental impacts—particularly in the realm of chemicals, where new compounds are introduced almost daily. Health and safety conditions throughout the high-tech industry have improved over the years, and the business has become more transparent. But the way in which the United States goes about assessing risks posed by chemicals used in high-tech manufacture has not changed,



and many of the environmental and health problems now being dealt with were caused by events that took place over twenty years ago. Despite the enormous quantity of electronic waste generated, and the fact that we have been producing this trash at accelerating rates since the 1970s, regulations and systems for dealing with this refuse have only recently been developed and put to work. In this, government policies regulating e-waste in the United States lag conspicuously behind those in Europe and Asian-Pacific countries. As of 2006, electronics recycling is mandatory throughout the European Union (although some countries have delayed compliance) and companion legislation restricts the use of certain hazardous substances in electronic products.19 Recycling is also now mandatory in Australia, Japan, South Korea, and Taiwan, and it soon will be in Hong Kong and Singapore. As of this writing, comparable national legislation has yet to be introduced in the U.S. Current EPA and industry estimates find that no more than 10 percent of Americans’ discarded consumer electronics are being recycled.20 Given the volume of electronics purchased and discarded in the United States, that we rely on voluntary measures to keep high-tech trash from harming the environment is like using a child’s umbrella to stay dry during a monsoon. 
And despite international regulations designed to prevent the export of hazardous waste from richer to less well-off countries, an estimated 80 percent of a given year’s electronic waste makes its way from countries like the United States and the United Kingdom to poorer countries—like China, Pakistan, India, and those in west Africa—where huge amounts of equipment are dismantled in unsafe conditions or are discarded in ways acutely harmful to the environment.21 No auditable figures are available, but industry experts estimate that about half a million tons of electronics are recycled in the United States annually.22 Because this is no more than a tenth of what is discarded, somewhere between two and four million tons of e-waste from the United States alone has likely been making its way overseas each year for low-tech recycling. A recent study of e-waste in southern China found that about 75 percent of the electronics being processed there came from the United States.23
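The mass balance implied by these figures can be sketched in a few lines. This is only arithmetic on the numbers as quoted above, not independent data:

```python
# Rough mass balance on the chapter's U.S. e-waste figures (tons per year).
discarded_low, discarded_high = 5_000_000, 7_000_000  # "five to seven million tons"
recycled = 500_000                                    # industry estimate of U.S. recycling

# Domestic recycling is "no more than a tenth of what is discarded":
print(recycled / discarded_low)       # 0.1 even at the low end of the discard range

# The quoted export estimate of two to four million tons per year
# works out to roughly 30 to 80 percent of the discarded stream:
print(2_000_000 / discarded_high)     # ~0.29
print(4_000_000 / discarded_low)      # 0.8
```

Whatever the exact share, the gap between what is discarded and what is verifiably recycled at home is large enough that millions of tons per year are unaccounted for.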



Some forty years have passed since Rachel Carson caught the world’s attention with Silent Spring. A number of the synthetic chemicals Carson wrote about are now banned, but we continue to create new compounds with persistent adverse environmental and health impacts. Some of these manufactured substances are used to produce high-tech electronics, products that have become virtually ubiquitous throughout the developed world. Many high-tech electronics contain substances whose environmental impacts—local, global, short and long term—have not been dealt with before and which we do not yet understand. As we become increasingly dependent on the rapid electronic transfer of information, while telling ourselves that we are moving beyond the point where economies depend on the obvious wholesale exploitation of natural resources, we are also creating a new world of toxic pollution that may prove far more difficult to clean up than any we have known before.

That we have ignored the material costs of high tech is not surprising. Historically, industrial society has externalized many of the costs associated with its waste, expecting these costs to be borne not by manufacturers or purchasers of the products, but by communities and absorbed by the environment. Until the passage of clean air and water laws, industry could dump its effluent without expecting to be responsible for the consequences. In many ways high tech is a manufacturing industry like any other. But its public profile is very different from that of traditional industries. Because high tech enables us to store encyclopedias’ worth of information on something smaller than a donut, we have—until very recently—overlooked the fact that miniaturization is not dematerialization. As an illustration consider this passage from Being Digital by Nicholas Negroponte, published in 1995, which aptly characterizes the bloom and boom of high-tech culture and how our thinking about high tech tends to divorce the machinery from the information it transmits. (Caveat: In computer-chip generations, a statement about the Digital Age written a decade ago is like looking back at a view of the world penned during the



Roaring Twenties—and my aim is not to quarrel with or single out Mr. Negroponte for having made this observation.)

The slow human handling of most information in the form of books, magazines, newspapers and videocassettes, is about to become the instantaneous and inexpensive transfer of electronic data that move at the speed of light . . . Thomas Jefferson advanced the concept of libraries and the right to check out a book free of charge. But this great forefather never considered the likelihood that 20 million people might access a digital library electronically and withdraw its contents at no cost.24

The point Negroponte, a professor of media technology at MIT, wished to emphasize was digital technology’s potential to make information universally accessible, presumably without a cash transaction or equivalent thereof. Yet the phrase “at no cost” leaps out because it reinforces the perception that these digital gadgets perform their marvels with no material impacts whatsoever. Where the garbage goes; where a plume of smoke travels; where waste flows and settles when it gets washed downstream; how human communities, wildlife, and the landscape respond to waste. These are costs that traditionally fall outside the scope of the industrial balance sheet and that industry is just beginning to figure into the cost of doing business. As Jim Puckett, director of Basel Action Network, a Seattle-based nonprofit that tracks the global travels of hazardous waste, told me in 2004, “Humans have this funny idea that when you get rid of something, it’s gone.” The high-tech industry is no exception.

Laws regulating industrial waste have begun to protect human health and the environment from what comes out of chimneys and drainpipes, yet with few exceptions (e.g., state bottle bills) there is little mandatory collection of used consumer products in the United States. Manufacturers bear little responsibility for the post-consumer disposal of their finished products, and there are few industry-specific, legally binding bans on the use of toxic materials. But in the European Union, laws that became effective in 2005 and 2006 require manufacturers to take back used electronics for recycling



and to eliminate certain hazardous substances from their products. These regulations—known as the WEEE (Waste Electrical and Electronic Equipment) and RoHS (Restriction of the Use of Certain Hazardous Substances) directives—are influencing what happens in the United States. Given the global nature of the high-tech industry, these materials standards will, in effect, become world standards, as it’s simply not practical to have different manufacturing streams for individual markets. High tech may thus become one of the first industries being seriously pushed to internalize the costs of waste throughout the products’ life cycle and to design products with fewer adverse environmental impacts.

As of early 2007, the United States remains far from enacting any national e-waste legislation.25 Yet over the past several years, more than half of all states have introduced some sort of e-waste bill. Meanwhile, most major high-tech manufacturers have set up some kind of take-back program to facilitate recycling and reuse of their products. Manufacturers have also been teaming up with retailers, nonprofits, and local governments to hold used-electronics collection events. However, the burden of finding and using these programs still lies entirely with the consumer, and many are far more cumbersome, limited, and costly than comparable programs in Europe and Japan. And despite the fact that the United States has the highest per capita concentration of PCs, research published in 2005 revealed that 95 percent of American consumers do not know the meaning of “e-waste” and 58 percent were not aware of an electronics recycling program in their community.26

However we cope with high-tech trash from now on, it’s important to remember that many generations of this waste have already entered the global environment. As long ago as 1964 President Lyndon B.
Johnson cautioned, “The bright success of science also has a darker side.” We must, he said, “control the waste products of technology.”27 But virtually none of the books chronicling the rise of high technology or high tech’s social and cultural influences consider the industry’s impacts on human health or the environment. While knowledge of these impacts has existed for much longer, it has only been since the late 1990s that the world has begun



to confront the environmental realities of high-tech manufacturing and e-waste in any substantive way. Spurred by shocking pictures of this waste, the persistence of contaminated groundwater, serious health concerns about chemical exposure, and troubling scientific discoveries, we’re scrambling to catch up.

There are many reasons why we have allowed high-tech trash to pile up and pollute. Some are commercial: the historic practice of letting consumers and communities bear the burdens of waste. Some are political: the sway business and industry hold over public policy, particularly in the United States. And some are cultural: our embrace of the new, which seems to go hand in hand with our acceptance of all things disposable. It hasn’t helped us come to grips with high tech’s waste that when thinking about high tech many of us blur the distinction between hardware and software, forgetting that in addition to armies of computer-science jocks encoding the next operating system or search engine, high tech also means tons of chemicals, metals, and plastics. The problems created by high-tech trash, however, cannot be blamed on ignorance of the harm caused by industrial and chemical pollution, for by the time the high-tech industry came of age, professional knowledge and public consciousness of industrial pollution had been thoroughly raised.

The tangible effects of e-waste and the environmental and health impacts of high-tech manufacture may be out of sight for many people, but this is by no means a story of abstractions or problems so remote that they can be safely shelved. Nor is it a story that hinges on hair-splitting analyses of risk or an issue frothed up by worried advocates who yearn for simpler times. This is a story in which we all play a part, whether we know it or not. Information-age technology has linked the world as never before, but its debris and detritus span the earth as well.
From product manufacture and marketing, raw material collection, order fulfillment, disposal and recycling—and because the cultures and politics of Europe, Asia, and the Americas influence what we consider waste and how we treat it, and because ecosystems do not respect political boundaries—this is an international story. If you sit at a desk in an office, talk to friends on your cell phone, watch television, listen to music on headphones, eat cheese bought



in a supermarket almost anywhere in America, are a child in Guiyu, or a native of the Arctic, you are part of this story.

This is probably a good place to interject that I am not a Luddite and that this book will not be an exercise in technology bashing. I am not anti-computer; I do not hate cell phones, abhor e-mail, or despise the Internet. Like most first-world citizens of the twenty-first century, I rely on these devices for much of my work, some entertainment, and personal communication. But I do not believe that “smart machines” and high-tech electronics can solve problems on their own or that they can replace human or natural creation and interaction. They are simply tools, to be used wisely and with inspiration, or not, as the case may be. The point of this book’s investigations is not to condemn high technology, computers, and all their electronic relations, but to explore how the material demands of the digital age—as currently configured—are affecting the natural world and the health of human communities and how these problems are being addressed.

My interest in high-tech waste began a few blocks from my house, on the banks of the Willamette River in Portland, Oregon. In 2000 I wrote a report for Willamette Riverkeeper, a nonprofit river conservation group, investigating the toxics released directly into the Willamette. Thanks to decades of public outcry about the state of the river, many of the older pollutants—sewage, wood-products, and canning waste—that fouled the river for generations had been greatly reduced or eliminated. We wanted to find out how that progress might be holding up. Using information available through the EPA’s Toxics Release Inventory (reporting required of industries that use over a certain volume of toxics monitored by the EPA), we discovered that between 1995 and 1997 alone the volume of toxics released directly into the Willamette Basin doubled, as did the amount of these toxics diverted to public treatment plants.
The largest volumes of these chemicals came from factories producing semiconductors and from those processing metals, chemicals, and other materials for high-tech products.28 As in other communities around the country, high tech had been


01-Ch1-2

5/1/07

14

1:21 PM

Page 14

HIGH TECH TRASH

encouraged to settle in Oregon’s Willamette Valley—part of the Pacific Northwest sometimes referred to as the Silicon Forest. The discovery that the high tech—a great economic, and in many ways, social and cultural boon to the Northwest and an industry commonly considered a “clean” or “green” alternative to timber and paper—was a major source of toxic pollution surprised all who read the report. Since 2000, the world’s awareness of e-waste and its impacts has burgeoned. Electronics recycling has become law in Europe, Japan, and other Asian countries. Most major manufacturers have met the European Union’s 2006 deadline for eliminating certain toxics from electronics. In 2002 ten U.S. states considered legislation concerning e-waste disposal. In 2006 over thirty states considered e-waste bills. A handful of states have now passed substantive legislation--among them California, Maine, Maryland, Massachusetts, and Washington. New studies on the impacts of chemicals used in high-tech products, on improvements in equipment design, manufacturing, disposal, and recycling appear almost daily. These topics have been the subject of intense debate on all sides of the Atlantic and the Pacific. Yet an enormous gap remains between what professionals and general high-tech consumers know about the hazards posed by e-waste and the environmental impacts of hightech manufacturing, let alone the importance of solving these problems. I hope this book will help narrow this gap. For it seems to me that without this understanding we will continue to behave as if high-tech products exist in some kind of cyberuniverse, one that has little to do with the air we actually breathe, the water we drink, the food we eat, or our children’s health. 
The policies being formulated to deal with e-waste would not be under discussion in Brussels or Beijing or Washington, DC, if it were not for years of work, first by environmental and consumer advocates and then by legislators and business leaders who understand that the long-term sustainability of both industry and communities depends on making some important changes. As an environmental issue, e-waste may lack the charisma of endangered species, ancient forests, and wilderness, but the push to put e-waste onto policy makers’ agendas has similarly come from the grassroots—from concerned citizens and savvy NGOs in the United States, Europe, Asia, and elsewhere around the world.



The issue of e-waste has brought together activists from Hong Kong, India, London, California, the Philippines, the Netherlands, Texas, Wisconsin, and Seattle with academics and researchers from China, Sweden, New York, and Tokyo; with local and national government officials from around the world; and with executives from the world’s leading high-tech manufacturers, retailers, and recycling and mining companies. There are tensions and great disagreements between environmental advocates and those representing industry and government, and between companies with very different corporate cultures, but this is by no means a story of good guys versus bad guys. It’s more complicated than that. I’ve been to half a dozen or more conferences on electronics recycling and related environmental issues since 2002, each attended by hundreds of industry professionals but by only a small number of environmental advocates. Without prodding from the environmental community, however, I don’t think that any of the changes now taking place would be happening. And it’s a charming irony that none of this global activity—on any side of the activist, industry, and government equation—would be possible, or as effective, without the aid of high technology itself.

In the conclusion of his book Enough, Bill McKibben lists what a number of influential thinkers consider to be the most significant innovations of the twentieth century. The two McKibben himself selects are nonviolence and wilderness. “Nonviolence, wilderness—these are the opposite of catalysts,” he writes. “They’re technologies that act as brakes, that retard our pell-mell rush forward, that set sharp boundaries on where we’re going and how we’ll get there. Right now, they aren’t as important as computers. But one can at least envision a world in which they might be.” And he continues, “We’ve been told that it’s impossible—that some force like evolution drives us on to More and Faster and Bigger. ‘You can’t stop progress.’ But that’s not true. We could choose to mature. That could be the new trick we share with each other, a trick as revolutionary as fire. Or even the computer.”29



Engineering and technology alone will not pull us out of the morass of high-tech trash. They must be accompanied by a desire to curtail the use of hazardous chemicals and to stop e-waste from piling up and from infiltrating the landscape, the atmosphere, the world’s wildlife, and our bodies. If we change our culture of instant obsolescence, our penchant for “More and Faster and Bigger,” our habit of ignoring the health and environmental impacts of manufacture until they have taken their toll, and our high-tech habit of tossing trash over the backyard fence, there will still be commerce, intellectual and scientific advancement, entertainment, electronic love letters, Listservs, digital relay of pictures, and wireless calls made to check on far-flung friends and family, but it won’t be business as usual.

Changes in manufacturing, design, and disposal that will reduce the environmental impact of high-tech electronics are already under way. But a great many more need to be made. I set out to write this book with the hope of illuminating why such changes are so important—and because I believe that the more we know about the environmental and health problems caused by high-tech trash and high-tech manufacturing, and the wider this knowledge is spread, the more quickly these problems may be solved.


Lives per Gallon
The True Cost of Our Oil Addiction
Terry Tamminen

Contents

Paperback $25.00 ISBN: 9781597265056

Prologue: The Origin of the Specious
Chapter 1: The Breath of Our Fathers
Chapter 2: A Losing Proposition
Chapter 3: Desperate Enterprise
Chapter 4: All That Glitters
Chapter 5: Wealth Seems Rather to Possess Them
Chapter 6: Worse Poison to Men’s Souls
Chapter 7: 2025 - A Future without Oil
Chapter 8: The Quality of Mercy - Oil on Trial
Epilogue: The Seventh Generation
Notes
Bibliography
Index



Chapter 1

The Breath of Our Fathers

Have you ever really thought about the 3,000 gallons of air you breathe each day? If so, then you know that 99 percent of it is oxygen and nitrogen, which can be metabolized or otherwise transformed in the human body. What might have escaped your notice is the other 1 percent, which is inert argon gas that is not modified, but simply inhaled and exhaled. Because of the finite, unalterable supply of argon in the atmosphere, we share it with every other living thing on our planet, and we always have. Harvard researchers have thought a lot about this 1 percent. They calculate that by the age of twenty, each of us has inhaled argon atoms that were literally exhaled by dinosaurs, Gandhi, Shakespeare, and a carpenter from Bethlehem.1 This surprising fact about argon illustrates that we literally share the same air with all living things on the planet, both past and present, making it a unique resource. So how do we treat our air, something that is arguably sacred?

One hundred percent of the air we breathe is contaminated with human-made pollutants, fouled with a toxic stew of oil-related ingredients bearing acronyms like PAHs, PM 2.5, BTEX compounds, and GHGs. In some instances the amounts may be negligible, but the majority of Americans live in areas where air pollution exceeds even the most basic state or federal standards designed to protect public health. Although you probably know people who have broken a limb or succumbed to cancer, you may not think you know anyone who actually suffers from disease related to air pollution. Those who sued tobacco companies for hiding the harms of their products were told by the companies that cigarette smoking alone could not be held responsible for the illness and deaths of people like my father. These people, the companies said, were also routinely exposed to other sources of hazardous air pollution, especially from cars. And indeed they were. Research conducted on half a million Americans between 1982 and 1998 confirmed that a lifetime exposure to petroleum pollution exacerbated the effects of smoking, resulting in “enhanced mortality.”2 In short, not just tobacco but petroleum literally began to take away the breath of my father, and of millions of Americans like him, from the day they were born. Yet it is not tobacco that poses the biggest threat to human health from the smoke it blows in our faces. It is the petroleum industry that is listed by the Geneva Protocol on Air Pollution as the largest single source of harmful air pollution worldwide.3 Have we made a Faustian deal with the Devil? Are we now indefinitely obliged to pay our part of the bargain in human lives? Fast, seemingly cheap, independent transportation in exchange for higher rates of birth defects, asthma, emphysema, and years shaved from our lives?

“It required no science to see that there was something produced in great cities which was not found in the country,” opined Dr.
Henry Antoine Des Voeux in his 1905 paper “Fog and Smoke,” the first scientific article to use the term smog.4 For more than a hundred years we have known something about air pollution in general. When did we learn about the specific contribution to smog made by motor vehicles?

What Did Ozzie and Harriet Know? A Brief History of Smog

By 1957, when black-and-white television sets broadcast Ozzie and Harriet and Leave It to Beaver—iconic images of a pure American lifestyle in which the greatest problem was a milkman who forgot to deliver the cheese—Congress had already become concerned about the health effects of vehicle pollution. Legislation was introduced that would prohibit from U.S. roadways any motor vehicle that discharged pollution in excess of levels found dangerous by the U.S. Surgeon General.5 Needless to say, that legislation didn’t pass, and by 1961, the U.S. Department of Health estimated that 90 percent of all Americans lived in localities with harmful air pollution directly linked to vehicle exhaust.6 By the time the nation lost JFK two years later, our rose-colored glasses began to clear, and government agencies told us that every day in the United States automobiles were discharging 430 tons of nitrogen dioxide (NO2), a harmful pollutant and component of smog.7 Little was done by the federal government to address this growing problem, however, and when Richard Nixon won the White House in 1968, those emissions had increased by more than 50 percent over the levels of just five years earlier. As bras were burned and a war in Southeast Asia was fought and protested, researchers uncovered a correlation between higher death rates and higher smog levels.8 They also found that our elders suffered smog-related illnesses, such as asthma and bronchitis, at ten times the rate of middle-aged Americans, and that people in poor health at any age were much more likely to die prematurely if they breathed smog.9 By the time an actor from California left the White House in 1989, evidence from around the world showed that smog hastened the death of thousands of people every year and caused or aggravated bronchitis, cancers, heart disease, and asthma in millions more.10

My family first moved to Los Angeles in 1963. We lived in an apartment in a suburb called Tarzana, not far from glamorous Hollywood, on land that had once belonged to Edgar Rice Burroughs. While our relatives in the Midwest shoveled snow from their frozen driveways, we sat by the swimming pool, mesmerized by the tales of old Hollywood provided by our neighbor Harry, a retired stuntman for Eddie Foy. “That’s haze,” Harry would correct us, if we dared ask about the obscured skyline above the palm trees. “We don’t have smog in Hollywood.”

Fortunately, in those days, scientists had more credibility with government regulators than did stuntmen. Although the federal government failed to act, the State of California recognized that weather patterns and mountainous geography conspired to create and magnify a serious smog problem in places such as the Los Angeles basin.11 State officials had formed county air pollution control districts as early as 1947 to combat the growing threat. As a result, and because the federal government ultimately followed California’s legislative lead and enacted national laws and pollution standards, there have been remarkable air quality improvements in the intervening years. Nonetheless, some pollution measurements in the Los Angeles region today remain more than double the U.S. Environmental Protection Agency (USEPA) standards and therefore continue to pose significant health risks.12 Many of the chemicals found in petroleum products—and the air pollution caused by their manufacture, storage, distribution, and combustion—are defined by the USEPA as “materials that cause death, disease, or birth defects in organisms that ingest or absorb them.”13 Although much of this discussion draws on the California experience for examples, there are, as the American Lung Association pointed out in early 2006, more than 152 million people living in areas of the United States where the air quality puts their health at risk.14 Sorry, Harry, but that includes Hollywood.

Secondhand Petroleum Smoke: The Six Most Dangerous Pollutants in Smog

After more than a hundred years of research, many of the health effects of smog are well understood, yet they remain very much out of control, both in the United States and in a growing number of cities around the globe. Detailed descriptions of the effects on human health and the environment appear later in this chapter, but first let’s examine the six most toxic constituents of the inescapable secondhand smoke we create by burning millions of barrels of petroleum in the United States every day.15

1. Particulate matter. Have you ever sat in traffic behind a big truck or an old school bus and experienced the black soot that belches from the smokestack or tailpipe when it starts up or accelerates? That’s particulate matter (PM), measured as large particles of 10 microns or less and small ones of 2.5 microns or less, called PM 10 and PM 2.5, respectively. To get an idea of this scale, consider that the human eye can detect objects as small as 35 microns (a micron is a millionth of a meter) and that a grain of table salt is about 100 microns. Soot from fires and dust from unpaved roads, agriculture, or construction generate significant amounts of PM 10, in addition to the amount pumped into our air by the incomplete burning of petroleum fuels. That’s what the soot from the school bus really is—petroleum that hasn’t burned in the engine, especially a diesel engine. The smaller particles, PM 2.5, which are even more insidious in terms of human health, are generated by some industrial smokestacks, but the major culprits are emissions from vehicle tailpipes and the exhaust of planes, ships, trucks, and trains.16 These fine particles are especially toxic, causing respiratory ailments, cardiopulmonary disease, premature death, low-birthweight babies, and infant deaths. Research reveals that illnesses, such as asthma and lung cancer, and death rates rise on days when the amount of particulate matter in the air also rises. Conversely, evidence shows the benefits of decreasing particulate matter in the air: illnesses and death rates drop.17

2. Volatile organic compounds. Volatile organic compounds (VOCs) include a wide variety of products that easily evaporate, hence the term volatile. The fumes from paint, pesticides, cosmetics, and solvents, and the distinctive odor you notice when you pump gasoline, are examples of VOCs. The VOCs in petroleum products, including benzene, 1,3-butadiene, and polycyclic aromatic hydrocarbons (PAHs), are particularly toxic and are known to cause cancer. Petroleum VOCs are especially insidious because they are not only emitted from the tailpipe while the engine is running, but they also fill the air above most parked vehicles, especially on hot days as the fuel expands in the engine and fuel lines. VOCs are known carcinogens and reproductive toxins, causing leukemia, lymphatic tissue cancers, birth defects, bronchitis, and emphysema. Benzene is the “bad boy” of VOCs, with a growing body of evidence showing that the health harms from inhaling it are significant even at very low levels.18

3. Ozone. “Asthma now hits 1 in 10 children, study says,” blares the headline from the Canadian Globe and Mail on January 27, 2006. The newspaper article describes the surprise of regulators and researchers alike, but Canadians could have read even more ominous findings in the Fresno Bee as early as December 2002, when that paper ran the story “Last Gasp,” reporting that one in six children in Fresno, California, carries an asthma inhaler to school.


Tamminen-02-Ch01 (9-26)

7/24/06

11:16 AM

Page 15

Chapter 1. The Breath of Our Fathers

15

Or during the 2000 Summer Olympics in Atlanta, Georgia, when researchers discovered that decreased traffic in the metropolitan area over that two-week period of the games correlated to decreases in ozone levels and fewer hospital emergency visits by children complaining of breathing difficulties.19 Although ozone in the upper atmosphere shields Earth from the sun’s harmful ultraviolet radiation, high concentrations at ground level are a threat to human health, animal health, and plant life. Ozone is formed when reactive organic gas (ROG) and nitrogen oxides react with sunlight. ROG comes from a variety of organic sources. For example, emissions from cows around Fresno supply almost as much ROG as does the refining and burning of fossil fuels in the same area. Of course, in most modern cities without a great deal of livestock, the bad actor is petroleum-based ROG, not the dairy herd. Ozone acts like an acid on the lungs, causing and aggravating asthma, but it doesn’t stop there. It also causes other respiratory ailments and impairs lung function, harms the immune system, aggravates heart disease and emphysema, and can cause fetal heart malfunctions. 4. Nitrogen dioxide. One of the charming bromides of Harry, the aging stuntman from the golden era of Hollywood, was an unwitting response to nitrogen dioxide, or NO2.“I don’t trust air I can’t see,” he would say, laughing and pounding his fist on the rusted metal patio table. It didn’t strike me as odd at the time, but he always wheezed as he laughed. Harry had a lot to trust in his hometown.The brownish tinge to Los Angeles’s skies in the 1960s, and the skies over many other metropolitan areas today, comes from NO2, a highly reactive organic gas that irritates the lungs and causes both bronchitis and pneumonia. It can also aggravate respiratory disease and lower resistance to respiratory infections. NO2 is one of many oxides of


Tamminen-02-Ch01 (9-26)

16

7/24/06

11:16 AM

Page 16

LIVES PER GALLON

nitrogen (NOx), all of which are a part of those ROGs that help form ozone. 5. Carbon monoxide.As unsavory as the first four air pollutants on this list are, carbon monoxide (CO) is especially insidious because it is colorless, odorless, and highly poisonous. In short, it is a sneak thief of breath. Having read this far, you will not be surprised to learn that the majority of CO in the air comes from vehicles burning petroleum fuels. A smaller amount comes from wood-burning stoves, industrial smokestacks, and even the use of natural gas in your kitchen oven. No matter the source, this sneak thief robs the blood of oxygen, which ultimately can cause asphyxiation and death.When inhaled by pregnant women, CO can threaten fetal growth and mental development of the child. 6. Lead. Although lead was eliminated from most gasoline in the United States starting in the 1970s, it continues to be used in aviation and other specialty fuels today. When it was ubiquitous in gasoline, lead from fuel exhaust polluted soils along roadways. Those polluted soils dry out and become airborne, so lead is still thereby readily inhaled.20 In addition, coal-burning power plants dump significant volumes of lead into the air each year. Lead pollution delivers one of the more Darwinian consequences of all the self-inflicted wounds wrought by human-made air pollution. In men, it reduces sperm count and creates abnormalities in what’s left. In women, it reduces fertility and can cause miscarriages. Children are especially vulnerable because they absorb lead more readily than do adults.As their brains and nervous systems develop, children may suffer significant learning disabilities and hyperactivity as a result of lead exposure. For children and adults, lead is a neurotoxin even at very low levels, affecting the circulatory, reproductive, nervous, and renal (kidney) systems. Furthermore, those low levels “bioaccumulate”


Tamminen-02-Ch01 (9-26)

7/24/06

11:16 AM

Page 17

Chapter 1. The Breath of Our Fathers

17

in bone and other tissues, meaning that part of each dose is stored in the body and is a poison that “keeps on giving.” Although these effects have been well understood for decades, oil companies continued to sell leaded gasoline around the world long after the United States phased it out.Venezuela got the lead out in late 2005, for instance, but it is still found in the gasoline of at least twenty other countries today.21

Smoking Guns: The Surprising Similarities of Tobacco and Oil

Of the constituents in secondhand petroleum smoke, lead is the only one that isn’t also found in tobacco smoke. The health damage caused by tobacco and by petroleum is also similar in many ways. Table 1.1 lists the most common and most toxic constituents found in both tobacco smoke and vehicle exhaust. Whether you inhale from a cigarette, breathe in secondhand tobacco smoke, or simply breathe the air in most parts of the industrialized world, you

TABLE 1.1. HUMAN HEALTH TOXINS FOUND IN BOTH TOBACCO SMOKE AND VEHICLE EXHAUST

Pollutant: Associated Health Effects

Benzene: Cancer; respiratory and reproductive toxicity
PAHs (polycyclic aromatic hydrocarbons): Cancer; immune system toxicity
1,3-Butadiene: Cancer
Formaldehyde and acrolein: Respiratory illness; cancers
Carbon monoxide (CO): Respiratory illness; cardiovascular toxicity; asphyxiation
Heavy metals: Cancer; neurotoxicity
Hexane: Neurotoxicity
Acids: Lung irritation and damage

Source: “How Do Tobacco Smoke and Car Exhaust Compare?” Energy Independence Now, 2003. Available online at www.EnergyIndependenceNow.org.



are inhaling benzene, polycyclic aromatic hydrocarbons, carbon monoxide, and a host of other toxins. No matter the source, inhaling these pollutants can cause cancer, respiratory illness, and damage to your heart, lungs, and reproductive system. Given these facts, it is not surprising that researchers have determined that living in a place with as much petroleum air pollution as Los Angeles is a lot like living with a smoker.22 Living in Madrid, Spain, is the equivalent of smoking half a pack of cigarettes a day, and a third of Spain’s entire population lives under what Spaniards call “the grey beret” of smog.23 The Jakarta, Indonesia, city government simultaneously addressed its problems with tobacco and petroleum smoke by passing one law that banned smoking in public and another that required a smog check of all vehicles. Those bold steps were taken after air quality standards were met on only twenty-eight days in 2005, making Jakarta the world’s third smoggiest city, behind Mexico City, Mexico, and Bangkok, Thailand.24

Tobacco and petroleum smoke are linked in still another insidious manner. Children born into households of smokers who are also exposed to air pollution from the combustion of motor fuels have a 7 percent lower birth weight and a 3 percent smaller head circumference than average, conditions that are linked to learning disabilities later in life.25 Evidence for these sobering statistics was found in the blood of umbilical cords, where petroleum-based toxins (including carcinogens such as polycyclic aromatic hydrocarbons, benzene, and toluene) were discovered. Let me repeat that: cancer-causing toxins from oil and tobacco were found in the blood of umbilical cords. “My advice to pregnant women is don’t let people in your household smoke,” warns Dr. Frederica P. Perera, senior author of the study that made this discovery.
Perera suggested reducing risk to children by eliminating their exposure to secondhand tobacco smoke because she knew that there is no effective means of eliminating exposure to secondhand petroleum smoke. That’s where the similarities between tobacco smoke and smog end. Although tobacco is addictive and quitting is extremely difficult, smokers can stop inhaling toxic tobacco smoke. We can also walk away from secondhand tobacco smoke in many instances. Petroleum pollution, however, is impossible to escape in most modern cities. We are involuntarily subjected to the toxins created by petroleum 100 percent of the time, along every step of the journey of life. What do we know with relative certainty about the effects of petroleum toxins on our health at various points in our lives?

Petroleum Threats from Womb to Tomb

In truth, the risks do not wait for us to come into the world, but rather begin their assault in the womb. Looking at thousands of Los Angeles–area expectant mothers, researchers at the University of California at Los Angeles found that those exposed for as little as a month to high levels of smog (mostly ozone and carbon monoxide) were three times more likely to have babies with physical deformities, including cleft lips and palates and defective heart valves, when compared with national averages for birth defects. Perhaps more alarming is that most of these women lived in areas that meet federal standards for ozone and carbon monoxide, standards that are supposed to protect public health.26

As they grow older, our children go to school, and many of them make the trip in a familiar American icon, the lumbering yellow school bus. As in most states, nearly 70 percent of the 24,372 school buses in California run on diesel fuel, making them some of the dirtiest vehicles on the road.27 Researchers found that a child riding inside a diesel school bus may be exposed to as much as four times the level of toxic diesel exhaust as someone riding in a car ahead of it. These exposures pose up to forty-six times the cancer risk considered significant under federal law.28

“We always thought air pollution had acute effects: If you breathed bad air one day, you felt crappy that night,” said University of Southern California professor James Gauderman, whose 2004 study of more than 5,000 California kids confirmed years of earlier research.29 “Long-term exposure, day-in, day-out, in an area like Los Angeles, really appears to have detrimental effects on all kinds of chronic conditions.” Gauderman also discovered that on days when ozone levels are rated “high,” kids who play extensively outdoors are up to four times more likely to have an asthma attack than those who spend more time watching TV inside, where ozone levels are lower.

“Kids like me with asthma . . . during ozone days I will almost always have an asthma attack if I go outside,” testified eight-year-old Kyle Damitz in December 2002 at a USEPA hearing on proposed diesel standards that would include reducing emissions from school buses. “On good days, I take 2 pills in the morning and 3 pills at bedtime. I do an IV treatment every 2 weeks. On a bad asthma day, I take 4 pills in the morning, more at lunch, and again, more at bedtime. I came here today to . . . ask you to help. . . . If you make our air cleaner, I will be able to live longer.”30 Cleaner standards for diesel-powered buses and trucks were ultimately adopted by the USEPA, over the objections of the American Petroleum Institute and numerous vehicle and engine manufacturers.

Looking at children and adults combined, a 1997 study estimated that smog pollution is responsible for more than 6 million asthma attacks in the United States each year, plus 159,000 emergency room visits and 53,000 hospitalizations. These illnesses are not precipitated by outdoor smog exposure alone.31 Does that mean that our children are not safe from petroleum pollution inside the home or at school? “The message is . . . about not planning . . . schools close to a major freeway,” declared Gauderman, discussing research conducted between 1993 and 2004 in Southern California on asthma incidence and proximity to busy roadways. The research found that children living within a quarter mile of a freeway had an 89 percent higher risk of developing asthma than those who lived a mile away. For adults, the results indicated higher incidence of heart disease and of deaths related to respiratory illness.32 Similar results have been found in Vancouver, Denver, and a host of other cities around the world.33

If our children aren’t safe at home or at school, can they escape harmful petroleum smoke when they are in the car with the windows rolled up and the air conditioning on? No; in fact, as the school bus research showed, confined areas can concentrate air pollution and increase exposure levels. Research also reveals that levels of benzene, toluene, formaldehyde, and methyl tertiary butyl ether (MTBE), as well as carbon monoxide, were up to ten times higher inside vehicles than at fixed outdoor monitoring stations.34 All these toxins have links to respiratory ailments and cancer. To be sure, not all in-cabin air pollution is caused by vehicle emissions; some fumes are emitted by petroleum-based plastics used inside the vehicle. Yet because the majority of the toxins come from the outdoor air around the vehicle, the simple act of opening a window to get “fresh air” does not improve the quality of the air you breathe. Unfortunately, there is little that drivers can do to prevent this “in-cabin” pollution: switching vents on or off has no perceivable effect, air conditioning does not act as a significant filter, and rolling a window up or down causes no significant change.35 Does it make a difference if someone does the driving for you? Not according to research from Imperial College London released in January 2006, which found that travel by taxi was more than twice as toxic as travel by private automobile.
Researchers believe that because taxis spend so much time stuck in traffic, the average passenger is exposed to far more pollution, especially particulates.36 “We would encourage people to walk [to reduce exposure,] and the exercise is good for you,” said Dr. Richard Russell of the British Lung Foundation, who noted that patients who traveled to his clinic by taxi showed higher levels of carbon monoxide and other petroleum air pollutants in their blood samples than those who arrived by other means of transport.37

As we grow from children to adults, we face more than the struggle for breath caused by asthma or other respiratory diseases. Our inescapable lifetime exposure to secondhand petroleum smoke may now add up to life-threatening or life-shortening illnesses. “Of particular concern is human epidemiological evidence linking diesel exhaust to an increased risk of lung cancer,” says the USEPA.38 Of all the threats from petroleum products and their use, diesel exhaust may be the most deadly. In a comprehensive 1998 report on the health effects of diesel exhaust, the Natural Resources Defense Council (NRDC) explained that diesel exhaust contains hundreds of constituent chemicals, dozens of which are recognized human toxicants, carcinogens, reproductive hazards, or endocrine disruptors. NRDC’s research also noted that diesel engines spew out a hundred times more sooty particles than do gasoline engines.39 These particles, as small as one-tenth of a micron, are so minuscule that they bypass the lung tissue and bloodstream, attacking the very cellular structure of the body and causing a wide variety of additional illnesses, from tissue inflammation to cancer.40 As many as 8,800 people die each year in Southern California’s South Coast air basin as a result of exposure to this type of particulate air pollution, about four times the number of people killed in automobile accidents each year in the same region.41 The news is no better in other parts of the United States.
Diesel exhaust alone is responsible for causing more than 125,000 cancer cases in the United States each year, and up to 100,000 Americans will die each year from causes attributable to completely preventable smog.42 Nor, as the evidence shows, is particulate matter the sole cause of cancer from petroleum air pollution.



“Across the state, cancer risk is driven greatly by benzene, and a large source of benzene emissions is automobiles,” said Paul Dubenetzky, assistant commissioner for the Indiana Department of Environmental Management’s Office of Air Quality, speaking in 2006 after the USEPA revealed that more than 25 percent of the cancer risk in northeast Indiana comes from benzene. The report also noted that the largest contributor of benzene to air pollution is vehicle exhaust, yet USEPA’s new proposal to slash those emissions will not be fully implemented until 2030.43 Secondhand petroleum smoke is inescapable when we breathe polluted air, but it insinuates its toxins into our bodies in one more unexpected manner. Plants are excellent “scavengers” of pollutants from the air and soil, especially the volatile toxic by-products from burning petroleum.44 Because of higher petroleum pollution in urban areas, this linkage becomes especially important for foods that are grown in the backyard gardens of smoggy cities. Some 40 percent of U.S. households grow some of their food in a home garden, and approximately one million customers visit local farmers markets each week, buying food grown close to sources of urban pollution. In many cases, we are thereby eating a variety of the 225 toxic substances found in petroleum products.

Lord, What Fools These Mortals Be: Petroleum Pollutes More Than Air

In A Midsummer Night’s Dream, Shakespeare’s sprite Puck finds human behavior quite foolish and hard to understand. When you consider the many things in our lives that are harmed by petroleum pollution, you may agree. “Already Independence Hall in Philadelphia, where the Declaration of Independence was signed, is experiencing damage [from air pollution],” explains a 1990 report by the Worldwatch Institute. “At the Gettysburg Civil War battlefield, every statue or tablet made of bronze, limestone, or sandstone is being slowly, but inexorably, eaten away. Both the Statue of Liberty, made of copper, and the Washington Monument, a granite obelisk faced with marble, are also threatened.”45

Natural monuments are also affected by air pollution. As a recall of the governor set California politics on fire in October 2003, literal firestorms raced through Southern California, essentially out of control until early rains mercifully gave weary firefighters a reprieve. Why were these fires so epic in scope and so far beyond human control compared with those of previous decades? In a word, ozone, from vehicle exhaust. Ozone damage to pine trees in the national forests of the San Bernardino Mountains of Southern California has been well understood since at least 1950.46 Plants use carbohydrates for growth, for maintenance, and for their immune systems. Ozone reduces a plant’s ability to produce carbohydrates and triggers its consumption of the carbohydrates it has stored to repair damage caused by that same ozone in the first place. Over a period of years, carbohydrate levels are gradually depleted to the point at which little is available for dealing with exceptional stresses such as drought and insect infestations. This ozone damage then accelerates needle drop in pine trees, triggering a kind of premature aging and causing trees to shed larger amounts of needles than normal. This sequence of events not only weakened the San Bernardino trees, it also contributed to higher fuel loads on the ground. Bark beetles finished off the vulnerable trees, turning them into standing matchsticks. With this fuel already on the ground, once the fires started, they were virtually impossible to put out. Lives were lost and hundreds of homes were incinerated as entire communities in the foothills and mountains turned to ashes.

If we mortals are fools, as Puck would have us believe, then perhaps the most inexplicable behavior of all is that we know the harms of petroleum pollution, yet we allow the problem to continue, in many regions worse than ever before. Why? One reason is that science has been slow to reach human health conclusions about smog exposure because some cancers and other diseases are slow to develop and therefore take many years to study for conclusive results. For example, recently published results showing that deaths related to petroleum air pollution may be as much as two times higher than previously thought were based on a study that began more than twenty years earlier, in 1982.47 Another reason for the lack of effective regulations is that our understanding of the health harms from air pollution is a moving target. “We [now] know there are serious health effects from low levels of air pollution,” says Aaron Cohen of the Health Effects Institute, a joint project of the USEPA and various polluting industries.48 Cohen was speaking of the steady drumbeat of new studies showing that even tiny amounts of some pollutants, previously assumed to be benign, can cause significant health harms, especially in children, the elderly, and those living in areas with disproportionately high rates of pollution. For example, we have not set stringent standards for manganese in gasoline because it was thought to be relatively harmless. Studies in recent years, however, have discovered that manganese in gasoline may affect the brain, lungs, and testes and can result in a severe degenerative condition similar to Parkinson’s disease.49 Once such problems are diagnosed, regulators struggle to catch up.

The last few residents of Rapa Nui must also have struggled vainly to catch up and trap the last rat or bird for food and to find a way to maintain their lives in what remained of a paradise lost. Their experience warns us that civilizations can falter when people fail to heed the evidence of their effect on the environment that sustains their existence. Those ill-fated islanders might have changed their fate by reallocating crucial resources, but as we have known for decades, the assault of petroleum on our health, at every age of our lives, is inescapable. Or is it? Where are we most at risk of inhaling or ingesting petroleum toxins? Is there any way to avoid the worst exposure? Is there any hope of escape?


Poisoned for Pennies: The Economics of Toxics and Precaution
Frank Ackerman
Paperback $35.00, ISBN: 9781597264013

Contents

Acknowledgments
Introduction
Chapter 1: Pricing the Priceless
Chapter 2: Was Environmental Protection Ever a Good Idea?
Chapter 3: The Unbearable Lightness of Regulatory Costs
Chapter 4: Precaution, Uncertainty, and Dioxin
Chapter 5: The Economics of Atrazine
Chapter 6: Ignoring the Benefits of Pesticides Regulation
Chapter 7: Mad Cows and Computer Models
Chapter 8: Costs of Preventable Childhood Illness
Chapter 9: Phasing Out a Problem Plastic
Chapter 10: The Costs of REACH
Chapter 11: Impacts of REACH on Developing Countries
Chapter 12: How Should the United States Respond to REACH?
Conclusion: Economics and Precautionary Policies
Appendix A: Outline of the Arrow-Hurwicz Analysis
Appendix B: The Fawcett Report on Atrazine Research
Appendix C: How Not to Analyze REACH: the Arthur D. Little Model
Appendix D: U.S. Impacts of REACH: Methodology and Data
Notes
Bibliography
Index


Chapter 1

Pricing the Priceless

How strictly should we regulate arsenic in drinking water? Or carbon dioxide in the atmosphere? Or pesticides in our food? Or oil drilling in scenic places? The list of environmental harms and potential regulatory remedies often appears to be endless. Is there an objective way to decide how to proceed? Cost-benefit analysis promises to provide the solution. The sad fact is that cost-benefit analysis is fundamentally unable to fulfill this promise.

Many approaches to setting environmental standards require some consideration of costs and benefits. Even technology-based regulation, maligned by cost-benefit enthusiasts as the worst form of regulatory excess, typically entails consideration of economic costs. Cost-benefit analysis differs, however, from other analytical approaches by demanding that the advantages and disadvantages of a regulatory policy be reduced, as far as possible, to numbers, and then further reduced to dollars and cents. In this feature of cost-benefit analysis lies its doom. Indeed, looking closely at the products of this pricing scheme makes it seem not only a little cold, but a little crazy as well.

Consider the following examples, which we are not making up. They are not the work of a lunatic fringe; on the contrary, they reflect the work products of some of the most influential and reputable of today’s cost-benefit practitioners. We are not sure whether to laugh or cry, but we find it impossible to treat these studies as serious contributions to a rational discussion.

Chapter 1 by Frank Ackerman and Lisa Heinzerling.



Several years ago, states were in the middle of their litigation against tobacco companies, seeking to recoup the medical expenditures they had incurred as a result of smoking. At that time, W. Kip Viscusi—for many years a professor of law and economics at Harvard—undertook research concluding that states, in fact, saved money as the result of smoking by their citizens. Why? Because they died early! They thus saved their states the trouble and expense of providing nursing home care and other services associated with an aging population. Viscusi didn’t stop there. So great, under Viscusi’s assumptions, were the financial benefits to the states of their citizens’ premature deaths that, he suggested, “cigarette smoking should be subsidized rather than taxed.”1 Amazingly, this cynical conclusion has not been swept into the dustbin where it belongs, but instead has been revived: the tobacco company Philip Morris commissioned the well-known consulting group Arthur D. Little to examine the financial benefits to the Czech Republic of smoking among Czech citizens. Arthur D. Little found that smoking was a financial boon for the government—partly because, again, it caused citizens to die earlier and thus reduced government expenditure on pensions, housing, and health care.2 This conclusion relies, so far as we can determine, on perfectly conventional cost-benefit analysis. There is more. In recent years, much has been learned about the special risks children face from pesticides in their food, contaminants in their drinking water, ozone in the air, and so on. Because cost-benefit analysis has become much more prominent at the same time, there is now a budding industry in valuing children’s health. Its products are often bizarre. Take the problem of lead poisoning in children. One of the most serious and disturbing effects of lead is the neurological damage it can cause in young children, including permanently lowered mental ability. 
Putting a dollar value on the (avoidable, environmentally caused) intellectual impairment of children is a daunting task, but economic analysts have not been daunted. Randall Lutter, who became a top official at the U.S. Food and Drug Administration in 2003, has been a long-time regulatory critic. In an earlier position at the AEI-Brookings Joint Center for Regulatory Studies, he argued that the way to value the damage lead causes in children is to look at how much parents of affected children spend on chelation therapy, a chemical treatment that is supposed to cause excretion of lead from the body. Parental spending on chelation supports an estimated valuation of only about $1,500 per IQ point lost due to lead poisoning. Previous economic analyses by the U.S. Environmental Protection Agency (EPA), based on the children’s loss of expected future earnings, have estimated the value to be much higher—up to $9,000 per IQ point. Based on his lower figure, Lutter claimed to have discovered that too much effort is going into controlling lead: “Hazard standards that protect children far more than their parents think is appropriate may make little sense. The agencies should consider relaxing their lead standards.”3 In fact, Lutter presented no evidence about what parents think, only about what they spend on one rare variety of private medical treatments (which, as it turns out, has not been proven medically effective for chronic, low-level lead poisoning). Why should environmental standards be based on what individuals are now spending on desperate personal efforts to overcome social problems?

For sheer analytical audacity, Lutter’s study faces some stiff competition from another study concerning kids—this one concerning the value, not of children’s health, but of their lives (more precisely, the researchers talk about the value of “statistical lives,” a concept addressed later in this chapter). In this second study, researchers examined mothers’ car-seat fastening practices.4 They calculated the difference between the time required to fasten the seats correctly and the time mothers actually spent fastening their children into their seats. Then they assigned a monetary value to this interval of time based on the mothers’ hourly wage rate (or, in the case of nonworking moms, based on a guess at the wages they might have earned).
When mothers saved time—and, by hypothesis, money—by fastening their children’s car seats incorrectly, they were, according to the researchers, implicitly placing a finite monetary value on the life-threatening risks to their children posed by car accidents. Building on this calculation, the researchers were able to answer the vexing question of how much a statistical child’s life is worth to its mother. (As the mother of a statistical child, she is naturally adept at complex calculations comparing the value of saving a few seconds versus the slightly increased risk to her child.) The answer parallels Lutter’s finding that we are valuing our children too highly: in car-seat-land, a child’s life is worth only $500,000.

The absurdity of these particular analyses, though striking, is not unique to them. Indeed, we will argue, cost-benefit analysis is so inherently flawed that if one scratches the apparently benign surface of any of its products, one finds the same kind of absurdity. But before launching into this critique, it will be useful first to establish exactly what cost-benefit analysis is, and why one might think it is a good idea.

Dollars and Discounting

Cost-benefit analysis tries to mimic a basic function of markets by setting an economic standard for measuring the success of the government’s projects and programs. That is, cost-benefit analysis seeks to perform, for public policy, a calculation that happens routinely in the private sector. In evaluating a proposed new initiative, how do we know if it is worth doing or not? The answer is much simpler in business than in government. Private businesses, striving to make money, only produce things that they believe someone is willing to pay for. That is, firms only produce things for which the benefits to consumers, measured by consumers’ willingness to pay for them, are expected to be greater than the costs of production. It is technologically possible to produce men’s business suits in brightly colored polka dots. Successful producers suspect that few people are willing to pay for such products, and usually stick to at most minor variations on suits in somber, traditional hues. If some firm did happen to produce a polka-dot business suit, no one would be forced to buy it; the producer would bear the entire loss resulting from the mistaken decision.

Government, in the view of many critics, is in constant danger of drifting toward producing polka-dot suits—and making people pay for them. Policies, regulations, and public spending do not face the test of the marketplace; there are no consumers who can withhold their dollars from the government until it produces the regulatory equivalent of navy blue and charcoal gray suits. There is no single quantitative objective for the public sector comparable to profit maximization for businesses. Even with the best of intentions, critics suggest, government programs can easily go astray for lack of an objective standard by which to judge whether or not they are meeting citizens’ needs.

Cost-benefit analysis sets out to do for government what the market does for business: add up the benefits of a public policy and compare them to the costs. The two sides of the ledger raise very different issues.



Estimating Costs

The first step in a cost-benefit analysis is to calculate the costs of a public policy. For example, the government may require a certain kind of pollution control equipment, which businesses must pay for. Even if a regulation only sets a ceiling on emissions, it results in costs that can be at least roughly estimated through research into available technologies and business strategies for compliance. The costs of protecting human health and the environment through the use of pollution control devices and other approaches are, by their very nature, measured in dollars. Thus, at least in theory, the cost side of cost-benefit analysis is relatively straightforward. (In practice, as we shall see, it is not quite that simple.) The consideration of the costs of environmental protection is not unique to cost-benefit analysis. Development of environmental regulations has almost always involved consideration of economic costs, with or without formal cost-benefit techniques. What is unique to cost-benefit analysis, and far more problematic, is the other side of the balance, the monetary valuation of the benefits of life, health, and nature itself.

Monetizing Benefits

Since there are no natural prices for a healthy environment, cost-benefit analysis requires the creation of artificial ones. This is the hardest part of the process. Economists create artificial prices for health and environmental benefits by studying what people would be willing to pay for them. One popular method, called “contingent valuation,” is essentially a form of opinion poll. Researchers ask a cross-section of the affected population how much they would be willing to pay to preserve or protect something that can’t be bought in a store. Many surveys of this sort have been done, producing prices for things that appear to be priceless. For example, the average American household is supposedly willing to pay $257 to prevent the extinction of bald eagles, $208 to protect humpback whales, and $80 to protect gray wolves.5 These numbers are quite large: since there are more than 100 million households in the country, the nation’s total willingness to pay for the preservation of bald eagles alone is ostensibly more than $25 billion.

An alternative method of attaching prices to unpriced things infers what people are willing to pay from observation of their behavior in other markets. To assign a dollar value to risks to human life, for example, economists usually calculate the extra wage—or “wage premium”—that is paid to workers who accept more risky jobs. Suppose that two jobs are comparable, except that one is more dangerous and better paid. If workers understand the risk and voluntarily accept the more dangerous job, then they are implicitly setting a price on risk by accepting the increased risk of death in exchange for increased wages. What does this indirect inference about wages say about the value of a life? A common estimate in cost-benefit analyses of the late 1990s was that avoiding a risk that would lead, on average, to one death is worth roughly $6.3 million.6 This number is of great importance in cost-benefit analyses because avoided deaths are the most thoroughly studied benefits of environmental regulations.
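The arithmetic behind both kinds of estimates is simple enough to make explicit. The sketch below, in Python, reproduces the aggregate bald-eagle figure from the survey numbers quoted above; the wage premium and risk increment in the second calculation are hypothetical round numbers chosen only to show how a $6.3 million value of a statistical life can fall out of a wage-risk comparison, not figures from any actual study.

```python
# Contingent valuation: aggregate national willingness to pay (WTP)
# is just the average household WTP times the number of households.
households = 100_000_000            # "more than 100 million households"
wtp_per_household = 257             # dollars per household, for bald eagles
national_wtp = households * wtp_per_household
print(f"National WTP for bald eagles: ${national_wtp:,}")
# -> $25,700,000,000 ("more than $25 billion")

# Wage-premium valuation of a statistical life:
# implied value = extra annual wage / extra annual risk of death.
# Hypothetical inputs: a $630/year premium for an added 1-in-10,000 risk.
wage_premium = 630                  # dollars per year (assumed)
added_risk = 1 / 10_000             # added annual probability of death (assumed)
value_of_statistical_life = wage_premium / added_risk
print(f"Implied value of a statistical life: ${value_of_statistical_life:,.0f}")
# -> $6,300,000
```

Multiplying a survey mean across the whole population, and dividing a wage premium by a tiny risk, is exactly the kind of move this chapter goes on to question; the code only makes the arithmetic transparent, not the inference.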

Discounting the Future

One more step requires explanation to complete this quick sketch of cost-benefit analysis. Costs and benefits of a policy frequently occur at different times. Often, costs are incurred today, or in the near future, to prevent harm in the more remote future. When the analysis spans a number of years, future costs and benefits are discounted, or treated as equivalent to smaller amounts of money in today’s dollars. Discounting is a procedure developed by economists to evaluate investments that produce future income. The case for discounting begins with the observation that $100, say, received today is worth more than $100 received next year, even in the absence of inflation. For one thing, you could put your money in the bank today and earn a little interest by next year. Suppose that your bank account earns 3 percent interest. In that case, if you received the $100 today rather than next year, and immediately deposited it in the bank, you would earn $3 in interest, giving you a total of $103 next year. Likewise, to get $100 next year you need to deposit only $97 today. So, at a 3 percent discount rate, economists would say that $100 next year has a present value of $97 in today’s dollars. For longer periods of time, the effect is magnified: at a 3 percent discount rate, $100 twenty years from now has a present value of only $55. The larger the discount rate, the smaller the present value: at a 5 percent discount rate, for example, $100 twenty years from now has a present value of only $38.

Cost-benefit analysis routinely uses the present value of future benefits. That is, it compares current costs, not to the actual dollar value of future benefits, but to the smaller amount you would have to put into a hypothetical savings account today to obtain those benefits in the future. This application of discounting is essential, and indeed commonplace, for many practical financial decisions. If offered a choice of investment opportunities with payoffs at different times in the future, you can (and should) discount the future payoffs to the present to compare them to each other. The important issue for environmental policy, as we shall see, is whether this logic also applies to outcomes far in the future, and to opportunities, such as long life and good health, that are not naturally stated in dollar terms.
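The present-value figures quoted above all follow from the standard discounting formula, PV = FV / (1 + r)^n. A minimal Python check of those numbers:

```python
def present_value(future_amount: float, rate: float, years: int) -> float:
    """Discount a payment received `years` from now back to today's dollars."""
    return future_amount / (1 + rate) ** years

# $100 received next year, at a 3 percent discount rate: about $97 today.
print(round(present_value(100, 0.03, 1)))    # 97
# $100 received twenty years from now, at 3 percent: about $55 today.
print(round(present_value(100, 0.03, 20)))   # 55
# The same $100 at a 5 percent rate: only about $38 today.
print(round(present_value(100, 0.05, 20)))   # 38
```

The twenty-year rows show why the choice of discount rate matters so much in environmental analysis: raising the rate from 3 to 5 percent shrinks the same future benefit by roughly a third.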

The Case for Cost-Benefit

Before describing the problems with cost-benefit analysis, it will be useful to set forth the arguments in favor of this type of analysis. Many different arguments for cost-benefit analysis have been offered over the years. Most of the arguments fall into one of two broad categories. First, there are economic assertions that better results can be achieved with cost-benefit analysis. Second, there are legal and political claims that a more objective and more open government process can emerge through this kind of analysis.

Better Results

Economics frequently focuses on increasing efficiency—on getting the most desirable results from the least resources. How do we know that greater regulatory efficiency is needed? For many economists, this is an article of faith: greater efficiency is always a top priority, in regulation or elsewhere. Cost-benefit analysis is thought by its supporters to further efficiency by ensuring that regulations are adopted only when benefits exceed costs, and by helping direct regulators’ attention to those problems for which regulatory intervention will yield the greatest net benefits.

But many advocates also raise a more specific argument, imbued with a greater sense of urgency. The government, it is said, often issues rules that are insanely expensive, out of all proportion to their benefits—a problem that could be solved by the use of cost-benefit analysis to screen proposed regulations. Thus, much of the case for cost-benefit analysis depends on the case against current regulation. Scarcely a congressional hearing on environmental policy occurs in which fantastic estimates of the costs of federal regulations do not figure prominently. Economists routinely cite such estimates as proof of the need for more economic analysis. Browse the Web sites of any of a variety of think tanks, and you will find numerous references to the extravagant costs of regulation. The estimates typically bandied about are astonishingly high: according to several widely circulated studies, we are often spending hundreds of millions, and sometimes billions, of dollars for every single human life, or even year of life, we save through regulation.7 One widely cited study claims that the cost of life-saving interventions reaches as high as $99 billion for every life-year saved.8 Numbers like these would suggest that current regulatory costs are not only chaotically variable but also unacceptably high. They have even been interpreted to mean that the existing regulatory system actually kills people by imposing some very costly lifesaving requirements while other, less expensive and more effective lifesaving possibilities remain untouched. Indeed, one study concluded that we could save as many as 60,000 more lives every year with no increase in costs, if we simply spent our money on the least rather than most expensive opportunities for saving lives.9 Relying on this research (of which he was a coauthor), John Graham, a top official in the U.S. Office of Management and Budget from 2001 to 2006 and a prominent proponent of cost-benefit analysis, has called the existing state of affairs “statistical murder.”10

From this perspective, cost-benefit analysis emerges as both a money-saver and a life-saver. By subjecting regulations to a cost-benefit test, we would not only stop spending hundreds of millions or billions of dollars to save a single life, we could also take that money and spend it on saving even more lives through different life-saving measures. That, at least, is the theory. We will argue in the following sections that there are good reasons to question both the theory and the facts it rests on. Nevertheless, the notion that the current system produces crazy, even deadly, rules, and that better economic analysis would avert this terrible result, remains one of the most persistent arguments offered on behalf of cost-benefit analysis.
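The reallocation argument behind the "statistical murder" charge is, at bottom, a piece of cost-effectiveness arithmetic: a fixed budget saves more lives when spent where the cost per life saved is lowest. A toy illustration, with numbers invented for the sketch rather than taken from the studies cited:

```python
# Hypothetical interventions with very different costs per life saved.
budget = 100_000_000                 # $100 million to allocate (assumed)
cost_per_life = {
    "expensive rule": 50_000_000,    # $50 million per life saved (assumed)
    "cheap intervention": 500_000,   # $500,000 per life saved (assumed)
}

for name, cost in cost_per_life.items():
    lives_saved = budget / cost
    print(f"{name}: {lives_saved:.0f} lives saved")
# The same money saves 2 lives under the expensive rule but 200 under the
# cheap one -- the arithmetic behind the claim that reallocating spending
# could save more lives at no extra cost.
```

As the chapter argues, the force of this arithmetic depends entirely on whether the underlying cost-per-life estimates are credible and comparable, which is precisely what the later sections dispute.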

Objectivity and Transparency

A second important set of arguments holds that cost-benefit analysis would produce a better regulatory process—more objective and more transparent, and thus more accountable to the public. The holy grail of administrative law is agency decision making based on objective standards. The idea is to prevent an agency either from just doing anything it wants or, more invidiously, from benefiting politically favored groups through its decisions. Cost-benefit analysis has been offered as a means of constraining agency discretion in this way. Another important goal said to be promoted by cost-benefit analysis is transparency of administrative procedures. Decisions about environmental protection are notoriously complex. They reflect the input of biologists, toxicologists, epidemiologists, economists, engineers, lawyers, and other experts whose work is complicated and arcane. The technical details of these decisions often conceal crucial judgments about how much scientific uncertainty is too much, which human populations should be protected from illness and even death, and how important the future is relative to the present. For the public to be part of the process of decision making about the environment, these judgments must be offered and debated in language accessible to people who are not biologists, toxicologists, or other kinds of experts. Many advocates of cost-benefit analysis believe that their methodology provides such a language. They also assert that cost-benefit analysis renders decision making transparent insofar as it requires decision makers to reveal all of the assumptions and uncertainties reflected in their decisions.

Fundamental Flaws

As we have seen, cost-benefit analysis involves the creation of artificial markets for things, such as good health, long life, and clean air, that are not bought and sold. It also involves the devaluation of future events through discounting.



So described, the mind-set of the cost-benefit analyst is likely to seem quite foreign. The translation of all good things into dollars and the devaluation of the future are inconsistent with the way many people view the world. Most of us believe that money doesn’t buy happiness. Most religions tell us that every human life is sacred; it is obviously illegal, as well as immoral, to buy and sell human lives. Most parents tell their children to eat their vegetables and do their homework, even though the rewards of these onerous activities lie far in the future. Monetizing human lives and discounting future benefits seem at odds with these common perspectives. The cost-benefit approach also is inconsistent with the way many of us make daily decisions. Imagine performing a new cost-benefit analysis to decide whether to get up and go to work every morning, whether to exercise or eat right on any given day, whether to wash the dishes or leave them in the sink, and so on. Inaction would win far too often—and an absurd amount of effort would be spent on analysis. Most people have long-run goals, commitments, and habits that make such daily balancing exercises either redundant or counterproductive. The same might be true of society as a whole as it undertakes individual steps in the pursuit of any goal, set for the long haul, that cannot be reached overnight, including, for example, the achievement of a clean environment. Moving beyond these intuitive responses, we offer in this section a detailed explanation of why cost-benefit analysis of environmental protection fails to live up to the hopes and claims of its advocates. There is no quick fix, because these failures are intrinsic to the methodology, appearing whenever it is applied to any complex environmental problem. In our view, cost-benefit analysis suffers from four fundamental flaws, addressed in the next four subsections:

1. The standard economic approaches to valuation are inaccurate and implausible.
2. The use of discounting improperly trivializes future harms and the irreversibility of some environmental problems.
3. The reliance on aggregate, monetized benefits excludes questions of fairness and morality.
4. The value-laden and complex cost-benefit process is neither objective nor transparent.


ip-ackerman2.000-000:Layout 1

4/18/08

9:37 AM

Page 11

Pricing the Priceless

1. Dollars without Sense

Recall that cost-benefit analysis requires the creation of artificial prices for all relevant health and environmental impacts. To weigh the benefits of regulation against the costs, we need to know the monetary value of preventing the extinction of species, preserving many different ecosystems, avoiding all manner of serious health impacts, and even saving human lives. Without such numbers, cost-benefit analysis cannot be conducted.

Artificial prices have been estimated for many, though by no means all, benefits of regulation. As discussed, preventing the extinction of bald eagles reportedly goes for somewhat more than $250 per household. Preventing intellectual impairment caused by childhood lead poisoning comes in at about $9,000 per lost IQ point in the standard view, or a mere $1,500 per point in Lutter’s alternative analysis. Saving a life is ostensibly worth $6.3 million.

This quantitative precision, achieved through a variety of indirect techniques for valuation, comes at the expense of accuracy and even common sense. Though problems arise in many areas of valuation, we will focus primarily on the efforts to attach a monetary value to human life, both because of its importance in cost-benefit analysis and because of its glaring contradictions.

There Are No “Statistical” People

What can it mean to say that saving one life is worth $6.3 million? Human life is the ultimate example of a value that is not a commodity, and does not have a price. You cannot buy the right to kill someone for $6.3 million, nor for any other price. Most systems of ethical and religious belief maintain that every life is sacred. If analysts calculated the value of life itself by asking people what it is worth to them (the most common method of valuation of other environmental benefits), the answer would be infinite, as “no finite amount of money could compensate a person for the loss of his life, simply because money is no good to him when he is dead.”11

The standard response is that a value such as $6.3 million is not actually a price on an individual’s life or death. Rather, it is a way of expressing the value of small risks of death; for example, it is one million times
the value of a one in a million risk. If people are willing to pay $6.30 to avoid a one in a million increase in the risk of death, then the “value of a statistical life” is $6.3 million.

Unfortunately, this explanation fails to resolve the dilemma. It is true that risk (or “statistical life”) and life itself are distinct concepts. In practice, however, analysts often ignore the distinction between valuing risk and valuing life.12 Many regulations reduce risk for a large number of people, and avoid actual death for a much smaller number. A complete cost-benefit analysis should, therefore, include valuation of both of these benefits. However, the standard practice is to calculate a value only for “statistical” life and to ignore life itself.

The confusion between the valuation of risk and the valuation of life itself is embedded in current regulatory practice in another way as well. The U.S. Office of Management and Budget, which reviews cost-benefit analyses prepared by federal agencies pursuant to Executive Order, instructs agencies to discount the benefits of life-saving regulations from the moment of avoided death, rather than from the time when the risk of death is reduced.13 This approach to discounting is plainly inconsistent with the claim that cost-benefit analysis seeks to evaluate risk. When a life-threatening disease, such as cancer, has a long latency period, many years may pass between the time when a risk is imposed and the time of death. If monetary valuations of statistical life represented risk, and not life, then the value of statistical life would be discounted from the date of a change in risk (typically, when a new regulation is enforced) rather than from the much later date of avoided actual death.14

In acknowledging the monetary value of reducing risk, economic analysts have contributed to our growing awareness that life-threatening risk itself—and not just the end result of such risk, death—is an injury.
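The arithmetic just described (scaling a small willingness to pay up to a “statistical life,” and the difference between discounting from the date of risk reduction versus the date of avoided death) can be sketched in a few lines of Python. The $6.30 and one-in-a-million figures come from the text; the 5 percent rate and 20-year cancer latency are hypothetical round numbers used only for illustration.

```python
# Sketch of the "value of a statistical life" (VSL) arithmetic, and of why
# the choice of discounting start date matters for long-latency diseases.
# The 5% rate and 20-year latency below are illustrative assumptions.

def value_of_statistical_life(willingness_to_pay, risk_reduction):
    """Scale a small willingness to pay up to a per-life value."""
    return willingness_to_pay / risk_reduction

def present_value(amount, rate, years):
    """Standard exponential discounting back to the present."""
    return amount / (1 + rate) ** years

vsl = value_of_statistical_life(6.30, 1e-6)
print(f"VSL: ${vsl:,.0f}")  # VSL: $6,300,000

# A regulation reduces risk today, but for a disease with a 20-year
# latency the avoided death occurs 20 years from now.
rate, latency = 0.05, 20
pv_from_risk = present_value(vsl, rate, 0)         # discount from risk reduction
pv_from_death = present_value(vsl, rate, latency)  # discount from avoided death
print(f"Valued from risk reduction: ${pv_from_risk:,.0f}")
print(f"Valued from avoided death:  ${pv_from_death:,.0f}")
```

Under these assumptions, discounting from the date of avoided death rather than the date of risk reduction shrinks the same benefit to roughly a third of its value, which is why the choice of start date matters so much.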
But they have blurred the line between risks and actual deaths by calculating the value of reduced risk while pretending that they have produced a valuation of life itself. The paradox of monetizing the infinite or immeasurable value of human life has not been resolved; it has only been glossed over.

People Care about Other People

Another large problem with the standard approach to valuation of life is that it asks individuals (either directly through surveys, or indirectly
through observing wage and job choices) only about their attitudes toward risks to themselves. A recurring theme in literature suggests that our deepest and noblest sentiments involve valuing someone else’s life more highly than our own: think of parents’ devotion to their children, soldiers’ commitment to those whom they are protecting, lovers’ concern for each other. Most spiritual beliefs call on us to value the lives of others—not only those closest to us, but also those whom we have never met.

This point echoes a procedure that has become familiar in other areas of environmental valuation. Economists often ask about existence values: how much is the existence of a wilderness area or an endangered species worth to you, even if you will never personally experience it? If this question makes sense for bald eagles and national parks, it must be at least as important when applied to safe drinking water and working conditions for people we don’t know. What is the existence value of a person you will never meet? How much is it worth to you to prevent a death far away? The answer cannot be deduced solely from your attitudes toward risks to yourself. We are not aware of any attempts to quantify the existence value of the life of a stranger, let alone a relative or a friend, but we are sure that most belief systems affirm that this value is substantial (assuming, of course, that the value of life is a number in the first place).

Voting Is Different from Buying

Cost-benefit analysis, which relies on estimates of individuals’ preferences as consumers, also fails to address the collective choice presented to society by most public health and environmental problems. Valuation of environmental benefits is based on individuals’ private decisions as consumers or workers, not on their public values as citizens. However, policies that protect the environment are often public goods, and are not available for purchase in individual portions.

In a classic example of this distinction, the philosopher Mark Sagoff found that his students, in their role as citizens, opposed commercial ski development in a nearby wilderness area, but, in their role as consumers, would plan to go skiing there if the development was built.15 There is no contradiction between these two views: as individual consumers, the students would have no way to express their collective preference for wilderness preservation.
Their individual willingness to pay for skiing would send a misleading signal about their views as citizens. It is often impossible to arrive at a meaningful social valuation by adding up the willingness to pay expressed by individuals. What could it mean to ask how much you personally are willing to pay to clean up a major oil spill? If no one else contributes, the cleanup won’t happen regardless of your decision. As the Nobel Prize–winning economist Amartya Sen has pointed out, if your willingness to pay for a large-scale public initiative is independent of what others are paying, then you probably have not understood the nature of the problem.16 Instead, a collective decision about collective resources is required.

In a similar vein, the philosopher Henry Richardson argued that reliance on the cost-benefit standard forecloses the process of democratic deliberation that is necessary for intelligent decision making. In his view, attempts to make decisions based on monetary valuation of benefits freeze preferences in advance, leaving no room for the changes in response to new information, rethinking of the issues, and negotiated compromises that lie at the heart of the deliberative process.17

Cost-benefit analysis turns public citizens into selfish consumers, and interconnected communities into atomized individuals. In this way, it distorts the question it sets out to answer: how much do we, as a society, value health and the environment?

Numbers Don’t Tell Us Everything

A few simple examples illustrate that numerically equal risks are not always equally deserving of regulatory response. The death rate is roughly the same, somewhat less than one in a million, for a day of downhill skiing, for a day of working in the construction industry, or for drinking about twenty liters of water (about the amount that a typical adult drinks in two weeks)18 containing fifty parts per billion of arsenic, the regulatory limit that was in effect until 2001.19

This does not mean that society’s responsibility to reduce risks is the same in each case. Most people view risks imposed by others, without an individual’s consent, as more worthy of government intervention than risks that an individual knowingly accepts. On that basis, the highest priority among our three examples is to reduce drinking water contamination, a hazard
to which no one has consented. The acceptance of a risky occupation such as construction is at best quasi-voluntary—it involves somewhat more individual discretion than the “choice” of public drinking water supplies, but many people go to work under great economic pressure, and with little information about occupational hazards. In contrast, the choice of risky recreational pursuits such as skiing is entirely discretionary; obviously no one is forced to ski. Safety regulation in construction work is thus more urgent than regulation of skiing, despite the equality of numerical risk.

In short, even for ultimate values such as life and death, the social context is decisive in our evaluation of risks. Cost-benefit analysis assumes the existence of generic, acontextual risk, and thereby ignores the contextual information that determines how many people, in practice, think about real risks to real people.

Artificial Prices Are Expensive

Finally, the economic valuation called for by cost-benefit analysis is fundamentally flawed because it demands an enormous volume of consistently updated information, which is beyond the practical capacity of our society to generate.

All attempts at valuation of the environment begin with a problem: the goal is to assign monetary prices to things that have no prices, because they are not for sale. One of the great strengths of the market is that it provides so much information about real prices. For any commodity that is actually bought and sold, prices are communicated automatically, almost costlessly, and with constant updates as needed. To create artificial prices for environmental values, economists have to find some way to mimic the operation of the market. Unfortunately, the process is far from automatic, it is certainly not costless, and it has to be repeated every time an updated price is needed.

As a result, there is constant pressure to use outdated or inappropriate valuations. Indeed, there are sound economic reasons for doing so: no one can afford constant updates, and significant savings can be achieved by using valuations created for other cases. In 2000, in EPA’s cost-benefit analysis of standards for arsenic in drinking water, a valuation estimated for a case of chronic bronchitis, from a study performed more than ten years earlier, was used to represent the value of a nonfatal case of bladder cancer.

This is not, we hope and believe, because anyone thinks that bronchitis and bladder cancer are the same disease. The reason is more mundane: no one had performed an analysis of the cost of bladder cancer, and even the extensive analysis of arsenic regulations did not include enough time and money to do so. Therefore, the investigators used an estimated value for a very different disease. The only explanation offered for this procedure was that it had been done before, and the investigators thought nothing better was available.

Use of the bronchitis valuation to represent bladder cancer can charitably be described as grasping at straws. Lacking the time and money to fill in the blank carefully, the economists simply picked a number. This is not remotely close to the level of rigor that is seen throughout the natural science, engineering, and public health portions of EPA’s arsenic analysis. Yet it will happen again, for exactly the same reason. It is not a failure of will or intellect, but rather the inescapable limitations of time and budget, that lead to reliance on dated, inappropriate, and incomplete information to fill in the gaps on the benefit side of a cost-benefit analysis.

Summing Up

There is, in short, a host of problems with the process of valuation. On a philosophical level, human life may belong in the category of things that are too valuable to buy and sell. Most ethical and religious beliefs place the protection of human life in the same category as love, family, religion, democracy, and other ultimate values, which are not and cannot be priced. It is a biased and misleading premise to assume that individuals’ willingness to pay to avoid certain risks can be aggregated to arrive at a figure for what society should pay to protect human life. Risk of death is not the same as death itself, and not all risks can reasonably be compared one to the other. Moreover, the value to society of protecting human life cannot be arrived at simply by toting up individual consumer preferences. The same kinds of problems affect other valuation issues raised by cost-benefit analysis, such as estimating the value of clean water, biodiversity, or entire ecosystems.

The upshot is that cost-benefit analysis is fundamentally incapable of delivering on its promise of more economically efficient decisions about protecting human life, health, and the environment. Absent a credible monetary metric for calculating the benefits of regulation, cost-benefit analysis is inherently unreliable.

2. Trivializing the Future

One of the great triumphs of environmental law is its focus on the future: it seeks to avert harms to people and to natural resources in the future, and not only within this generation, but within future generations as well. Indeed, one of the primary objectives of the National Environmental Policy Act, which has been called our basic charter of environmental protection, is to nudge the nation into “fulfill[ing] the responsibilities of each generation as trustee of the environment for succeeding generations.”20 Protection of endangered species and ecosystems, reduction of pollution from persistent chemicals such as dioxin and DDT, prevention of long-latency diseases such as cancer, protection of the unborn against the health hazards from exposure to toxins in the womb—all of these protections are afforded by environmental law, and all of them look to the future as well as to the present. Environmental law seeks, moreover, to avoid the unpleasant surprises that come with discontinuities and irreversibility—the kinds of events that outstrip our powers of quantitative prediction. Here, too, environmental law tries to protect the future in addition to the present.

Cost-benefit analysis systematically downgrades the importance of the future in two ways: through the technique of discounting, and through predictive methodologies that take inadequate account of the possibility of catastrophic and irreversible events. The most common, and commonsense, argument in favor of discounting future human lives saved, illnesses averted, and ecological disasters prevented, is that it is better to suffer a harm later rather than sooner. What’s wrong with this argument? A lot, as it turns out.

Do Future Generations Count?

The first problem with the later-is-better argument for discounting is that it assumes that one person is deciding between dying or falling ill now, or dying or falling ill later. In that case, virtually everyone would prefer later. But many environmental programs protect the far future, beyond the lifetime of today’s decision makers. The time periods involved span many decades for a wide range of programs, and even many centuries, in the case of climate change, radioactive waste, and other persistent toxins. With time spans this long, discounting at any positive rate will make even global
catastrophes seem trivial. At a discount rate of 5 percent, for example, the death of a billion people 500 years from now becomes less serious than the death of one person today.

The choice implicit in discounting is between preventing harms to the current generation and preventing similar harms to future generations. Seen in this way, discounting looks like a fancy justification for foisting our problems off onto the people who come after us.

Does Haste Prevent Waste?

The justification of discounting often assumes that environmental problems won’t get any worse if we wait to address them. In the market paradigm, buying environmental protection is just like buying any other commodity. You can buy a new computer now or later—and if you don’t need it this year, you should probably wait. The technology will undoubtedly keep improving, so next year’s models will do more yet cost less. An exactly parallel argument has been made about climate change (and other environmental problems) by some economists: if we wait for further technological progress, we will get more for our climate change mitigation dollars in the future.

If environmental protection was mass-produced by the computer industry, and if environmental problems would agree to stand still indefinitely and wait for us to respond, this might be a reasonable approach. In the real world, however, it is a ludicrous and dangerous strategy. Too many years of delay may mean that the polar ice caps melt, the spent uranium leaks out of the containment ponds, the hazardous waste seeps into groundwater and basements and backyards—at which point we can’t put the genie back in the bottle at any reasonable cost (or perhaps not at all).

Environmentalists often talk of potential “crises,” of threats that problems will become suddenly and irreversibly worse. In response to such threats, environmentalists and some governments advocate the so-called precautionary principle, which calls upon regulators to err on the side of caution and protection when risks are uncertain. Cost-benefit analysts, for the most part, do not assume the possibility of crisis. Their worldview assumes stable problems, with control costs that are stable or declining over time, and thus finds precautionary investment in environmental protection to be a needless expense. Discounting is part of this non-crisis perspective. By implying that the present cost of future environmental harms
declines, lockstep, with every year that we look ahead, discounting ignores the possibility of catastrophic and irreversible harms. For this very reason, some prominent economists have rejected the discounting of intangibles. As William Baumol wrote in an important early article on discounting the benefits of public projects:

There are important externalities and investments of the public goods variety which cry for special attention. Irreversibilities constitute a prime example. If we poison our soil so that never again will it be the same, if we destroy the Grand Canyon and turn it into a hydroelectric plant, we give up assets which like Goldsmith’s bold peasantry, “their country’s pride, when once destroy’d can never be supplied.” All the wealth and resources of future generations will not suffice to restore them.21

Begging the Question

Extensive discounting of future environmental problems lies at the heart of many recent studies of regulatory costs and benefits that charge “statistical murder.” When the costs and benefits of environmental protection are compared to those of safety rules (like requiring fire extinguishers for airplanes) or medical procedures (like vaccinating children against disease), environmental protection almost always comes out the loser. Why is this so?22

These studies all discount future environmental benefits by at least 5 percent per year. This has little effect on the evaluation of programs, like auto safety rules requiring seat belts and fire safety rules requiring smoke alarms, that could start saving lives right away. However, for environmental programs like hazardous waste cleanups and control of persistent toxins, that save lives in the future, discounting matters a great deal—especially because, as explained above, the benefits are assumed to occur in the future when deaths are avoided, rather than in the near term when risks are reduced.

By using discounting, analysts assume the answer to the question they purport to be addressing, namely, which programs are most worthwhile. The researchers begin with premises that guarantee that programs designed for the long haul, such as environmental protection, are not as important as programs that look to the shorter term. When repeated without discounting (or with benefits assumed to occur when risks are
reduced), these studies support many more environmental programs, and the cry of “statistical murder” rings hollow.

Citizens and Consumers—Reprise

The issue of discounting illustrates once again the failure of cost-benefit analysis to take into account the difference between citizens and consumers. Many people advocate discounting on the ground that it reflects people’s preferences, as expressed in market decisions concerning risk. But again, this omits the possibility that people will have different preferences when they take on a different role. The future seems to matter much more to American citizens than to American consumers, even though they are of course the same people.

For example, Americans are notoriously bad at saving money on their own, apparently expressing a disinterest in the future. But Social Security is arguably the most popular entitlement program in the United States. The tension between Americans’ personal saving habits and their enthusiasm for Social Security implies a sharp divergence between the temporal preferences of people as consumers and as citizens. Thus private preferences for current over future consumption should not be used to subvert public judgments that future harms are as important as immediate ones.
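The discounting arithmetic cited earlier in this section (a billion deaths 500 years from now, discounted at 5 percent per year, weighing less than one death today) is easy to verify with the standard present-value formula; a minimal check in Python:

```python
# Verifying the discounting claim from this section: at a 5% annual
# discount rate, a harm 500 years away shrinks by a factor of 1.05**500,
# roughly 4e10.

def present_value(amount, rate, years):
    """Standard exponential discounting back to the present."""
    return amount / (1 + rate) ** years

future_deaths = 1_000_000_000  # a billion deaths, 500 years from now
pv = present_value(future_deaths, 0.05, 500)
print(f"{pv:.3f}")  # 0.025 -- less than the death of one person today
```

The same formula at a 0 percent rate would leave the billion deaths at full weight, which is why the choice of rate, not the underlying science, drives the conclusion.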

3. Exacerbating Inequality

The third fundamental defect of cost-benefit analysis is that it tends to ignore, and therefore to reinforce, patterns of economic and social inequality. Cost-benefit analysis consists of adding up all the costs of a policy, adding up all the benefits, and comparing the totals. Implicit in this innocuous-sounding procedure is the controversial assumption that it doesn’t matter who gets the benefits and who pays the costs. Yet in our society, concerns about equity frequently do and should enter into debates over public policy. There is an important difference between spending state tax revenues to improve the parks in rich communities, and spending the same revenues to clean up pollution in poor communities. The value of these two initiatives, measured using cost-benefit analysis, might be the same in both cases, but this does not mean that the two policies are equally urgent or desirable.

The problem of equity runs even deeper. Benefits are typically measured by willingness to pay for environmental improvement, and the rich are able and willing to pay for more than the poor. Imagine a cost-benefit analysis of siting an undesirable facility, such as a landfill or incinerator. Wealthy communities are willing to pay more for the benefit of not having the facility in their backyards; thus the net benefits to society as a whole will be maximized by putting the facility in a low-income area. (Note that wealthy communities do not actually have to pay for the benefit of avoiding the facility; the analysis depends only on the fact that they are willing to pay.)

This kind of logic was made (in)famous in a 1991 memo circulated by Lawrence Summers (who later served as Secretary of the Treasury and as President of Harvard University), when he was the chief economist at the World Bank. Discussing the migration of “dirty industries” to developing countries, Summers’ memo explained:

The measurements of the costs of health impairing pollution depend . . . on the foregone earnings from increased morbidity and mortality. From this point of view a given amount of health impairing pollution should be done in the country with the lowest cost, which will be the country with the lowest wages. I think the economic logic behind dumping a load of toxic waste in the lowest wage country is impeccable and we should face up to that.23

After this memo became public, Brazil’s then-Secretary of the Environment José Lutzenberger wrote to Summers:

Your reasoning is perfectly logical but totally insane. . . . Your thoughts [provide] a concrete example of the unbelievable alienation, reductionist thinking, social ruthlessness and the arrogant ignorance of many conventional ‘economists’ concerning the nature of the world we live in.24

If decisions are based strictly on cost-benefit analysis and willingness to pay, most environmental burdens will end up being imposed on the countries, communities, and individuals with the least resources. This theoretical pattern bears an uncomfortably close resemblance to reality. Cost-benefit methods should not be blamed for existing patterns of environmental injustice; we suspect that pollution is typically dumped on the poor without waiting for formal analysis. Still, cost-benefit analysis
rationalizes and reinforces the problem, allowing environmental burdens to flow downhill along the income gradients of an unequal world. It is hard to see this as part of an economically optimal or politically objective method of decision making.

In short, equity is an important criterion for evaluation of public policy, but it does not fit into the cost-benefit framework. The same is true of questions of rights and morality, principles that are not reducible to monetary terms. Calculations that are acceptable, even common sense, for financial matters can prove absurd or objectionable when applied to moral issues, as shown by the following example.

A financial investment with benefits worth five times its costs would seem like an obviously attractive bargain. Compare this to the estimates that front airbags on the passenger side of automobiles may cause one death, usually of a child, for every five lives saved. If we really believed that lives—even statistical lives—were worth $6 million, or any other finite dollar amount, endorsing the airbags should be no more complicated than accepting the financial investment. However, many people do find the airbag trade-off troubling or unacceptable, implying that a different, nonquantitative value of a life is at stake here. If a public policy brought some people five dollars of benefits for every one dollar it cost to others, the winners could in theory compensate the losers. No such compensation is possible if winning and losing are measured in deaths rather than dollars.25

In comparing the deaths of adults prevented by airbags with the deaths of children caused by airbags, or in exploring countless other harms that might be mitigated through regulation, the real debate is not between rival cost-benefit analyses. Rather, it is between environmental advocates who frame the issue as a matter of rights and ethics, and others who see it as an acceptable area for economic calculation.
That debate is inescapable, and is logically prior to the details of evaluating costs and benefits.

4. Less Objectivity and Transparency

A fourth fundamental flaw of cost-benefit analysis is that it is unable to deliver on the promise of more objective and more transparent decision
making. In fact, in most cases, the use of cost-benefit analysis is likely to deliver less objectivity and less transparency.

For the reasons we have discussed, there is nothing objective about the basic premises of cost-benefit analysis. Treating individuals solely as consumers, rather than as citizens with a sense of moral responsibility to the larger society, represents a distinct and highly contestable worldview. Likewise, the use of discounting reflects judgments about the nature of environmental risks and citizens’ responsibilities toward future generations which are, at a minimum, debatable. Because value-laden premises permeate cost-benefit analysis, the claim that cost-benefit analysis offers an “objective” way to make government decisions is simply bogus.

Furthermore, as we have seen, cost-benefit analysis relies on a byzantine array of approximations, simplifications, and counterfactual hypotheses. Thus, the actual use of cost-benefit analysis inevitably involves countless judgment calls. People with strong, and clashing, partisan positions will naturally advocate that discretion in the application of this methodology be exercised in favor of their positions, further undermining the claim that cost-benefit analysis is objective.

Perhaps the best way to illustrate how little economic analysis has to contribute, objectively, to the fundamental question of how clean and safe we want our environment to be is to refer again to the controversy over cost-benefit analysis of EPA’s regulation of arsenic in drinking water. As Cass Sunstein has argued, the available information on the benefits of arsenic reduction supports estimates of net benefits from regulation ranging from less than zero, up to $560 million or more.
The number of deaths avoided annually by regulation is, according to Sunstein, between 0 and 112.26 A procedure that allows such an enormous range of different evaluations of a single rule is certainly not the objective, transparent decision rule that its advocates have advertised. These uncertainties arise both from the limited knowledge of the epidemiology and toxicology of exposure to arsenic, and from the controversial series of assumptions required for the valuation and discounting of costs and (particularly) benefits. As Sunstein explained, a number of different positions, including most of those heard in the controversy over arsenic regulation, could be supported by one or another reading of the evidence.27
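To see how such a wide range can arise, the sketch below recomputes a net-benefit figure under two sets of contestable assumptions. This is not EPA’s or Sunstein’s actual model: the 0-to-112 range for deaths avoided comes from the text, while the value of life, discount rate, latency, and compliance cost are invented round numbers, chosen only to show how far the answer can swing.

```python
# Hypothetical sensitivity sketch: how contested assumptions (deaths
# avoided, value of a statistical life, discount rate, latency) swing a
# net-benefit estimate. All figures other than the 0-112 deaths-avoided
# range are invented for illustration.

def net_benefit(deaths_avoided, vsl, rate, latency_years, annual_cost):
    """Monetized benefit of avoided deaths, discounted, minus cost."""
    benefit = deaths_avoided * vsl / (1 + rate) ** latency_years
    return benefit - annual_cost

COST = 200e6  # hypothetical annual compliance cost

# Favorable assumptions: maximum deaths avoided, no discounting.
optimistic = net_benefit(deaths_avoided=112, vsl=6.3e6,
                         rate=0.0, latency_years=0, annual_cost=COST)
# Unfavorable assumptions: no deaths avoided, heavy discounting.
pessimistic = net_benefit(deaths_avoided=0, vsl=6.3e6,
                          rate=0.07, latency_years=30, annual_cost=COST)

print(f"Optimistic:  {optimistic / 1e6:+,.0f} million dollars")   # positive
print(f"Pessimistic: {pessimistic / 1e6:+,.0f} million dollars")  # negative
```

Nothing in the formula itself decides between the two answers; everything turns on which assumptions the analyst feeds in, which is the point being made about objectivity.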

Some analysts might respond that this enormous range of outcomes is not possible if the proper economic assumptions are used; if, for example, human lives are valued at $6 million apiece and discounted at a 5 percent yearly rate (or, depending on the analyst, other favorite numbers). But these assumptions beg fundamental questions about ethics and equity, and one cannot decide whether to embrace them without thinking through the whole range of moral issues they raise. Yet once one has thought through these issues, there is no need then to collapse the complex moral inquiry into a series of numbers. Pricing the priceless merely translates our inquiry into a different, and foreign, language, one with a painfully impoverished vocabulary.

For many of the same reasons, cost-benefit analysis also generally fails to achieve the goal of transparency. Cost-benefit analysis is a complex, resource-intensive, and expert-driven process. It requires a great deal of time and effort to attempt to unpack even the simplest cost-benefit analysis. Few community groups, for example, have access to the kind of scientific and technical expertise that would allow them to evaluate whether, intentionally or unintentionally, the authors of a cost-benefit analysis have unfairly slighted the interests of the community or some of its members. Few members of the public can meaningfully participate in the debates about the use of particular regression analyses or discount rates that are central to the cost-benefit method.

The translation of lives, health, and nature into dollars also renders decision making about the underlying social values less rather than more transparent. As we have discussed, all of the various steps required to reduce a human life to a dollar value are open to debate and subject to uncertainty. However, the specific dollar values kicked out by cost-benefit analysis tend to obscure these underlying issues rather than encourage full public debate about them.

Practical Problems

The previous section showed that there are deep, inherent problems with the theory of cost-benefit analysis. In practice, these problems only get worse; leading examples of cost-benefit analysis fall far short of the theoretical ideal. The continuing existence of these practical problems further undercuts the utility and wisdom of using cost-benefit analysis to evaluate environmental policy.

The Limits of Quantification

Cost-benefit studies of regulations focus on quantified benefits of the proposed action and generally ignore other, nonquantified health and environmental benefits. This raises a serious problem because many benefits of environmental programs, including the prevention of many nonfatal diseases and harms to the ecosystem, either have not been quantified or are not quantifiable at this time. Indeed, for many environmental regulations, the only benefit that can be quantified is the prevention of cancer deaths. On the other hand, one can virtually always come up with some number for the costs of environmental regulations. Thus, in practice, cost-benefit analysis tends to skew decision making against protecting public health and the environment.

For example, regulation of workers’ exposure to formaldehyde is often presented as the extreme of inefficiency, supposedly costing $72 billion per life saved. This figure is based on the finding that the regulation prevents cancers, which occur only in small numbers but which have been thoroughly evaluated in numerical terms. But the question is not just one of preventing cancer. Formaldehyde regulation also prevents many painful but nonfatal illnesses that are excluded from the $72 billion figure. If described solely as a means of reducing cancer, the regulation would indeed be very expensive. But if described as a means of reducing cancer and other diseases, the regulation makes a good deal of sense. Workplace regulation of formaldehyde is not a bad answer, but it does happen to be an answer to a different question.

The formaldehyde case is by no means unique: often the only regulatory benefit that can be quantified is the prevention of cancer. Yet cancer has a latency period of between five and forty years. When discounted at 5 percent, a cancer death forty years from now has a “present value” of only one-seventh of a death today.
Thus, one of the benefits that can most often be quantified—allowing it to be folded into cost-benefit analysis— is also one that is heavily discounted, making the benefits of preventive regulation seem trivial.
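The discounting arithmetic behind that "one-seventh" claim is simple enough to check directly. A minimal sketch, using the $6 million life value, 5 percent rate, and forty-year latency already discussed in this chapter (all illustrative figures, not recommendations):

```python
# Present value of a statistical life "saved" today versus after a
# cancer latency period, at a constant annual discount rate.
# The figures ($6 million, 5 percent, 40 years) are the illustrative
# numbers used in the surrounding text.

def present_value(amount: float, rate: float, years: int) -> float:
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1 + rate) ** years

VALUE_OF_LIFE = 6_000_000  # dollars per statistical life
RATE = 0.05                # 5 percent per year
LATENCY = 40               # years until a cancer death occurs

pv = present_value(VALUE_OF_LIFE, RATE, LATENCY)
print(f"Present value of a death 40 years out: ${pv:,.0f}")
print(f"Ratio to a death today: 1/{VALUE_OF_LIFE / pv:.1f}")
```

The ratio comes out near 7, which is where the "one-seventh" figure comes from. At a 3 percent rate the same death would be discounted by only about a factor of 3.3, which illustrates how sensitive the result is to the analyst's choice of rate.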

Ignoring What Cannot Be Counted

Many advocates of cost-benefit analysis concede that the decision-making process must make some room for nonquantitative considerations. Some environmental benefits have never been subjected to rigorous economic evaluation. Other important considerations in environmental protection, such as the fairness of the distribution of environmental risks, cannot be quantified and priced. In practice, however, this kind of judgment is often forgotten, or even denigrated, once all the numbers have been crunched. No matter how many times the EPA, for example, says that one of its rules will produce many benefits, such as the prevention of illness or the protection of ecosystems, that cannot be quantified, the nonquantitative aspects of its analyses are almost invariably ignored in public discussions of its policies. When, for example, EPA proposed strengthening the standard for arsenic in drinking water, it cited many human illnesses that would be prevented by the new standard but that could not be expressed in numerical terms. Subsequent public discussion of EPA’s cost-benefit analysis of this standard, however, inevitably referred only to the numerical analysis and forgot about the cases of avoided illness that could not be quantified.

Overstated Costs

There is also a tendency, as a matter of practice, to overestimate the costs of regulations in advance of their implementation. This happens in part because regulations often encourage new technologies and more efficient ways of doing business; these innovations reduce the cost of compliance. It is also important to keep in mind, when reviewing cost estimates, that they are usually provided by the regulated industry itself, which has an obvious incentive to offer high estimates of costs as a way of warding off new regulatory requirements. One study found that costs estimated in advance of regulation were more than twice actual costs in eleven out of twelve cases.28 Another study found that advance cost estimates were more than 25 percent higher than actual costs for fourteen out of twenty-eight regulations; advance estimates were more than 25 percent too low in only three of the twenty-eight cases.29 Before the 1990 Clean Air Act Amendments took effect,
industry anticipated that the cost of sulfur reduction under the amendments would be $1,500 per ton. In 2000, the actual cost was less than $150 per ton. Of course, not all cost-benefit analyses overstate the actual costs of regulation. But given the technology-forcing character of environmental regulations, it is not surprising to find a marked propensity to overestimate the costs of such rules.

In a related vein, many companies have begun to discover that environmental protection can actually be good for business in some respects. Increased energy efficiency, profitable products made from waste, and decreased use of raw materials are just a few of the cost-saving or even profit-making results of turning more corporate attention to environmentally protective business practices. Cost-benefit analyses typically do not take such money-saving possibilities into account in evaluating the costs of regulation.

The Many Alternatives to Cost-Benefit Analysis

A common response to the criticisms of cost-benefit analysis is a simple question: what’s the alternative? The implication is that despite its flaws, cost-benefit analysis is really the only tool we have for figuring out how much environmental protection to provide. This is just not true. Indeed, for more than thirty years, the federal government has been protecting human health and the environment without relying on cost-benefit analysis. The menu of regulatory options that has emerged from this experience is large and varied. Choosing among these possibilities depends on a variety of case-specific circumstances, such as the nature of the pollution involved, the degree of scientific knowledge about it, and the conditions under which people are exposed to it. As the following brief sketch of alternatives reveals, cost-benefit analysis—a “one-size-fits-all” approach to regulation—cannot be squared with the multiplicity of circumstances surrounding different environmental problems.

For the most part, environmental programs rely on a form of “technology-based” regulation, the essence of which is to require the best available methods for controlling pollution. This avoids the massive research effort needed to quantify and monetize the precise harms caused
by specific amounts of pollution, which is required by cost-benefit analysis. In contrast, the technology-based approach allows regulators to proceed directly to controlling emissions. Simply put, the idea is that we should do the best we can to mitigate pollution we believe to be harmful.

Over the years, EPA has learned that flexibility is a good idea when it comes to technology-based regulation, and thus has tended to avoid specifying particular technologies or processes for use by regulated firms; instead, the agency has increasingly relied on “performance-based” regulation, which tells firms to clean up to a certain, specified extent, but doesn’t tell them precisely how to do it. Technology-based regulation generally takes costs into account in determining the required level of pollution control, but does not demand the kind of precisely quantified and monetized balancing process that is needed for cost-benefit analysis.

Another regulatory strategy that has gained a large following in recent years is the use of “pollution trading,” as in the sulfur dioxide emissions trading program created for power plants under the 1990 Clean Air Act Amendments. That program grants firms a limited number of permits for pollution, but also allows them to buy and sell permits. Thus, firms with high pollution control costs can save money by buying permits, while those with low control costs can save money by controlling emissions and selling their permits. The fixed supply of permits, created by law, sets the cap on total emissions; the trading process allows industry to decide where and how it is most economical to reduce emissions to fit under the cap. Trading programs have become an important part of the federal program for controlling pollution. These programs, too, have not used cost-benefit analysis in their implementation. Congress, EPA, or other officials set the emissions cap, and the market does the rest.
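The economics of the trading mechanism described above can be illustrated with a toy example. The firms, permit allocations, and abatement costs below are invented for illustration; they are not drawn from the actual sulfur dioxide program:

```python
# Toy cap-and-trade example: two firms, a fixed permit supply, and
# trading that shifts abatement to the firm that can do it cheaply.
# All numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    emissions: int      # tons emitted with no controls
    permits: int        # permits allocated under the cap
    cost_per_ton: int   # marginal abatement cost, dollars per ton

    def abatement_cost(self, tons_abated: int) -> int:
        return tons_abated * self.cost_per_ton

low = Firm("LowCost", emissions=100, permits=60, cost_per_ton=200)
high = Firm("HighCost", emissions=100, permits=60, cost_per_ton=1_000)
cap = low.permits + high.permits  # 120 tons allowed in total

# No trading: each firm must abate down to its own allocation.
no_trade = (low.abatement_cost(low.emissions - low.permits)
            + high.abatement_cost(high.emissions - high.permits))

# Trading: the low-cost firm abates the entire 80-ton shortfall and
# sells its 40 spare permits to the high-cost firm.
total_shortfall = (low.emissions + high.emissions) - cap
with_trade = low.abatement_cost(total_shortfall)

print(f"Cap on total emissions: {cap} tons (unchanged by trading)")
print(f"Total abatement cost, no trading:   ${no_trade:,}")
print(f"Total abatement cost, with trading: ${with_trade:,}")
```

Whatever permit price the two firms settle on (anything between their $200 and $1,000 marginal costs makes both better off), total emissions stay at the 120-ton cap; the trade only changes who does the abating, which is the point of the mechanism. Note that nothing in the sketch sets the cap itself; as the text explains, that number comes from legislation, not from cost-benefit analysis.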
It is theoretically possible that cost-benefit analysis could be used to choose the overall limit on pollution that guides both performance-based and market-based regulatory programs. However, this has not been standard practice in the past; the limit on sulfur emissions in the 1990 Clean Air Act Amendments, for example, was set by a process of political compromise. Given the problems with cost-benefit analysis, political compromise cannot be viewed as an inferior way to set a cap on emissions. Many regulatory programs have been a terrific success without using cost-benefit analysis to set pollution limits.


One last example is informational regulation, which requires disclosures to the public and/or to consumers about risks they face from exposures to chemicals. These “right-to-know” regimes not only allow citizens and consumers to know about the risks they face, but also empower them to do something about those risks. The Toxics Release Inventory created by the Emergency Planning and Community Right-to-Know Act, the product warning labels required by California’s Proposition 65, and the consumer notices now required regarding drinking water that contains hazardous chemicals are all variants of this type of information-based regulation. Not one of these popular and effective programs relies on cost-benefit analysis.

The arguments for flexible, technology-based regulation and for incentive-based programs like pollution trading and disclosure requirements are sometimes confused with the arguments for cost-benefit analysis. But both technology-based and incentive-based regulation take their goals from elected representatives rather than from economic analysts, even though the means adopted by these regulatory strategies are strongly influenced by attention to costs. The current style of cost-benefit analysis, however, purports to set the ends, not just the means, of environmental policy, and that is where its aspirations amount to arrogance. Economic analysis has had its successes and made its contributions; it has taught us a great deal over the years about how we can most efficiently and cheaply reach a given environmental goal. It has taught us relatively little, however, about what our environmental goals should be.
Indeed, while economists have spent three decades wrangling about how much a human life, or a bald eagle, or a beautiful stretch of river is worth in dollars, ecologists, engineers, and other specialists have gone about the business of saving lives and eagles and rivers, without waiting for formal, quantitative analysis proving that saving these things is worthwhile.

Conclusion

Two features of cost-benefit analysis distinguish it from other approaches to evaluating the advantages and disadvantages of environmentally protective regulations: the translation of lives, health, and the natural environment into monetary terms, and the discounting of harms to human
health and the environment that are expected to occur in the future. These features of cost-benefit analysis make it a terrible way to make decisions about environmental protection, for both intrinsic and practical reasons.

Nor is it useful to keep cost-benefit analysis around as a kind of regulatory tag-along, providing information that regulators may find “interesting” even if not decisive. Cost-benefit analysis is exceedingly time- and resource-intensive, and its flaws are so deep and so large that this time and these resources are wasted on it. Once a cost-benefit analysis is performed, its bottom-line number offers an irresistible sound bite that inevitably drowns out more reasoned deliberation. Moreover, given the intrinsic conflict between cost-benefit analysis and the principles of fairness that animate, or should animate, our national policy toward protecting people from being hurt by other people, the results of cost-benefit analysis cannot simply be “given some weight” along with other factors, without undermining the fundamental equality of all citizens—rich and poor, young and old, healthy and sick.

Cost-benefit analysis cannot overcome its fatal flaw: it is completely reliant on the impossible attempt to price the priceless values of life, health, nature, and the future. Better public policy decisions can be made without cost-benefit analysis by combining the successes of traditional regulation with the best of the innovative and flexible approaches that have gained ground in recent years.



Diagnosis: Mercury
Money, Politics, and Poison
Jane M. Hightower, M.D.

Contents

Paperback $25.00 ISBN: 9781610910026

Preface
Chapter 1: The Discovery
Chapter 2: Finding My Way
Chapter 3: The Media Meets the Victims
Chapter 4: Spreading the News
Chapter 5: A Spoonful of Mercury
Chapter 6: Making Money with a Menace
Chapter 7: The Summit
Chapter 8: Feeling the Heat in Mercury Politics
Chapter 9: The Canadian Mercury Scare
Chapter 10: Dr. Sa’adoun al-Tikriti
Chapter 11: Fishy Loaves
Chapter 12: Fishing with the FDA for Evidence in Iraq
Chapter 13: Fishing with the Industry for Evidence in Iraq
Chapter 14: From American Samoa to Peru
Chapter 15: The Political Realm of Seychelles versus Faroes
Chapter 16: The Mercury Study Report
Chapter 17: Strategic Errors and Redundant Tactics
Chapter 18: The Canning of Proposition 65 Mercury Warnings
Chapter 19: Diagnosis Mercury
Notes
Bibliography
Index


Chapter 19

Diagnosis Mercury

WHEN I BEGAN INVESTIGATING my patients’ elevated blood mercury, I sought to gain a clear understanding of how mercury affects one’s health. When information was not at my fingertips, I wondered why. What began as an investigation to help me diagnose mercury-related symptoms in my patients grew into another diagnosis—that of a broken, misused, and abused regulatory system. The entire regulatory process needs an overhaul, along with the way we assess environmental factors in disease. The current system is still too susceptible to abuses from corporations that pay “scientists” to say whatever it is they want them to say and that have undue influence over the writing of government reports and policy. The researchers who have the least amount of bias are often the ones not attached to any big money-gathering organizations and are therefore poorly funded to do careful and much-needed research on the effects of environmental toxicants. Some of our brightest researchers will continue to be recruited by the industry that has the finances to hire them; therefore, open books and peer review will be important factors to ensure that their science is sound.

In the global scheme, methylmercury, whether natural or added by humans, is in our fish. The distinction of natural versus adulterated has no bearing on how I address individuals at risk in my office. The argument of natural versus adulterated exists only because of how lawmakers and the food industry wrote the rules of labeling for disclosure to the consumer. It is a way to escape tighter regulation on unhealthful contaminants in food and allow the food industry to sell its products without much challenge to the contaminants’ health effects.



As for coal-fired power plants and other industries that emit mercury, we at least need to enforce the present laws to keep mercury and other toxicants from further contaminating the environment. China is a long-recognized major contributor to the world’s mercury pollution. The many smokestacks belching out mercury and other toxicants can affect the health of the people in that nation as well as their neighbors. Locally deposited mercury finds its way into the fish and other foods. China is a fish-loving nation, but so is the United States. We have coal-fired power plants that emit mercury, too, and they deposit their toxicants locally and broadly.

For every 10 mcg/l elevation in blood mercury for a pregnant woman, there can be a drop in the intelligence quotient (IQ) of the child. A drop in intelligence of only five points over a population means fewer people in the very smart category and more who will need extra help in their lives, resulting not only in many lost personal opportunities but also in a strain on the society. Researchers have estimated the impact of lower IQ from power plant mercury emissions for the lifetime of the individual in regard to diminished economic productivity alone—for a cohort of exposed children in the United States, the loss was estimated to be about $8 billion annually (dollars in 2000).1

While there have been gains in pollution abatement in the United States since the early 1970s, much is yet to be accomplished, and mercury remains a relatively neglected area. In 1975, W. Eugene Smith wrote in the prologue to Minamata that when he was about to venture to Japan to investigate the Minamata disaster in the early 1970s, a scientist at a conference on mercury in Rochester approached him to ask, “Why go there? It’s finished in Minamata.
What are you going to find to photograph?” Smith went on, “Not only did we find that ‘it’ was not finished, but we became convinced that ‘it’—whether the poison is mercury, or asbestos, or food additives, or radiation, or something else—is closing more tightly upon us each day. Pollution growth is still running far ahead of any anti-pollution conscience.”2 The scientist who thought the Smiths did not need to go to Japan, according to Aileen Smith, was, ironically, Thomas Clarkson.3 In 2008, our regulatory system is still struggling to keep up with the polluters, and our health care team is still in the dark when it comes to
environmental toxicants. As for mercury, it is important to recognize the subjective symptoms of overexposure that precede “overt” toxicity. Those who have the lesser symptoms—such as fatigue, headache, upset stomach, and the like—also have reduced productivity, as they incur more days off of work and are not as productive when they are at work. One of my patients in the mercury study went on temporary disability, while two others took early retirement. The patients who had side effects from their mercury exposure also incurred medical bills in the thousands of dollars in order to confirm the diagnosis. These expenses and distress could have been altogether avoided with better awareness on the part of the public and medical establishment and better attention from the FDA and other regulatory agencies. If we allow industry-funded researchers to define what constitutes toxicity, we run the risk of failing to properly diagnose and remedy the situation. As we’ve seen in one court case after another, there’s been little focus on the serious harm that chronic mercury exposure may do, and the canneries and other interests have had a huge effect on what happens in these cases.

What, then, are the main concerns that are worrying researchers today about methylmercury? As for children, because their brains are still growing and methylmercury interferes with a cell’s ability to divide, the effects of methylmercury can be permanent. In-utero, chronic, low-end exposures show up later in school in the form of reduced performance on some tests of language, coordination, and intelligence. Adverse cardiac effects and other neurological and psychiatric effects are still being investigated. As for childhood development in the face of methylmercury, more studies are certainly needed. Fortunately, a cohort study in the United States is now under way.
In 1999–2002, 341 mother-child pairs in Massachusetts entered into a study to investigate the effects of mercury when the children were exposed in utero. The mothers’ red blood cells were tested for mercury in the second trimester (range: 0.0–9.0 mcg/l blood). The authors, some of whom were consultants of the food industry, concluded that neurodevelopmental testing of the children at three years of age yielded the following observations: Modest mercury elevation was associated with lower childhood cognition at exposure levels substantially lower than in populations previously studied; no safe level for methylmercury was found in their study; and if mercury contamination was not present, the cognitive benefits of fish intake would be greater.4

As for what to do with such information, physicians and consumers need to be ever more diligent as to the types of fish and the quantity consumed, along with contaminant levels. In recent years, a number of health care workers and patients have called my office to ask my opinion as to what to do about an elevated mercury level found during a pregnancy. One woman, age thirty-eight, was twelve weeks pregnant with her first child when she discovered that her blood level was 48 mcg/l. The test was repeated and confirmed. The patient had seen on the news that there were high mercury levels in the fish that she had been consuming during pregnancy, so she asked her gynecologist for the test. When the result came in, her genetics counselor called to ask my opinion on the matter. Since there was no way to accurately predict the outcome for this mother’s child, much soul searching transpired. She decided to go through with the pregnancy and delivered a healthy baby. It is too early to tell whether the child will have any adverse effects, but this mother decided to accept whatever was given to her regardless of any mercury effects that may become apparent in her child’s future. Other women are now finding themselves in similar predicaments. Perhaps if the FDA practiced face-to-face clinical medicine with the public and realized the anguish of the people who are affected by the impact of mercury, it would stop hiding behind ghostly advisories and toothless policies.
When it comes to chronic methylmercury exposure with fish consumption in adults, research attention encompasses cardiovascular disease, nonspecific symptoms, adverse neuropsychiatric effects, infertility, and autoimmune effects. The effects of mercury on the adult brain have been most commonly known among the hatters, but that was inorganic mercury. Now evidence is gathering that long-term methylmercury exposure can also lead to damage to the adult brain, including a diminished capacity to process information, reduced mental functioning, forgetfulness, insomnia, and abnormal eye movements. The threshold at which these effects will occur or become permanent is still being investigated, as we’ve seen; therefore, it would be wise not to chronically elevate your blood mercury level over 5 mcg/l.5 Many of my fish-loving patients will get a blood mercury test as part of their physical exam to see if they are staying below this level. We also look at their diet as a whole and what they consumed in the week before the test, and then make adjustments as necessary.

Infertility induced by methylmercury exposure is a growing concern. Several studies indicate that chronic exposure to methylmercury can affect several aspects of semen: reduced concentration of sperm, lower percentage of morphologically normal sperm, lower percentage of mobile sperm, and reduced velocity and movement of sperm in general. These changes resulted in reduced fertility among men. Females with unexplained infertility have also sometimes been noted to have higher mercury levels.6

Perhaps the most difficult effect to sort out is the association of methylmercury with autoimmune disease. Autoimmune disease occurs when antibodies to one’s own tissues and blood, called autoantibodies, are made. Humans are capable of making numerous autoantibodies, and new ones are discovered every year. Interestingly, multiple types of agents, including viruses and pollutants, can trigger their production in the human body. Not everyone can form these various autoantibodies, and genetic predisposition is a key factor in autoantibody production. That various forms of mercury can trigger production of autoantibodies in animals has been shown in a number of studies. Only recently has a similar mercury-induced process begun to be looked at in humans, and the threshold at which autoantibodies are triggered in genetically susceptible individuals currently remains unknown.
People who themselves or whose family members have a history of autoimmune disease, such as lupus, thyroiditis, scleroderma, autoimmune kidney disease, or rheumatoid arthritis, should consume foods and omega-3 fatty acid supplements from sources that have little or no mercury contamination until further research can be conducted.7

When I looked at high-end fish consumers among my patient population, many of them not only had the symptoms that had been described throughout the literature, but also had diseases and conditions with which mercury had been correlated. Atherosclerosis, myocardial infarctions, strokes, autoantibodies, neuropsychiatric complaints, infertility, and nonspecific symptoms were all present in my study population and among subsequent patients coming to me for a mercury consultation. In medicine, we often cannot prove cause and effect, but we can say that a symptom profile is consistent with a diagnosis. It is in-the-trenches observations like these that could lead to larger studies so that the full spectrum of the effects of methylmercury on humans can be determined.

In the 1990s, Japanese researchers looked further into their “lesser Minamata” profile.8 They identified subjective symptoms that could easily be missed or denied by patients with an economic interest in the outcome of the study or overstated by those wishing to win a lawsuit against the polluters. The researchers found statistically significant correlations between mercury consumption and the symptoms studied, as follows: muscle stiffness, dysesthesia (abnormal sensations, especially the sense of touch), hand tremor, dizziness, loss of pain sensation, muscle cramps, upper arm muscular atrophy, joint pain, back pain, leg tremor, ringing in the ears, muscular atrophy, chest pain, palpitations, fatigue, visual dimness, and staggering. Symptoms with statistical significance for men only were thirst and difficulty with urination. Those with statistical significance in women only were muscular weakness, urine incontinence, forgetfulness, and insomnia. The researchers concluded that chronic methylmercury poisoning shows a late onset of symptoms or an increase in symptoms after several years of exposure and has atypical and subclinical features unlike those of classic Minamata disease with its overt methylmercury symptoms.
The problem with this study, though, was that the control population was still being exposed to methylmercury, so statistical significance for other symptoms could have been missed. My patient population had several of the above complaints and also hair loss, headache, trouble thinking and performing complex tasks, memory loss, insomnia, gastrointestinal upset, abdominal pain, metallic taste, and nausea. Those symptoms have also been
reported in the literature in association with mercury exposure.9 My patients had symptoms that have been correlated with methylmercury exposure from fish consumption, and they had corresponding elevated mercury levels, along with chronic exposure to methylmercury that was consistent with the above study. Their diagnosis was therefore consistent with lesser Minamata. Regardless of how small or unusual the population, there are people in the United States who have developed lesser Minamata (call it mercury poisoning, subjective methylmercury poisoning, or fish fog) by consuming oceangoing fish. This syndrome continues to be suppressed by industry-funded researchers and the industries themselves—both the polluters and the fishing industry. It is still unclear to me what role the FDA is trying to play in this issue. Some people, like Bettye Russow, have overt methylmercury poisoning. Such sufferers have been here all the time, buried in the rat race and unrecognized.

Most of my patients appear to have recovered from their nonspecific symptoms after they stopped consuming the methylmercury and their blood levels went down to less than 5 mcg/l. Some took two to six months, while others took more than a year to recover, and a few claimed that their mental function was never the same. The ones who took longer to recover tended to be those who had autoantibodies such as antithyroglobulin, antimicrosomal, and antinucleolar antibodies. A curious finding was that five patients were diagnosed with autoimmune liver disease, which is not a common diagnosis. I had no autoimmune liver disease patients in the rest of my practice. The liver specialists and I have continued to follow those patients for the last seven years. More research needs to be done to see if there is a correlation between mercury and this disease, as the liver is a major organ in the body where mercury tends to accumulate, and mercury is known to stimulate autoantibodies.
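The two-to-six-month recoveries reported here are roughly what a simple first-order elimination model would predict. A hedged sketch: the ~50-day blood half-life below is a commonly cited estimate for methylmercury, not a figure from this chapter, and real clearance varies considerably between individuals:

```python
# First-order (exponential) decay of blood methylmercury after
# exposure stops.
# ASSUMPTION: ~50-day blood half-life, a commonly cited estimate that
# does not come from this chapter; individual clearance varies.

import math

HALF_LIFE_DAYS = 50.0

def days_to_reach(start_ugL: float, target_ugL: float,
                  half_life: float = HALF_LIFE_DAYS) -> float:
    """Days for blood mercury to fall from start to target,
    assuming first-order elimination."""
    return half_life * math.log2(start_ugL / target_ugL)

# From the highest level mentioned in this chapter (29 mcg/l) down to
# the under-5 mcg/l range the author treats as a reasonable ceiling:
days = days_to_reach(29, 5)
print(f"29 -> 5 mcg/l: about {days:.0f} days ({days / 30:.1f} months)")
```

Under these assumptions the drop takes roughly four months, consistent with the shorter recoveries described above. The model says nothing about the year-plus recoveries, which the author links to autoantibodies rather than to blood levels alone.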
Another important observation was that a few individuals who claimed no symptoms at all had blood mercury levels up to 29 mcg/l. Among the few children in my study, some have had persistent damage consistent with mercury exposure, such as trouble with learning and memory. After I had the patients stop their mercury intake, or had them choose only low-mercury fish, the ones who had complained of muscle cramps had a curious experience. They would slowly get better after stopping the mercury consumption; then, as their mercury levels approached the point where they would be expected to fall below 2 mcg/l in blood, severe muscle cramps would occur for a day or two. The patients would feel better thereafter. Other colleagues have observed this in some of their patients, as well. Others in the field and I have noticed that there is one neurological test in particular that many adults whose blood mercury level is higher than 20 mcg/l have difficulty performing: patterned sequential movements such as pat-a-cake, even though they know how to do them. This resolves after blood mercury is reduced to less than 5 mcg/l. Patients with symptoms that are persistent or bothersome need to be in the hands of a good internal medicine or family practitioner (primary care physician) who will investigate all causes, as mercury is a diagnosis of exclusion. Over the last eight years, I have diagnosed many other conditions in people who thought mercury was causing their problem. In some of these cases, whether mercury contributed was a moot point, as the mercury side effects can mostly be resolved by stopping consumption, while a cancer or other serious condition will not be cured just because you stop consuming mercury. Sometimes patients can have symptoms of mercury side effects and a serious condition at the same time. As I mentioned earlier, the point at which lesser symptoms will lead to permanent damage is unknown at this time. For healthy living, I encourage all my patients to eat a balanced diet that includes fruits, vegetables, nuts and seeds, and healthy omega-3 choices, and to avoid processed foods with artificial ingredients. I also suggest pleasant exercise and medications only when indicated. For those at risk, I monitor the methylmercury intake through dietary recall and blood or hair analysis.
Stress reduction and adequate sleep are emphasized, as well. One of the biggest dilemmas in the realm of health effects of fish contaminants is the possible tradeoff between the contribution of methylmercury to increased risk of heart problems and the benefits of omega-3 fatty acids. The best review of the literature on evidence of this health threat so far comes from an ongoing study of Finnish men. Researchers showed that men with mercury levels in their hair of 2.03 mcg/g and greater (equivalent to about 8 mcg/l in whole blood) had about 1.6 times the risk of heart disease, heart attack, and death compared to those with hair mercury levels of less than 2.03 mcg/g. Those with the lowest mercury levels and with an elevated level of the good omega-3 fats in their blood had the least risk of heart disease.10 When I began looking at the cardiac studies and methylmercury human epidemiologic studies, I was especially curious about the cardiovascular disease risk of people in the Faroes and Seychelles. The Seychelles diet contained fish with mercury at a level that gave them the distinction of being one of the most methylmercury-exposed populations in the world. Cardiovascular disease was the number one killer in the Seychelles and accounted for 41 percent of all mortality. Of course, there could be a number of reasons for that, as there were multiple risk factors for heart disease, such as obesity, hypertension, diabetes, and smoking, evident in the population. I have not seen any published reports on methylmercury and heart disease in the Seychelles, however.11 It is interesting to note that the Faroese also have considerable heart disease risk, but their morbidity and mortality have declined in the last eight years, as have their mercury levels. In 2001, the risk of death from cardiovascular disease among the Faroese was 40 percent. In the United States, heart disease accounted for 29 percent of all deaths.
A high heart disease rate in populations that consume a lot of fish and methylmercury does not lend support to the notion that the benefits of omega-3 fatty acids outweigh the risk of methylmercury.12 The American Heart Association currently recommends that the average person eat a variety of preferably oily fish at least twice a week, as well as oils and other foods rich in alpha-linolenic acid, such as flaxseed, canola, and soybean oils, and walnuts. For those with documented coronary heart disease, an intake of 1,000 mg per day of the omega-3 fatty acids eicosapentaenoic acid and docosahexaenoic acid (EPA/DHA), either as fatty fish or as supplements, is recommended as of this writing. According to the mercury and omega charts provided by the AHA and the Environmental Protection Agency, however, it would take twelve ounces of light tuna or four ounces of white tuna to give an individual 1,000 mg of EPA/DHA. The mercury content, though, would then be higher than 30 mcg/day, which in turn could raise the risk of heart disease. A person who consumed 2 to 3 ounces of wild salmon or sardines instead could get the omega-3 fatty acids with about ten times less methylmercury. Two ounces per day of herring or wild salmon would not be expected to raise a person’s blood mercury over 5 mcg/l.13 Other non-fish sources of omega-3 fatty acids are some types of pigs, flax-fed chickens and their eggs, milk with an omega-3 supplement additive, and grass-fed beef. My undergraduate alma mater of California State University at Chico recently tested the fat content of beef fed with grain only, a grain and grass combination, and grass only. The grass-only diet significantly lowered the saturated fatty acid content and raised the omega-3 fatty acid content in the meat by 40 percent. Competition in sources of supply will be a healthy addition to the omega-3 discussion.14 For those who prefer supplements for omega-3s, the supplement industry has made efforts to distill out the contaminants and purify the EPA/DHA product. Your supplement label should state that it has been tested for the content of contaminants and EPA/DHA. The effect of taking omega-3 fatty acids in supplement form with concomitant use of aspirin (an antiplatelet drug) or of coumadin or heparin (anticoagulant drugs) is currently not known. Since these combinations can potentially increase bleeding in the body, I recommend that you check with your doctor before you begin or stop your supplements.15 Variety is the key. I cannot stress this enough. No one should be consuming either salmon or tuna every day, so mix it up. Rotate your poisons. Eat your fruits and vegetables.
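The tuna-versus-salmon arithmetic above can be sketched as a small back-of-the-envelope calculation. All per-ounce figures below are rough illustrations back-calculated from the numbers quoted in the text (1,000 mg EPA/DHA from twelve ounces of light tuna, four ounces of white tuna, or about 2.5 ounces of wild salmon, with the tuna dose carrying roughly 30 mcg of mercury and the salmon dose about a tenth of that); they are not measured values and are not a dietary-planning tool.

```python
# Back-of-the-envelope comparison of omega-3 dose vs. methylmercury intake,
# using only the approximate figures quoted in the text (AHA/EPA charts):
#   - 12 oz light tuna or 4 oz white tuna ~ 1,000 mg EPA/DHA, ~30 mcg Hg
#   - 2 to 3 oz wild salmon ~ 1,000 mg EPA/DHA at about ten times less Hg
# Per-ounce values are rough back-calculations, not measured data.

FISH = {
    # name: (mg EPA/DHA per ounce, mcg methylmercury per ounce), approximate
    "light tuna":  (1000 / 12, 30 / 12),
    "white tuna":  (1000 / 4,  30 / 4),
    "wild salmon": (1000 / 2.5, 3 / 2.5),
}

def servings_for_omega3(fish, target_mg=1000):
    """Return (ounces needed for the target EPA/DHA dose, mcg of mercury
    that comes along with those ounces)."""
    omega_per_oz, hg_per_oz = FISH[fish]
    ounces = target_mg / omega_per_oz
    return ounces, ounces * hg_per_oz

for name in FISH:
    oz, hg = servings_for_omega3(name)
    print(f"{name}: {oz:.1f} oz for 1,000 mg EPA/DHA, carrying ~{hg:.0f} mcg Hg")
```

Run this way, the same omega-3 dose costs about 30 mcg of mercury from either tuna choice but only about 3 mcg from wild salmon, which is the point the chapter is making.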
Beyond that general advice, your consumption patterns, the type of fish caught or purchased, and circumstances such as your reproductive status and concurrent health should all be taken into consideration. The more fish one consumes, the lower in contaminants one would want it to be. Those who desire to consume fish every day need to be particularly diligent in educating themselves on mercury levels and other contaminants in different kinds of fish in their area. Even as recently as November 2006, a physician in his sixties, who knew nothing about any advisory from EPA, FDA, AHA, NAS, ATSDR, or a host of other acronymic agencies, consulted me about his own blood mercury level of 25 mcg/l. He was suffering from fatigue, memory loss, tremor, and a generalized ill feeling that had been waxing and waning for months. He had increased his canned tuna consumption to lose weight. After discovering that his problem could be related to his intake of methylmercury, he became diligent at keeping his blood mercury level below 5 mcg/l. At his one-year followup, he was doing well. Should you want to estimate your likely methylmercury intake, here is some information that might be helpful. In calculating your own exposure to methylmercury using the FDA tables on the Web, it is important to know that the ranges of mercury level in the fish are not true ranges. The FDA recently disclosed that each test it conducted was actually on a blended mixture of twelve different fish. The average, therefore, is an average of averages, and the highest number given in the range is lower than if individual fish were tested each time. Some nongovernment groups have been assisting the public in these calculations. Web sites such as http://gotmercury.org provide useful tools for those who want to continue to enjoy fish and maintain their mercury level at less than 5 mcg/l. Hopefully, more information on PCB levels will also become readily available. To make sense of the FDA lists of fish with their mercury levels when you go to a grocery store or restaurant, several cautions are needed. Restaurants will sometimes substitute one fish for another, and there is no regulation against it.
This substitution is usually based on taste and availability, without consideration of the amount of mercury or PCBs in the substituted fish. Be sure you do not confuse the size of the portion with the size of the fish. The steak fish, such as tuna, halibut, and swordfish, are only a small piece of a large fish. Consider also that the common names of fish can change depending on the region where they are sold.



In choosing fish, as mentioned before, it is the large predatory fish such as swordfish, shark, king mackerel, tilefish, large tuna, sailfish, sea bass, grouper, pike, and large halibut that are usually longer lived, more predacious, and contain more mercury. They can also sometimes accumulate PCBs. Essentially, the smaller fatty fish, such as sardines, small mackerel, herring, anchovies, and salmon, are higher in omega-3 fatty acids but can be higher in PCBs and other contaminants, as well. These contaminants are concentrated more in the skin and fat; therefore, letting the juices flow out when you cook them and not eating the skin or fat can reduce your exposure. Methylmercury, on the other hand, is mostly in the muscle tissue, and no cooking methods have been shown to cook it out of the fish.16 For noncommercial game fish, you need to consult your state Department of Fish and Game or the Web to check for the latest testing and advisories for the water body you are going to fish. The best way to assess your exposure is to estimate your dose. There is no way to tell just how much mercury you really have in your system, especially since organs such as the kidney, liver, brain, muscle, gastrointestinal tract, and heart can have varying concentrations within the same body. Testing the mercury level in your hair will tell you your average excretion into the hair over a month of consumption, however. Hair grows about one centimeter per month on average, and mercury is strongly bound in hair. It cannot be washed out. You can check the mercury level at various lengths of your hair; hair will record the levels over time like rings on a tree. (How and whether permanents, straighteners, or dyes affect hair mercury levels is currently not clearly known.) A blood level will give you a “spot check” and reflects both recent diet and accumulation.
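The hair and blood figures scattered through this chapter can be tied together in a short sketch. The hair-to-blood ratio below is back-derived from the equivalence quoted earlier from the Finnish study (2.03 mcg/g in hair corresponding to about 8 mcg/l in whole blood) and the one-centimeter-per-month growth rate; the true ratio varies between individuals, so this only illustrates the arithmetic and is not a clinical tool.

```python
# Two figures quoted in the text, tied together:
#   - hair grows about 1 cm per month, recording mercury like tree rings
#   - 2.03 mcg/g mercury in hair ~ 8 mcg/l in whole blood (Finnish study)
# The hair-to-blood ratio is back-derived from those numbers and varies
# between individuals; this illustrates the arithmetic, nothing more.

HAIR_TO_BLOOD = 8 / 2.03  # mcg/l blood per mcg/g hair, approximate

def blood_from_hair(hair_mcg_per_g):
    """Approximate whole-blood mercury (mcg/l) from a hair level (mcg/g)."""
    return hair_mcg_per_g * HAIR_TO_BLOOD

def months_recorded(segment_cm, growth_cm_per_month=1.0):
    """Months of diet history reflected by a hair segment of this length."""
    return segment_cm / growth_cm_per_month

# A 3 cm segment averaging 1.5 mcg/g reflects about 3 months of intake
# at a blood level of roughly 5.9 mcg/l, above the 5 mcg/l target.
print(months_recorded(3), round(blood_from_hair(1.5), 1))
```

The useful property, as the text notes, is that a single strand analyzed segment by segment gives a month-by-month exposure record, while a blood test gives only a single snapshot.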
~ • ~

The “nuanced” and “targeted” approach of the FDA’s mercury warning system has left many consumers at risk from fish contaminants such as mercury and PCBs. The agency’s 1 mcg/g methylmercury-in-fish action level is not adequate and not enforced, and it was derived using data that was flawed as well as misrepresented, as we’ve seen.



The EPA went beyond the action level in fish and identified a tolerable blood limit for mercury in humans and determined a dose that corresponds to that limit. The FDA, unfortunately, has still not fully embraced the EPA’s human limit for mercury. Whether this human limit is protective for adults and the most sensitive individuals over long-term exposure will continue to be debated, considering the cardiac literature, but adoption of that standard by the FDA would be a move in the right direction. The government agencies could do much to involve practicing clinicians in the mercury debate and in discussions over other potentially harmful chemicals to bridge the gap between public health policy and the individual patient. Physicians routinely discuss behavior and risk with individuals. My patients come to my office with their computers in hand. They have access to information just as we do. We as physicians and health care associates need to be there to let the public know how to navigate the information highway and analyze the data so that they can make informed decisions about their health. But there is something wrong with the system when a serious problem with a food item is evident enough that it gains warnings from the FDA, yet physicians across the nation, from the private doctor’s office to the American Medical Association, typically do not know much if anything about it. Confusion over the health effects of mercury is but one example of what happens when money, power, politics, and health collide over a pollutant. Because physicians were unaware of mercury’s relatively low-dose health effects and how to recognize and address them, no case reports from physicians were being filed with the FDA. In turn, the FDA did not see the need to warn physicians and the public and concluded that no one in the United States was eating enough fish to cause adverse effects from mercury.
Our government agencies are inundated not only with the monitoring of food, drugs, and the environment, but also with pressure from industry, special interest groups, and nongovernment organizations, all with a stake in the final ruling. For the medical establishment to play an effective role in these decisions that ultimately affect our patients’ health, environmental medicine and nutrition need to become part of a medical school education. The health care team needs to be involved in understanding environmental issues, learning how those issues may affect the health of patients, and bringing the importance of environmental factors to wider attention when we see their significance at the front lines. People in medicine have an important role to play, but to cope with the range of issues surrounding pollutants such as mercury, it takes public awareness and concern, reform of the regulatory process, scientific investigation free of outside influence, and willingness of industry to take some action for the common good. None of us can possibly know everything. But there’s much we can discover from mercury’s history and effects on its victims, and from listening to each other.


Chasing Molecules Poisonous Products, Human Health, and the Promise of Green Chemistry Elizabeth Grossman

Contents

Paperback $21.95 ISBN: 9781610911610

Prologue Chapter 1: There’s Something in the Air Chapter 2: Swimmers, Hoppers, and Fliers Chapter 3: Laboratory Curiosities and Chemical Unknowns Chapter 4: The Polycarbonate Problem Chapter 5: Plasticizers: Health Risks or Fifty Years of Denial of Data? Chapter 6: The Persistent and Pernicious Chapter 7: Out of the Frying Pan Chapter 8: Nanotechnology: Perils and Promise of the Infinitesimal Chapter 9: Material Consequences: Toward a Greening of Chemistry Epilogue: Redesigning the Future Acknowledgments Appendix: The Twelve Principles of Green Chemistry and Further Information Notes Select Bibliography Index


CHAPTER NINE

Material Consequences Toward a Greening of Chemistry

“I’m a very skeptical scientist,” says Terry Collins of Carnegie Mellon’s Institute for Green Science. “The stakes of the chemical enterprise are incredibly high.” Proponents of green chemistry, I had quickly discovered, are very much on a mission. All speak with an urgency more typical of political and social campaigns than the practice of science. Then I began to realize that, right now, green chemistry entails all three. “Civilization is highly chemical in its nature,” says Collins, a tall, impressively outspoken native of New Zealand. After the invention and commercialization of the first chemical dye in England in the 1850s, he explains, the chemical industry was “off to the races.” When he talks about chemistry, Collins likes to put current chemical production in historical context, sometimes taking the story all the way back to the Romans’ use of lead. “If it was 1500 and we were in an Italian hill town, you could only impact the people who you meet. Now every time you turn on the car, you’re affecting thousands of children. We can now impact people we never meet with these chemicals. We can affect babies yet to be born. Technology and science have given rise to a whole new category of ethics. There’s no precedent for it.”



We must eliminate elements such as lead (in batteries) and mercury (in fluorescent lights), and do better with nanotechnology, he tells me. “We need to use only elements that we [ourselves] are made of in catalysts and polymers we’re going to make commercially,” Collins argues. And it would be better for our health and the environment to use more “recently dead plants rather than fossilized plants—that is, petroleum.” “The underlying assumption that the chemical industry has been built on assumed that any useful chemical commercialized for anything other than drug purposes will not have a profound impact on human health and the environment,” suggests Collins, reminding me, just as John Warner and Paul Anastas have, that chemists typically are given no training in toxicology, an oversight Collins likens to “giving people keys to the car with no drivers’ training.” He also reminds me that of the approximately 80,000 chemicals currently in commerce, only about 4,500 are pharmaceuticals. This means that the vast majority of synthetic chemicals that go into products that surround us were designed with the assumption that their chemical constituents would not be entering our bodies or otherwise affecting ecological systems. “It stretches comprehension and it’s a very flawed concept,” says Collins, “to think that commercialized chemicals like pesticides will not do anything to us.” “Green chemistry,” says Collins, “is a major paradigm shift. And there is no option but for us to do this.” But, he points out, “It’s easier to say what green chemistry isn’t rather than what it is.” “The fundamental concept of green chemistry,” Collins tells me, can be spelled out in an equation: “Risk equals exposure times hazard [Risk = Hazard × Exposure]. As green chemists, let’s try to understand the hazard and try to get the hazard out. 
We have to turn the aircraft carrier around and get the hazard out.” Another aspect of this metaphorical ship is that we’ve relied on our preferred energy source—petroleum—to supply the base for so many of our current synthetics. “If you don’t have the energy problem fixed, it overwhelms everything else,” notes Collins. “A hundred years ago, the chemical industry was terrible about protecting us from chemicals that kill cells. Now we’re dealing with chemicals that disrupt cellular development, chemicals that interact with DNA and may cause mutations that can lead to cancer. The stakes of not dealing with endocrine disrupters are very high. We need to address endocrine disrupters from inside chemistry.” It all comes back to chemical design, Collins believes. “The body has a magnificent mechanism for destroying chemicals,” says Collins. And some chemicals need to be persistent. “Drugs must be persistent to work. But when they get into rivers and lakes—what does that mean over the long term?” Yet, he points out—alluding to the endocrine-disrupting compounds found in so many personal care products, cosmetics, gadgets, and textiles—persistent compounds are being used to “gloss up the life of adults while messing up the life of kids. There needs to be a mandate of intergenerational responsibility in a way we’ve never seen before.” “There’s a fracture in the world of research, with research threatening the status quo of corporate culture. Real-time profits are going to be challenged and it’s extremely threatening to certain segments of corporate culture,” says Collins. “How do you respond to a new product when there is a problem? Do you pretend it doesn’t exist? We need to talk about it publicly. These issues really, really matter, and we need to do something about them.” The morning after a presentation he’s given to the Oregon Environmental Council, I have a conversation with Collins over breakfast. “Capitalism can’t work for sustainability without credible government constraints,” he tells me. “We’ve been obsessed by technical performance and entirely missed anticipating bioaccumulation.” When I ask how he got interested in what has become the field of green chemistry, Collins talks about a summer he spent in the early 1970s working for a refrigerator manufacturer in New Zealand. “That’s when I learned how toxic benzene was,” he says. What he discovered was that in the process of sealing the refrigerator liners, hot tar was used.
To clean off tar that splashed onto the refrigerators’ enamel, workers used rags dipped in solvent. These workers were getting nosebleeds and headaches, which Collins eventually connected with symptoms of exposure to benzene. It turned out the solvent being used was indeed made with benzene and with two related aromatic hydrocarbon compounds, xylene and toluene—both of which are highly toxic. It was this event—which ended with the frustration of being told by both company chemists and the New Zealand Institute of Chemistry that there was little that could be done—and a case at about the same time of babies in New Zealand born with birth defects, likely as a result of local pesticide spraying, that made Collins acutely aware of the toxic perils of industrial chemicals. This concern has stayed with him throughout his career, which includes a 1999 U.S. EPA Presidential Green Chemistry Challenge award for developing what are called “activator chemicals,” catalysts that work with hydrogen peroxide to perform industrial tasks traditionally performed by chlorine bleach. These compounds, which Collins’s research group at Carnegie Mellon calls TAML catalysts, can be used in the wood pulp and paper industry, where use of chlorine compounds typically results in waste products that include dioxins. By taking chlorine out of the equation and substituting a compound that breaks down into water and oxygen, a basic hazard is eliminated from the entire process. Perhaps even more significantly, these hydrogen peroxide compounds can be designed to detoxify specific persistent pollutants, including organochlorine pesticides, estrogen, and estrogenic compounds.1 A number of these compounds have been patented by Carnegie Mellon and are now being adapted for industrial use. These catalysts are “ligands,” which Collins explains is simply the name for a grouping of atoms placed around a catalytic metal site that conveys particular biochemical and physical properties to that metal. While these compounds can be created through chemical engineering, they also exist in nature. One example of naturally occurring—and critically important—ligands is the iron-based enzymes found in the liver.
Their job is to detoxify or break down other compounds that may enter the body. This they do through chemical reactions that result in benign substances—oxygen and water.



“Our catalysts,” he says of the ligands his lab has been producing, which are modeled on the liver’s detoxification process, “are about 1 percent of the size of natural enzymes, but they’re designed to do the same thing. . . . We created equivalent molecules in the laboratory—catalysts that will break larger compounds down to elemental levels.” His lab has produced more than twenty TAML ligands so far, Collins tells me. Each one conveys different properties to iron, and each can interact—like an enzyme—with specific pollutants to break them down. These compounds can be used to degrade contaminants that are not removed from municipal water supplies by wastewater treatment plants. Other TAML peroxide compounds remove the dark colors associated with paper production and in the process eliminate the need for, and smell associated with, organochlorines—that distinctive pulp-mill tang. The fact that these compounds can also prevent dye transfer, Collins explains, means there could be a large benefit in adding small amounts of these catalysts to commercial laundry detergent. If you don’t have to worry about dye transfer, there’s no reason to separate colors from whites. This ultimately means fewer laundry loads—particularly for commercial laundries but also in the textile industry—and thus less use of both energy and water. These catalysts can destroy most oxidizable things—compounds that can be broken down chemically with oxygen—in water, making the technology “revolutionary in terms of what it can do for cleaning water,” says Collins of his TAML compounds. “They’re not commercially available yet,” says Collins, “but we’ve started to make commercial quantities. We have done fairly major field trials—and done detailed mechanistic studies in the lab.
But from an environmental point of view, I don’t want to have any nasty surprises further down the road, so we’re very careful about watching the degradation all the way down, keeping a close eye on the toxicity as we go,” he says. The effort to eliminate hazardous chemicals “will be no easy ride,” Collins wrote in an op-ed for the Pittsburgh Post-Gazette in 2008. “We have been part of the way before, but we have ceded much of the landscape to industry ‘highwaymen’ who would prefer to keep the status quo with toxic chemicals to protect their bottom line.” What we need to do, he says, is effectively regulate chemicals that impair development and require that all ingredients in products be listed.2 “The right thing is starting to happen throughout the United States and the world on this,” says Collins, alluding to growing awareness and restriction of some of these persistent and pervasive compounds. “But it’s very contentious,” he says. Collins, for one, is not shying away from this engagement. “Green chemistry,” he says in his public lectures, “has to be, to some extent, a contact sport,” because, as he points out emphatically, “our civilization is not sustainable as currently configured.”

✣✣✣

To find out what’s going on where this contact sport of green chemistry is being played, I went to the eleventh annual Green Chemistry & Engineering Conference sponsored by the American Chemical Society, held at the Capital Hilton in Washington, D.C., over several hot, steamy days in late June 2007. Among the hundreds of attendees were representatives of major chemical and pharmaceutical companies, agribusiness conglomerates, NASA, and the U.S. Defense Department, along with academic scientists, consultants and business representatives from around the world, and some nongovernmental organizations that promote sustainable production policies. The PowerPoint presentations were filled with chemical equations and molecular diagrams, but the conference atmosphere was oriented more toward business than research science or policy. When I think about this, it makes sense. For what Paul Anastas, Terry Collins, John Warner, and their colleagues are essentially trying to do is sell the petrochemical giants of this world not just a whole new product line but a whole new way of conducting business and measuring product success. And what makes this sales job especially challenging is that for green chemistry to be truly successful, it will mean a shift away from chemical products that, by all historical corporate standards, have performed admirably and generated profits for years—in some cases for generations.



“It’s in our best interest to provide leadership toward health and sustainability,” says Brad Thompson, president of the plywood and veneer division of Columbia Forest Products, at the conference’s opening plenary session. According to the company’s calculations, converting all the plants of Columbia Forest Products, the largest plywood manufacturer in North America, from a layered plywood that uses a urea-formaldehyde-based adhesive to one with a soy-based adhesive would reduce operating emissions by 50 to 90 percent. It would also save the company money. “Who would have thought that one could change the hardwood-plywood industry with an adhesive,” says Thompson. This revolutionary soy-based adhesive was developed by Dr. Kaichang Li, a research scientist at Oregon State University, in collaboration with the Hercules Company, a chemical manufacturer. The adhesive was considered innovative and environmentally friendly enough to win a 2007 Presidential Green Chemistry Challenge Award. The soy-flour-based adhesive, according to Dr. Li, was developed to mimic the workings of the liquid protein that mussels use to attach themselves to rocks. Li analyzed the proteins in both mussel enzymes and soy and developed a process that would give soy proteins the mussel proteins’ adhesive properties. This product, called PureBond, is now being produced exclusively by Hercules and used by Columbia Forest Products. “From a business perspective it’s always in our best interest to have healthy customers,” says Thompson. “Green chemistry is just plain good business.” The soy-based adhesive, Thompson points out, now costs less than the formaldehyde adhesive. What he doesn’t mention, I think to myself as I listen to the presentation, are the considerable environmental and health benefits of formally discontinuing use of formaldehyde adhesives in plywood and veneer products altogether and legally barring these wood products in the United States as other countries have done.
It was, after all, the embalming-fluid levels of formaldehyde, a probable carcinogen, emanating from paneling in the notorious New Orleans FEMA mobile homes and travel trailers that prompted lawsuits from families who had been exposed. Why is formaldehyde still in use anywhere, I wonder, if this effective, cost-efficient, and nontoxic alternative that Thompson is using is available? The soy being used for the commercial production of Dr. Li’s mussel-mimicking adhesive is supplied by Cargill, the agribusiness giant. At the conference, Ron Christiansen, Cargill’s corporate vice president and chief technology officer, tells us that his company’s business “starts at the farm, not at the oil field.” Cargill, he says, has a great interest in “adding value to what farmers grow,” which means looking for new markets, products, and applications. Among these are plastics. Cargill has apparently been supplying the raw material for corn-based plastics since the late 1980s and is now joint owner, with the Japanese company Teijin, of the biopolymer manufacturer NatureWorks, which manufactures plastics based on polylactic acid (PLA) made from dextrose, or corn sugar.3 This corn-based and biodegradable polymer can now be found as beverage cups, food containers, and fibers that go into both paper and textile products. The versatile polymer is also being used to package batteries and to make clear plastic containers for takeout food, egg cartons, coffee bags, beverage bottles, and what the company calls “tableware.” NatureWorks has customers all around the world, including some large producers and chains like Del Monte Fresh, Subway, 7-Eleven, and Green Mountain Coffee Roasters. There are currently dozens of textile manufacturers listed on NatureWorks’s website as suppliers of products using the PLA-based fibers sold under the brand name Ingeo. In addition to these corn-based polymers, Cargill is also making bio-based flexible foam products for use in upholstery, bedding, flooring, and car seats, to mention but a few applications. These plastics can be recycled and, according to NatureWorks, reusable PLA can be recovered from used PLA plastics.
While they are biodegradable, these plastics won’t simply dissolve if left out in the rain and apparently do require some real heat and humidity to break down in composting—more than is probably generated by the average backyard compost pile in northern climates.


Material Consequences

167

“The early attempts at bio-based flexible foam literally stunk,” says Christiansen. “They smelled bad, like burnt popcorn.” (This was true of early soy-based inks as well, which gave off an odor of faintly rancid cooking oil.) But the odor issue was resolved and the result, he claims, is a product that performs favorably compared to petroleum-based products. “Only by creating products that work as well as those based on petroleum will green chemistry succeed,” says Christiansen. And, he notes, “profitability needs to be there if we’re going to be there too.” Many companies have begun to make additional kinds of bio-based flexible foams—the materials that go into flooring tiles and floor coverings, carpet backing, insulation, upholstery fabrics and padding, and the kinds of moldable plastics used in vehicles, to name just a few examples. What gives these materials their distinctive and varying flexibility are air cells. If you look at a cross-section of any of these materials you’ll find tiny—or not so tiny, depending on the foam—cells (as you’d see in a slice of celery or tomato skin) stacked like bricks. The plant-based material for these flexible foam—and other plastic—products may be corn, canola, sunflower seeds, soy, or another bean (castor beans, for example). These engineered plant materials are also being made into rigid plastics, fibers, detergents, and the emulsions that go into cosmetics and personal care products—what DuPont calls “functional fluids.” Cargill, Dow, Archer Daniels Midland, DuPont, BASF, and other big chemical and agribusiness companies are all involved, as are many smaller companies along with manufacturers of products into which these foams and fluids are incorporated.
The list of companies designing and making bio-based plastics is now so long it’s not feasible to name them all, but the point to take away is that nonpetroleum plastics have now entered the product mainstream and are no longer aimed solely at stereotypically eco-conscious consumers. Virtually all the major chemical and pharmaceutical manufacturers say they have now initiated lines of synthetics based on renewable (nonpetroleum) materials. Dow, for example, has announced an antifreeze for boats and recreational vehicles based on food-grade propylene glycol (a
petroleum product, but one with very low toxicity) and plant-based rather than petroleum-based ingredients that will be sold at Wal-Mart. One of the big hurdles to be overcome in the “greening” of the world of flexible foam is the process that turns a liquid oil—vegetable or otherwise—into the final material. The agents and companies that make this happen are, in industry lingo, called foamers. On its own, no oil—soy, corn, canola, or any other—will magically puff up and create a springy, flexible foam. To achieve the desired cellular structure, a foaming agent is required in addition to the base material. The precise combination of material plus foaming agent plus process, which involves disassembling and reassembling the vegetable oil molecules, is generally proprietary to each chemical manufacturer. Historically, isocyanates, hydrochlorofluorocarbons, and chlorofluorocarbons have been used along with other petroleum derivatives. As HCFCs and CFCs were phased out because of their ozone-depletion properties, isocyanates and other petrochemical foaming agents have dominated the market. Isocyanates come in many formulations. Among those used in foaming applications are toluene diisocyanate and methyl isocyanate. Both are serious respiratory toxicants and skin irritants, and both are suspected carcinogens. Repeated exposure can lead to asthma-like attacks, pneumonia symptoms, and other breathing problems and in some instances can cause more serious illness or death. In fact, the chemical involved in the deadly 1984 industrial accident in Bhopal, India, was methyl isocyanate.4 Methyl isocyanate is also the chemical involved in the deadly August 2008 explosion at the Bayer CropScience plant in West Virginia, a plant with a history of chemical leaks. The current crop of bio-based flexible foam products is being made with a range of foaming agents, which vary by final product application. Some are being made with carbon dioxide, nitrogen, and water-based agents.
But others are manufactured using well-known high-hazard chemicals, including isocyanates and other petrochemicals. The dilemma of “greening” the entire production process, however, is not confined to flexible foams. Currently, there are hazardous chemicals—reagents and
solvents—being used to process any number of renewable, bio-based materials. Pharmaceuticals are among these products, yet they are another area of chemical manufacturing in which green chemistry is beginning to make a few inroads. The resources that currently go into making pharmaceuticals, and the waste created in manufacturing them, far exceed the physical size and quantity of the finished product (and thereby contribute to high drug costs), according to Joe Fortunak, another presenter at the Green Chemistry conference. “Big pharma’s job,” says Fortunak, a professor of pharmaceutical sciences at Howard University who also works with the University of Alabama’s Center for Green Manufacturing, “is to create new medicines. And to make what’s essential as accessible as possible while making it sustainable. Currently,” he says, “about 1 billion people in the world have real access to medicine. Roughly 5.5 billion of the world’s 6.5 billion people have inadequate access to medicine.” The huge discrepancy stems in part, he suggests, not only from how pharmaceuticals are made but also from where they are made. In many African countries, for example, it’s too expensive to buy drugs from big pharma: “Sustainable implies equitable access to the fruits of technology. This is why green chemistry has a tremendous opportunity to be accepted in Africa and other places where manufacturing capacity is being built for the first time.” I will hear this from other green chemistry advocates as well—that interest in green chemistry curricula is currently much higher in what are traditionally considered developing countries than it is yet in the United States. Fortunak is here to talk about the potential of a promising plant-based antimalarial drug. This drug and the treatment associated with it are known as artemisinin combination therapy (artemisinin on its own does not work as an antimalarial drug), or ACT.
The base ingredient of this treatment, artemisinin, can be extracted from a plant called Artemisia annua, also known as sweet wormwood. The process of extracting artemisinin from plants in a pharmaceutically useful form is slow, so this compound is also now being produced synthetically. What typically
happens is that the raw material is exported for processing and manufacture and the final product—the antimalarial drug—is sold back into the low-income countries where the antimalarials are needed, usually at a premium price. What should happen instead, Fortunak argues, is for the drug to be manufactured in the countries where it will be used. The economic, environmental, and social benefits are many and obvious. One of the goals, says Fortunak, is to eventually simplify synthesis of the drug. “The simpler the synthesis, the smaller the investment [in training, infrastructure, materials], which increases the capacity to expand manufacturing,” and to locate production close to where it’s most needed. Green chemistry could go a long way toward achieving both goals. One of the particular steps in the typical pharmaceutical production process most in need of both simplification and a green chemistry transformation—and which is currently a high-cost step both financially and environmentally—is the use of non-water-based solvents and reagents, compounds that are usually petrochemical derivatives, highly toxic, and expensive. Efforts are underway to use forms of carbon dioxide, titanium dioxides, and other oxygen- and water-based compounds to produce desired results, but this part of the metaphorical hazardous and petrochemical aircraft carrier has yet to make a full turn toward green chemistry.

✣✣✣ Two other areas where green chemistry researchers are hard at work— cosmetics and electronics—may seem poles apart, but they have some surprising similarities—as illustrated by the work of Amy Cannon, an energetic researcher with the Warner Babcock Institute for Green Chemistry and cofounder and executive director of its associated educational nonprofit, the BeyondBenign Foundation. Cannon’s work focuses on electronics, specifically formulating environmentally benign chemicals for use in semiconductor and solar-cell production. Development of “green” chemicals for cosmetics, however, has emerged as a sideline thanks to the curious gallery of overlapping polymer applications—and the musings of inquisitive graduate students.

Cannon holds the world’s first PhD in green chemistry, which she earned at the University of Massachusetts–Lowell where she worked with John Warner, to whom she is now married. When I first met Cannon in the fall of 2006, I immediately thought how different my college science education might have been if molecular structure had been explained to me by someone with her sunny smile, ebullient enthusiasm, and ability to describe complex chemical materials and processes in easy-to-visualize contextual and narrative terms. While her manner is easy, Cannon’s chemistry is serious. She has worked as a consultant for Rohm and Haas, a company that makes specialty chemicals for the electronics industry, and as an analytical chemist for the Gillette Company. She’s also taught green chemistry at the University of Massachusetts and now, in addition to her work with Warner Babcock, coordinates the new green chemistry program at Cambridge College in Lawrence, Massachusetts. “There are,” says Cannon, “amazing similarities between the materials used in electronics and cosmetics.” Both involve compounds that create flexible, waterproof films and coatings. Historically, the chemicals used to create materials with these properties have been hazardous. Semiconductor photoresists have often involved the use of the perfluorinated chemicals PFOS and PFOA, while many cosmetics have been formulated with polymers that use phthalates to achieve the flexible films that make products like nail polish and mascara adhere to and bend with curved surfaces. Instead of relying on materials with adverse environmental and health impacts to create these properties, Cannon asks: “Can we design a better molecule to begin with?” “Can we build in the desired function [to the molecule we’re designing] but decrease the number of overall molecules we need and so cut down on waste and process steps?” she asks. This idea is at the heart of green chemistry.
Instead of forcing molecules to do something they wouldn’t on their own, green chemistry tries to build function into the new molecule’s design, Cannon explains. The chemical films typically used as photoresist materials in semiconductor production are designed to work on a substrate—the surface onto which the circuitry pattern is etched—silicon wafers, for example. After a
pattern is carved onto the surface by exposing the photoresist polymer to light, the resist is removed with a solvent, leaving the etched pattern behind. Traditionally, this etching has been done with what Cannon describes as “very active, very toxic, very small molecules” and through a multistep process that uses lots of hazardous solvents. Semiconductor production has, over the years, exposed thousands of workers to toxic chemicals—sometimes with tragic results—and has resulted in millions of pounds of hazardous chemical effluent and emissions. Cannon simply asks: “Can we do this better?” In one of her lectures, Cannon explains how she worked on special technical materials used in laptops and other portable high-tech electronics. “We had all sorts of design criteria for new products,” she says. “Things had to have a certain solubility, a specific melting point, certain mechanical properties—strength and flexibility—a certain refractive index, and a specific surface tension. We’d talk about all of this in great detail, and then I’d ask, ‘What about the toxicity? What about the environmental impact?’ We were just told to look for performance criteria without any reference to toxicity or environmental impact.” Safety, says Cannon, can be thought of as just another property of a material. The materials we design, she says, must perform as well as or better than the alternatives, be environmentally benign, and be more economical than alternatives. One of these goals without the others is not really what she’s after. As Cannon explains it, what green chemistry is trying to do is design molecules that work the way they do in nature. This is how the soy-based adhesive that can replace formaldehyde in plywood came about.
This is also where some of the allure of nanotechnology lies—in creating new molecules by taking advantage of natural attractions and chemical reactions, exploiting their ability to assemble, disassemble, and reassemble without the use of excess force or heat, either applied physically or in the form of chemical reagent. To achieve this in the realm of semiconductor production—a complex, multistep process that now involves dozens of hazardous chemicals—would be revolutionary. “I always wanted to save the world,” Cannon says about her path to green chemistry. “We heard about all the environmental problems,” she
says, “but far less about solutions.” In college her focus in chemistry was on environmental issues and that included an analysis of mercury in fish—mercury pollution being a problem throughout New England, where she grew up. Later, in graduate school, she ended up working on her master’s degree as part of John Warner’s new green chemistry group at the University of Massachusetts–Lowell. And while industry is not always the best place to work if, as Cannon put it, “you have an environmental conscience,” she’s worked as an analytical chemist creating specialty materials for major manufacturing companies, experience she values. Some of the polymers Cannon has been working with as a green chemist were discovered by trying to create a material whose molecular design enables it to, in effect, be multifunctional—and thereby eliminate extraneous steps and extra materials. Among these materials is a polymer based on thymine, a constituent of the nucleotides that make up DNA. Thymine turns out to be even more multipurpose than she and her colleagues initially imagined. Not only can it be used to make semiconductors, it can also be used in a beauty salon. Thymine is a big molecule and what Cannon calls a “preformed polymer.” If you shine light on it, it crosslinks—connects or bonds—and creates a polymer. This is how thymine often works in the human body. If UV light hits thymine molecules, it produces a kink in the DNA that triggers what Cannon calls “a little enzyme repair mechanism.” This reaction sends a signal that says some repair needs to be done. (The inspiration for this choice of molecule, Cannon explains, came from the UV light treatment John Warner underwent for psoriasis in which faulty cells are prompted to repair themselves through light exposure.) She and her colleagues created a thymine compound with molecular sites that are available for polymerization—or crosslinking.
They were able to build a variety of polymers this way, including some that are water soluble. They also discovered that the addition of another enzyme (one found in E. coli) can “unzip” the polymer so that the original thymine building blocks are recoverable and ostensibly available for reuse. The thymine polymer Cannon and her colleagues created turns out to be quite versatile. It can be tinted with food color dye, giving it possibilities for consumer product applications. These polymers can be turned
into time-release systems by controlling how densely polymerized the material is and thus can be used in drug delivery or to encapsulate pharmaceuticals that need to dissolve over a specified time period. The thymine polymer can also be given a coating that resists bacteria. “Bacteria won’t grow on this surface,” says Cannon, eliminating the need for antibacterial additives (many currently used have proven to be persistent pollutants). It can also be used in the application for which it was originally developed, in photoresists for electronics. It also turns out that this polymer can literally make your hair curl. As Cannon tells the story, she and John Warner were having lunch with some students and conversation somehow got around to discussing the viscosity of Dippity Do hair gel. This prompted the students to return to the lab, put the thymine polymer on some hair, curl the strands, and expose them to UV light. The result: the curls stayed put. The experiment found its way into a PowerPoint presentation, which Cannon and Warner happened to mention while meeting with a corporation about a technical commercial application for the thymine cross-polymer. The company—Cannon doesn’t name it—happens to have a personal care products division and the executives were deeply impressed with the prospect of a nontoxic permanent wave product. This might seem like a joke, but as Cannon explains, the current methods used for chemically perming and straightening hair involve highly toxic substances and processes. Cosmetics may seem trivial in the great scheme of things, but they are a widespread source of multiple chemical exposures for millions of people, including pregnant women. Cosmetics, Cannon notes, are not overseen by either the FDA or the EPA, so their overall product safety is actually poorly regulated. A nontoxic perm might not be on a par with curing cancer but it would change the lives of the thousands of women who work with and use these cosmetic products.
Among such products are nail polishes. Historically, nail polish has been formulated with toxic ingredients, among them formaldehyde, toluene, phthalates, and acetone. The result, particularly for beauty salon workers, is substantial chemical exposure. The University of Massachusetts–Lowell’s Toxics Use Reduction Institute (TURI) has been working with
local beauty and nail salons to reduce use of hazardous materials and thereby improve conditions for workers as well as customers. “This seems to be a miracle product,” Cannon says laughingly of the thymine polymer. “So we said, why not try it on our nails.” Because the thymine compound binds easily to food-grade dyes, and because nail polishes are basically photoresists, Cannon and colleagues experimented by attaching a blue-green dye to the polymer. They then painted it on a nail-substitute surface and exposed it to light. The green water-soluble polymer successfully coated the surface, demonstrating that this compound has the potential to create a nontoxic nail polish. Not only is the potential polish nontoxic but it can be removed with an enzyme wash that’s also nontoxic rather than the hazardous acetone and other volatile organic solvents typically used to remove nail polish. And since the polymer takes dye and works on hair, why not try developing a nontoxic hair dye, suggests Cannon. “Although you may not want blue hair,” she laughs. Cannon and her group are working to make the polymer work at light wavelengths that are safer for people than those used in the initial experiments, something that would extend the polymer’s application. They are also looking into using a bio-based, biodegradable substrate, including one based on potatoes, to “get away from the politics of corn,” says Cannon. For Cannon’s lab, potatoes also have the advantage of being relatively local—initiatives are now underway in Maine to develop polylactic acid using locally grown potatoes. Ultimately, what makes this thymine polymer so innovative is that rather than offering a drop-in substitution for an objectionable ingredient, it in effect rethinks the entire concept of a functional material by linking synthesis and function.
Rather than relying on additives to make a material perform the desired function, the material itself can be designed to do everything needed, and in this case it can do so without being toxic or persistent. Yet because nearly everything currently on cosmetics counters and drugstore shelves, and in electronics factories, relies on the petrochemicals we’ve been using for decades, it may take a while—and more consumer demand—before materials like Amy Cannon’s become the norm.

✣✣✣ To get an idea of how interest in green chemistry might be moving out of the laboratory and into the marketplace, I sat down over coffee in Somerville, Massachusetts, with Mark Rossi, research director of Clean Production Action, a nonprofit that advocates for environmentally preferable, nontoxic products. Because the word “green” is now slapped on just about any product imaginable and there’s no real way to know exactly what it means, Clean Production Action has been working to create what Rossi calls a “green screen”: a way to assess whether a chemical material is environmentally benign and without adverse human health impacts. Astonishingly enough, despite acres’ worth of databases maintained by government agencies, industries, academic sources, and nongovernmental organizations that are available to anyone with access to a computer and an Internet search engine, this information is still frustratingly obscure, incomplete, confusing, and sometimes misleading. The different U.S. agencies that deal with chemical safety issues—among them the EPA, the Centers for Disease Control, OSHA, and NIOSH—frequently offer different toxicity assessments for the same substance. Not only do safety standards (safe exposure levels) differ from agency to agency—and from state to state—but the assessment of health effects can differ as well. Some of these databases are astonishingly out of date, in some cases by decades.5 Different databases under the same agency umbrella sometimes contain different data; some contain few data at all, even about substances that are now well studied. The upshot is that it’s extremely difficult to know which of the many supposedly unbiased government sources offers the most reliable or currently accurate information, leaving the definition of “safe” very much in limbo.
A vast database that is supposed to help provide chemical users—professional rather than casual public users—with information on the thousands of chemicals produced in or imported to the United States at high commercial volume is the High Production Volume Information System (HPVIS). Its current iteration was rolled out in December 2006, just about the same time that the European Union was passing its new chemical safety legislation known as REACH (Registration, Evaluation,
and Authorization of Chemicals). In a number of ways, the debate around what the U.S. HPVIS program does or does not do, and why, is the backstory to the work of Rossi’s group and others advocating for U.S. chemical policy reform. The limited usefulness of the HPVIS program and other current chemical safety databases for improving environmental health also illustrates, I think, why, as long as we continue to address chemical safety on a chemical-by-chemical basis without addressing chemical and materials safety from the molecular design and hazard-removal perspective, we’re unlikely to solve current pollution problems. The details of the HPVIS program—which was created with substantial input from the American Chemistry Council, the American Petroleum Institute, and, at the other end of the interest-group spectrum, the Environmental Defense Fund—are fairly mind-numbing. But the bottom line is that participation in the registry is voluntary and lacks any regulatory or enforcement power. Although the EPA does issue nonbinding “challenges” or calls for information, it relies on information provided by manufacturers (rather than on third-party testing) and it requires no new testing of chemicals if data for the requested environmental and health effects are available.6 One so-called “endpoint” of toxicity not included, however, is endocrine disruption, and as Richard Dennison, senior scientist for the Environmental Defense Fund, has pointed out, the environmental and health effects tests currently in the HPVIS program were developed twenty or more years ago.7 The requirements of the HPVIS program differ notably from those of the European Union’s REACH legislation. While the American program is voluntary, REACH is mandatory. Under REACH, the toxicity data that must be provided on all chemicals produced or used at volumes over 1 metric ton (2,200 pounds) annually include endocrine disruption as one of the testing criteria.
REACH requires manufacturers to prepare substitution plans for substances deemed of very high toxicity concern. It also has a provision to cover what it calls “substances in articles,” chemicals that can be expected to be released from consumer products—synthetic fragrances, for example. (In contrast, some U.S. chemical manufacturers talk about measuring “cradle-to-gate” impacts of their products, meaning
that their assessment ends when the product leaves the factory, in effect ignoring impacts—possibly harmful—that may arise during normal product use, disposal, or recycling.) Chemical products not registered with the REACH program, furthermore, cannot be exported to the EU. As of 2008, REACH was expected to involve registration of about 30,000 chemicals and eventually to result in the ban or restriction of some 1,500 chemicals. The HPVIS program also raises the question of quantity versus quality of information, and of how effectively this program can be used to protect environmental health. “If we wait until a chemical reaches high production volume to trigger this kind of data, it means the substance is already a very important one to the economy,” says Michael Wilson, a research scientist in the Center for Occupational and Environmental Health at the University of California Berkeley’s School of Public Health.8 This also means that in addition to substantial financial investments in such materials, there will have been ample opportunity for people to be exposed to this substance either during manufacturing or at any other point in the product’s life. But even the amount of documentation seems subject to debate, as evidenced by the statement from the American Chemistry Council in response to passage of REACH: “The ACC believes that the European Commission’s . . . [REACH] proposal seeks considerably more information than required by regulatory authorities to assure that chemicals are produced and used as safely as possible. REACH is unworkable, impractical, and costly—and will not provide the health and environmental benefits envisioned by its creators.”9 REACH does represent a significant step toward requiring manufacturers to provide toxicity information on their products and ensure their safety prior to commercialization.
It also has the regulatory power to restrict them under a system that puts the burden of proving safety on the manufacturer, rather than placing the burden of proving harm on the consumer, as the United States’ Toxic Substances Control Act (TSCA) does. “From where we sit, the public is increasingly questioning the safety of chemicals in the products they use and purchase,” said Gary Gulka of the Vermont Department of Environmental Conservation at that 2006
EPA meeting, describing increasing state interest in environmentally preferable purchasing and green chemistry. This is where Mark Rossi’s Clean Production Action group comes in. “Right now,” says Rossi, “data cut both ways for the environmental community. We need data on both existing chemicals and safer substitutes, but the lack of either can be an argument for continuing with the use of a known substance.” And, Rossi adds, “Just because it’s ‘natural’ doesn’t mean it’s not toxic.”

✣✣✣ Our current approach to environmental health protection is not holistic and still centers on what can be achieved with individual chemical bans or restrictions. This focus continues to be typical of regulations at the state, local, and federal levels in the United States. This holds true for the 2008 Consumer Product Safety Improvement Act, which while covering a range of products—including children’s products, in which it restricts lead and half a dozen phthalates—does so by focusing on specific chemicals rather than classes of comparable chemicals and includes no programmatic provisions for developing alternative products. A few exceptions to this approach have recently emerged, and green chemistry advocates hope these will prompt more systemic improvements in materials safety and environmental health protection. Among the most notable of these exceptions was a program announced in May 2007 by the state of California. Called the “Green Chemistry Initiative,” the program aims to move beyond chemical-by-chemical restrictions and shift to a more proactive pollution prevention policy by solving toxicity problems at the design stage and by replacing hazardous substances currently in use with safer alternatives. Policy advocates at the nonprofit organization Environment California are generally supportive of the initiative’s aims and first steps, but point out that the program currently lacks sufficient regulatory provisions and that to be effective it needs information about chemical products that manufacturers are not yet required to disclose. A similarly notable exception is the Michigan Green Chemistry Program created by executive order by Governor Jennifer Granholm in 2006. Its aim is to coordinate research and
development of new environmentally benign chemicals and products within Michigan, and the program is seen as a way to help revitalize the state’s manufacturing industries. Although not explicitly a green chemistry bill, in January 2008, the Massachusetts state senate passed “An Act for a Healthy Massachusetts: Safer Alternatives to Toxic Chemicals.” Also known simply as the Safer Alternatives bill, it would set up a program to identify chemicals that pose a “significant risk to human health or the environment” and to make recommendations for safe alternatives. As of April 2009, the bill had yet to pass the legislature in full. Other states, including Connecticut, Maine, Illinois, Michigan, Vermont, Alaska, and New York, have also been considering legislation that goes beyond chemical-by-chemical restrictions, whether by promoting green chemistry efforts, advancing environmentally preferable purchasing programs, or targeting toxics in children’s products. A great many more states have also been working on bills that would restrict the use of certain specific chemicals such as bisphenol A, certain brominated flame retardants, phthalates, and perfluorinated chemicals, along with various pesticides, lead, and mercury. Most of the legislation that would bar use of specific chemicals, however, has been opposed by chemical industries, as have the more comprehensive bills. Ironically, some of the chemical manufacturing companies that have spent money to lobby against these policies are at the same time investing substantially in efforts to develop environmentally preferable alternatives to their own hazardous products. Despite the piecemeal and often contentious nature of regulation and legislation, many manufacturing companies whose products depend on synthetic chemicals are shifting their chemical choices to nontoxics well ahead of—and with a speed that exceeds—passage of any regulation or legislation, notes Lara Sutherland of the nonprofit Health Care Without Harm.
Such efforts are going on worldwide in commercial, industrial, and academic research, with applications that range from energy production to textiles and packaging. While discontinuing the use of one material in favor of another does not necessarily constitute green chemistry, the need to do so—for example, when prompted by restriction of a hazardous material—may well prompt innovation. Being able to offer a new product when a new regulation goes into effect, especially one that will work as well as or better than the existing one, rather than an empty shelf, is the challenge—and why manufacturers are so keen to stay well ahead of the regulatory curve. What’s been happening with polyvinyl chloride (PVC) is a case in point. PVC contains phthalates—the plasticizers that are coming under increasing scrutiny and regulation because of their adverse health impacts—and when burned, as it will be if disposed of in municipal incinerators or many rudimentary landfills or dumps, PVC releases persistent and extremely toxic dioxins and furans. So while PVC is not legally restricted anywhere, many manufacturers, alert to growing public and scientific concern, are moving to other materials. There already are other packaging materials that perform comparably to PVC, and many companies have also begun to shift away from PVC as a structural material. Electronics companies including HP, Dell, and Lenovo have set 2009 as the date by which PVC will be eliminated from their packaging and products. Apple has already discontinued PVC packaging and other applications, with the exception of some external wiring, and Microsoft stopped using PVC packaging in 2005. It’s important to note that none of these companies actually makes PVC, packaging materials, or wire cables, so this kind of product redesign involves people up and down the supply chain. Nike also has a PVC phaseout underway. Meanwhile, on the retail side of the equation, Wal-Mart, Target, Sears, Kmart, and Toys “R” Us, along with toy manufacturers Mattel and Hasbro, among others, are discontinuing sales of toys made with PVC.10 Sony has gone even further in altering its plastics use. Having eliminated PVC from packaging and product cases, it has begun to use biobased plastics for its electronics.
For example, the company is already using castor-oil-based polymers in cameras, DVD players, and its Walkman MP3 players, among other products that include a “smart” card (notable simply because there are so many of these pieces of plastic about). When it comes to finding alternatives to PVC in applications other than packaging—such as in screen-printing, a technique used to apply team insignia to sports gear—eliminating PVC means not just finding an alternative material with comparable performance, but redesigning the entire process. As a Nike executive explained to me, no team wants to be the one on the field wearing shirts with their name peeling off. Eric Beckman, professor of engineering at the University of Pittsburgh, characterizes this performance problem by noting that people want “things that are both stable and unstable at the same time.” We want the durability provided by a material like PVC but, for environmental reasons, also want materials that will biodegrade. Nike has also been rethinking its use of solvents and adhesives. Those in longest, widest use often involve volatile organic compounds (VOCs), typically substances based on benzene and chlorine chemistry. Phasing out these toxics is not a straightforward substitution. The switch to aqueous, non-VOC-based glues and solvents has in some cases meant changing the whole production process. With one product, I was told, the old VOC-based adhesive dried quickly. Using a water-based adhesive meant increasing the time the pieces had to be physically held together until dry, increasing the overall production time. “Where you were making a hundred pieces in a set period of time, you might now be making seventy,” I was told. To meet the previous production rate, other parts of the process or product would also have to be redesigned. But such changes are achievable. For example, by 2004 Nike was using less than 5 percent of the VOCs per pair of shoes that it had used in 1995. Similarly, the majority of the rubber Nike now uses in its footwear is made with only 4 percent of the chemicals it had identified as being used in these materials prior to 1994.11 “We are at an inflection point of unprecedented historical development,” Mark Wysong, CEO of Dolphin Safe Source, a consulting company that works with businesses—including Fortune 500 companies—to reduce their use of toxic chemicals, said optimistically in 2007.
A survey of 4,000 professionals had found that about one-third considered reduction of toxic chemicals among the three most important environmental issues, ranking behind only global warming and energy conservation. “Today there are more than 100,000 such chemicals used in more than 3 million products. There were almost none in the early 1930s,” notes Wysong. He also points out that many separate chemicals are often used where far fewer would work as well. This redundancy means that it may actually be easier than a company thinks to reduce overall chemical use. In recent years, says Wysong, General Motors and Delta Airlines each reported about a 30 percent reduction in their chemical use, while Wal-Mart had eliminated about thirty hazardous chemicals from its supply chain. Wysong also notes that to assess its chemical use a company often needs to change perspective. As an example, he described a company that uses one chemical glass cleaner. It’s only one chemical, so the company’s overall chemical exposure looks small. But the cleaner is used every day by the same two janitors; for those workers, the exposure is huge.

✣✣✣

“We have one goal,” says Earl Brown, a lawyer with the American Center for International Labor Solidarity, “and that is to drive up the cost of human life and human labor in Asia, where it is simply far too cheap.” I’m sitting in a room with representatives of organizations based in Thailand, India, China, the Philippines, Taiwan, Vietnam, California, Indonesia, Nepal, Bangladesh, Cambodia, Japan, Korea, Malaysia, France, Hong Kong, Australia, and Canada. Their organizations work for the people whom the world regularly forgets—the people behind the pallet loads and shipping containers filled with the cheap goods we all gobble up. These are the people at the far end of the supply chain whose lives could be vastly improved if green chemistry were the universal norm. We are shown pictures of men in Indonesia handling powdered asbestos, putting it into name-brand motorcycle brake pads that will be sold all around the world. These men work without protective gloves or masks. In mines, quarries, and other job sites around Asia, workers continue to breathe silica dust and suffer from lung disease. I meet colleagues of women who make cadmium-based batteries. Some of these women have suffered miscarriages, while others have had babies with birth defects and babies who are always ill. A Malaysian woman who works in the electronics industry describes how women there in their thirties are being given voluntary retirement packages, in her view to encourage the already high worker turnover rate. Many of her coworkers over the years have suffered chronic illnesses, but it’s been impossible to link them definitively to anything in the factories because of the high turnover rate and lack of monitoring. We also hear from accident victims and their colleagues struggling to receive adequate compensation and medical coverage for their injuries. These stories are not “breaking news” but the lingering realities of the world’s far-flung global supply chain. They are also a vivid illustration of how eliminating hazard eliminates risk and avoids a litany of costly adverse impacts. Some of this work under hazardous labor conditions is going on not far from where this group of labor and environmental activists has gathered in the Kowloon district of Hong Kong. To get a glimpse, I take an hour’s high-speed ferry ride to Shekou, the eastern port for Shenzhen, China, and now the third largest port in Asia. It’s late August, oppressively hot, bright, and humid. The ferry zips away from the skyscrapers lining Hong Kong’s shoreline, past steep green hills characteristic of the South China Sea, and past the new bridges with spans that look like pale piano wires or a professional egg-slicer. The Outlying Islands stretch out to the east in the hot, white-blue light. The ship traffic across the open harbor waters of the Pearl River Delta, some months before the worldwide recession sets in, looks like rush hour at Times Square. I’ve never seen so much working boat traffic. Barges piled high with semi-truck-sized, cobalt blue and barn red shipping containers emblazoned with the names of freight giants—Hanjin, Maersk, China Shipping, Zealand—cross paths with cranes, dredges, and ferries. In between are smaller boats that bring the word scow to mind, their wooden hulls rimmed with tire bumpers underneath three-sided cabins with shallow curved black roofs. On the shoreline of one harbor island, planted on a pile of big beige riprap rocks in the broiling sun, is someone under an umbrella with a fishing line.
This is one of the world’s most notoriously polluted river mouths. It’s here that effluent from the industries of Guangdong Province pools. The bulk of these factories are now farther inland, near Guangzhou and points north. But significant levels of perfluorinated chemicals, brominated flame retardants, bisphenol A, PCBs, and other dioxin-like chemicals have been found here, as well as in people living and working throughout the region. Of course none of this is visible on the choppy surface of the celadon-green water or in the hazy air. But the acres of city-block-high piles of shipping containers attest to the volume of manufacturing. Edward Chan of Greenpeace China takes me on a tour of some Shenzhen neighborhoods filled with small factories. The city is a maze of new high-rises and dispiriting old industrial buildings and adjacent dormitory-style apartment blocks. The few people out in the midday sun carry umbrellas for shade, even while cycling. We drive beneath the smokestacks of an asphalt plant and a glass factory. Black plumes of smoke dissipate into the haze. We veer away from the sleek buildings bearing the insignia of world-famous electronics brand names, away from clean plate glass windows and shopping plazas where office workers in white shirts and men in sweaty work clothes line up at ATMs. We head for gray warrens of cement-block buildings where the residences are distinguished from manufacturing facilities by laundry. Balconies are draped with blue and orange smocks, rows of limp gray T-shirts, and faded undershorts. A large portion of Shenzhen’s population is not officially resident in the city, Edward tells me. They live here temporarily to work, hence the dormitory-style housing. Edward and I get out and walk around underneath belching exhaust fans, dripping drainpipes, rattling metal vents, and open work areas, some barred by gates. “What are they making here?” I ask as we pass whirring machinery and look into dim interiors where workers in smocks and coveralls turn to eye us warily. “Plastic parts. Plastics. Pens and pencils. Boxes and envelopes. Plastics,” Edward tells me. Some of the exhaust fans and hose pipes face the balconies where laundry is drying only a few feet away. There are banners everywhere with big bright characters.
Here and there I see “ISO 14000” and “RoHS.” I ask Edward to translate. The signs proclaim compliance with environmental standards and honor the health and safety of the workers. The heat- and soot-laden air compares only to a New York City subway platform on the hottest of summer days. For a break from the industrial exhaust, we walk along a city river. I look down at the water and hope I’m seeing the dirtiest river I’ll ever see. The surface is mottled black and oil-slick blue. There is raw sewage. There are dead and bloated rodents. There is food waste and plastic debris. And there is grimy water pouring from some sort of outfall drainage. Edward urges me to take a video of a particularly foul stream of effluent. Our driver, who’s wearing wraparound sunglasses and a black Nike T-shirt, and whose spotless black sedan suggests a clientele other than freelance journalists and environmental activists, seems perplexed by our tour route. At the station where we catch the bus back to Kowloon is a pair of trash cans—one yellow, one green—that say in English (and presumably Cantonese), “Love our homeland” and “Environmental protection.” That evening in Hong Kong, I take a walk down Nathan Road past dozens of electronics stores and shops filled with watches, trendy clothes, and accessories. To cool off I step into an air-conditioned department store where I find myself surrounded by costume jewelry and children’s toys, and I realize I’ll never look at small plastic objects the same way again. Looking at all this merchandise and thinking about all those hundreds of shipping containers, I think about what a daunting challenge it is to shift our current materials stream. Putting the merits of overflowing toy boxes and ephemeral accessories aside, I also think how people’s lives would change if making such things did not mean fouling rivers and damaging cells. Virtually every major company now has some sort of sustainability initiative in place. How deep these programs go, whether they address core products or stop at the low-hanging fruit, is a valid question. “Every step in this direction is a good thing,” says Brendan Condit of the Organic Exchange, an organization working to promote the use of organic cotton as a way to decrease pesticide impacts worldwide.
“You do what you can do, what you can sell the board on, and what you can make money on,” he tells me, recounting his organization’s efforts.12 I also think back to what Paul Anastas told me: “Perfect should not be the enemy of the excellent.” Many manufacturing companies—the cleaning-product giant SC Johnson, for example—have chosen to work along the lines of what Mark Rossi and colleagues at Clean Production Action call “green screens.” These programs identify priorities, both for chemicals targeted for elimination or reduction and for guiding principles. A number of cleaning-product manufacturers—including SC Johnson—are also working with the U.S. EPA’s Design for the Environment program, which includes templates for screening two major categories of such product ingredients: surfactants (generally the agents that make things sudsy, soapy, foamy, and emollient) and solvents. This program is being used to develop a list of environmentally benign, nontoxic ingredients from which manufacturers can choose. According to the EPA, about 300 manufacturers are now participating in the program. SC Johnson says that moving to environmentally benign ingredients and products has not cost it money or business. Sony calls its hazardous chemical priority list, which focuses on chemicals used in both exterior and interior components along with packaging, a “green book.” As of April 2008, Sony’s green book included 18,000 monitored materials. In order to keep track of these materials, Sony must communicate this information and verify the use of these substances with its suppliers, of which there are at least 4,000. Adopting the green chemistry principle of “atom economy”—using fewer different substances to achieve functional ends, with the goal of zero waste and byproducts—would clearly simplify what a company like Sony has to do to monitor its materials. Another example of “green screening” programs comes from the largest manufacturer of modular carpeting, InterfaceFLOR, which has what it calls a “Mission Zero” initiative. This program targets waste, fibers, dyes, adhesives, and textile additives and sets the goal of eliminating any associated negative environmental impacts by 2020.
Even to begin this process the company has had to “completely redesign processes and materials,” says Connie Hersh, InterfaceFLOR’s director of sustainable research chemicals and processes. In addition to setting out its own goals and designs, the company has had to collaborate with suppliers and obtain a complete disclosure of all product ingredients, something not typical of business. In short, says Hersh, InterfaceFLOR discovered that to complete its Mission Zero, the company “can’t make anything the way we’re making it now or with what we’re making it now.” Number 7 on the company’s list of what it calls the “seven fronts of sustainability” is “redesigning commerce.” An important source of support for environmentally benign product lines comes from consortiums of large-volume purchasers that are pooling both buying power and information resources for these products. Health care institutions, for example, including many major hospital groups—some working through the organization Health Care Without Harm—are targeting fabrics and plastics in this way. Clearly, there is substantial demand from both institutional and individual consumers, but the definition of what’s toxic and what’s not keeps changing. Transparency, too, is still a huge issue. From the outside, it is still hard for a consumer to make a clear assessment of what’s actually in many manufactured and synthetic chemical products. What labels that proclaim “clean” or “nontoxic” really mean—or what an ingredient listed as “cross-polymer” or “fragrance” contains—is no better defined than the claims of breakfast cereals that say “natural” or “all natural.” Chemical regulations in the United States are designed to protect proprietary information, a stumbling block that must be overcome to adequately protect environmental and human health. This was illustrated vividly when I searched the documents the Food and Drug Administration posted in support of its 2008 assessment of bisphenol A. Included in these files were the results of tests in which various polycarbonate food and beverage containers were exposed to different substances to measure bisphenol A leaching. Not only were the names of the manufacturers of these containers redacted, but in several tests the name of the substance that caused the greatest leaching was blacked out. Clearly there has to be a way to protect public health, and to avoid scaring people with inconclusive test results, without undermining commerce.
As the green chemistry advocates would point out, if we eliminate the hazard to begin with—designing materials so that we don’t have to quibble and worry endlessly about how much of a biologically active and toxic substance we’re willing to expose our children and ourselves to—and agree that any such hazards are unacceptable, the issue of redacted files will go away along with the health risks. Right now green chemistry and our evolution toward environmentally benign products are in an awkward dance between what we know we don’t want and what we would like. The list of chemicals of concern for environmental and health hazards continues to grow, as does our knowledge about the impacts of those identified as hazardous and toxic—and we have barely begun to investigate the effects of mixtures of toxic substances, the type of exposure that’s a reality for virtually everyone alive today. “It’s hard to say we’ve gotten it all wrong,” says Terry Collins, “but the really important generation of green chemists is the next one.” Chemical manufacturers and environmental activists alike can agree that the ideal materials are those that biodegrade into environmentally benign constituents; that do not generate hazardous byproducts during the production process, useful product life, or recycling and disposal; and that do not cause adverse impacts to human health, wildlife, or the environment at any point in their lifecycle. Everyone can also agree that eliminating hazardous waste and adverse environmental and health impacts is ultimately good for business. These goals were summed up in an anecdote recounted by Catherine Hunt, president of the American Chemical Society, who recalled asking students at an international green chemistry conference for their shortest definition of sustainability.13 Their answer: “It’s chemistry that allows us to thrive today without screwing up tomorrow.”


The Island Press Toxics Reader  

Collected chapters from Island Press books on environmental health.
