HURJ Volume 05 - Spring 2006



“For every complex problem, there is a solution that is simple, neat and wrong.” - H.L. Mencken


Table of Contents

2006 focus: bioethics
16  Power to the Patient
18  Patenting Life
20  The Leadership Crisis of Hurricane Katrina
23  Cloning: A Religious Experience
27  The Injustice of Newborn Screening
29  Do-Not-Resuscitate Orders
33  New Approaches to the Stem Cell Controversy
36  Science and the White House
38  What is a Person? A Multi-faceted Look at Personhood (Jason Liebowitz)

Focus contributors: Adam Canver, Ivana Deyrup, Jeffery Lin, Bilal Farooqi, Defne Arslan, Manuel J. Datiles IV, Gandhi Vallarapu, Bryce Olenczak


research spotlights: Origins of Stem Cell Research ... Interview with Dr. John J. Boland ... The History of Genetics ... Procedural Memory and Sleep ... The New Influences in Old Cairo ... The Future of Bioethics ... Pearl Harbor: The Public Health Perspective ... Selecting Subjects for Clinical Studies ... A High-Fat Diet to Treat Seizures

research reports: engineering
42  Exploiting The Body's Army: Generating Antibodies to Study Cardiac Muscle Development
45  Influence of Electric Fields on the Interaction of Bubbles and Fluid Surfaces
Contributors: Jon Edward, Chris Kovalchick

research reports: sciences
49  The Future of Integrated Circuits (Laura Doyle)
52  Nanometer-Thick Dielectric Films Deposited by Electron Cyclotron Resonance (ECR) (Matthew J. Smith, Ling Xie, Erli Chen)

research reports: humanities
54  Outsourcing American Jobs? Addressing Economic Globalization
58  The Ethical Imperative of Retelling The Sinking of the Wilhelm Gustloff and Generational Silence
62  Comparison on the Crafting of Post-WWII Healthcare Legislation in the United States, Great Britain and Canada
Contributors: Amanda Leese, Sarah Adee, Jyoti Brar


Letter from the editors


On behalf of the staff, we would like to welcome you to the fifth issue of the Hopkins Undergraduate Research Journal (HURJ). HURJ is an entirely student-directed enterprise, written, edited and designed by undergraduates at Hopkins. HURJ was formed in 2001 with the intention of keeping the community informed of the university's intellectual achievements and promoting undergraduate research at Hopkins. Since that time, HURJ has improved the quality and quantity of the journal's content each year and, in the process, has created a forum for the discussion of projects and pursuits in various disciplines. The journal has grown from chronicling only original research in the Spring 2002 issue to having three sections – original research, student-written focus pieces, and faculty and student spotlights – in its last incarnation. This growth has continued into this year, as HURJ generated enough interest to become a bi-annual journal and produce a first-ever Fall issue. Indeed, the HURJ staff this year is the most talented yet, having survived a difficult application process necessitated by the tremendous interest in the journal. This new expansion marks the continued maturation of HURJ as a unique and valuable resource for undergraduates.

In this issue, HURJ explores the tremendous diversity of research and intellectual pursuits on this campus. In Spotlights, HURJ chronicles the experiences of several Hopkins professors and students, ranging from stem cell experts to Woodrow Wilson Fellows exploring the effects of Pearl Harbor on Asian-American relations. In keeping with HURJ's desire to represent all academic pursuits, the research section presents a diverse array of topics representing work conducted in the humanities, sciences and engineering. Indeed, it is a credit to the student body at Hopkins that this journal can feature an exposition on economic globalization alongside an article discussing the engineering of dielectric nanofilms.

The focus this year explores various facets of bioethics. The advances in medical science over the past few decades have brought with them several grey areas that warrant discussion, particularly at an institution that frequently finds itself at the forefront of such advances. Articles in this section include a discussion of the bioethics of do-not-resuscitate orders (CPR to revive patients, incidentally, was a procedure developed by a cardiologist at Hopkins), the portrayal of medicine on television and, on a larger scale, the ethics of the government's approach to bioscience and its handling of the Hurricane Katrina disaster.

This issue of HURJ would not have been possible without the dedication and support of the HURJ staff, Hopkins faculty, and HURJ sponsors. We would like to thank the students and professors who have contributed their time by reviewing articles, researching topics and sharing their experiences with the Hopkins community. We also thank the Student Activities Commission and the Office of Student Life for their continued support. Finally, we are indebted to the Digital Media Center for the use of their facilities and equipment and for their undying support.

We welcome your ideas, comments and questions, and we hope this journal serves as encouragement or inspiration to begin your own project and contribute to the intellectual advancement of this university.

Sincerely,

Sravisht Iyer

Priya Puri



HURJ 2005-2006 Editorial Board: Editor-in-Chief of Operations ....................... Sravisht Iyer

Editor-in-Chief of Content ....................... Priya Puri

Editor-in-Chief of Layout ....................... Nik Krumm

Focus Editor ....................... Sadajyot Brar

Science Editor ....................... Eric Cochran

Humanities Editor ....................... Winnie Tsang

Engineering Editor ....................... Preet Grewal

Spotlight Editor ....................... Daria Nikolaeva

Sidebar Editor ....................... Jennie Wang

Layout Editors ....................... Naomi Garland, Bryce Olenczak

Copy Editor ....................... Defne Arslan

Public Relations/Fundraising ....................... Gaytri Patel

Hopkins Undergraduate Research Journal
Johns Hopkins University
Mattin Center, Suite 210
3400 N. Charles St.
Baltimore, MD 21218
hurj@jhu.edu | http://www.jhu.edu/hurj

HURJ Staff: Ishrat Ahmed, Adam Canver, Cristian Chun, Manuel Datiles IV, Bilal Farooqi, Yasmin Husain, Yolanda Lau, Jason Liebowitz, Jeffery Lin, Marwa Mansour, Alexander Mo, Virginia Pearson, Divya Sambandan, Steven Shu, Gandhi Vallarapu



research spotlights Origins of Stem Cell Research Yasmin Husain / HURJ Staff Writer

Stem cell research, along with all its implications for cloning and cancer treatments, is a hot topic in many labs around the world. Dr. Bortvin, a researcher in the Department of Embryology at the Carnegie Institution, studies just this. Bortvin's previous experience comes from completing an MD and an MS in biochemistry at Pirogov's Moscow Medical Institute and then a PhD in genetics at Harvard University. A postdoctoral fellow he worked with at the Whitehead Institute first got Dr. Bortvin interested in the epigenetic programming of development, the origin of germ cells, and the role of developmental pluripotency, the topics he now researches. Using mouse embryos as a model for human embryonic cells, Dr. Bortvin currently studies the origin of germ cells and, in particular, the function of the nuage, a structure found in all germ cells about whose function very little is known. By cloning cells and then performing a nuclear transfer from a mutant (incapable of

supporting life) into a normal cell, it can be determined whether or not the normal cell will be fertilized. Dr. Bortvin states that the "fact that cloning succeeds tells you that [fertilization] can be done." It is known that genetic information in somatic cells is not lost but can in fact be reprogrammed. In his research, Dr. Bortvin analyzes how the developmental clock can be reset and studies the lineage that actually resets it. These mammalian studies of germ cells in early embryogenesis, from when fertilization takes place up to gastrulation, provide key insights into normal development. From this knowledge, the reproductive fitness of society can be examined, and the results will lead to many new conclusions in stem cell biology.

Interview With Dr. John J. Boland Manuel Datiles IV / HURJ Staff Writer HURJ: Dr. Boland, are you still engaged in research at the moment? JB: Not routinely, I retired 2 years ago. I get involved in some research programs now, but am not part of any regular research. HURJ: Out of all the research you’ve done, what topic did your most interesting research focus on, and could you briefly give an overview of the research? JB: The research I’ve done is in the area of water and energy, and environmental issues, usually the economic aspects of them. The most interesting research problems to me usually have been things having

to do with water and sanitation problems in cities, both in the developing world and in industrial countries. HURJ: So how did you get into this research? What interested you about it at the start? JB: Well, before I had any graduate degrees – in fact before I had any degrees at all – I was working for a city in Pennsylvania (Erie, Pennsylvania) and I was superintendent of pumping and purification for the water bureau, so I had firsthand knowledge of urban water supply problems. Later I moved to Maryland, and I was chief of utility operations for Anne Arundel County in Maryland, so I was

responsible then for all the water systems and wastewater systems of the county. Thus, as I finished my undergraduate degree and graduate degrees, it was natural for me to continue to be interested in those fields. I actually started out as an engineer. My undergraduate degree was in engineering and I got as far as an ABD in environmental engineering before I switched fields to economics. The reason I decided to switch was the work I was doing with water sanitation: I started out thinking that the engineering problems were the more interesting problems, but as I worked more and more, I came to realize that the economic problems were the interesting ones; the engineering problems weren't that challenging, while the economic problems were the truly difficult ones. So I switched my own graduate work from engineering to economics, and finally did my dissertation on economics. And since then, I've worked mostly on economic issues.

HURJ: So what areas of environmental engineering are you interested in?

JB: Well, in the areas of water sanitation, I'm really interested in tariff design. This would include, for example, how much to charge for water and sanitation, and how to structure the rates and so forth to meet various objectives. I'm also interested in benefit-cost analysis of water conservation, water reuse, things like that; some of my work also involved forecasting the demand for water, which is primarily an economic problem. So those are some of the things I'm interested in, all in the field of water sanitation. Now in other areas where I work, in environmental things and energy things, it's pretty similar: I do things with tariff design and project evaluation, which means cost-benefit analysis. Mostly the work is applied economics, as applied to these problems.

HURJ: You've been working in this field for decades. What have been the most interesting results of your research?

JB: Well, we've found that most water systems in industrial countries and virtually all water systems in developing countries have tariff structures that are absolutely unsuitable for the applications they were supposed to be designed for! The nature of the tariff systems worldwide, even in industrialized nations, is that essentially the wealthy and middle-class people wind up getting water and sanitation services at a subsidized rate, while the poor wind up getting little or no service and paying a lot for it! This is a result in part of the way the systems are designed, a result of the way they are subsidized, and a result of the way they charge people. And no one can seem to figure out that this is happening! In spite of decades of preaching that we need better tariff design, unfortunately there is very little progress to show for it, to date.

HURJ: HURJ is primarily a research journal for undergraduates to publish their own research. Do you have a message you would like to tell the undergraduate students?

JB: I have one main message for the undergraduate students: when you're deciding what your career is going to be, it's important to pick something that you actually enjoy, because if you enjoy it, it's likely that you're good at it. And if you're good at it, it's likely that you'll be successful at it. You shouldn't just pick a topic that someone tells you that you ought to do. I was fortunate enough to already know what I liked, since I was already working in the water and sanitation field, and I knew I enjoyed it, so I just kept going along with that field. It's tougher for an undergraduate, or a student who hasn't worked, because they're not sure what they enjoy yet, so they have to figure it out. But good luck to all of the undergraduates, and I wish all of them the best.


The History of Genetics Alexander Mo / HURJ Staff Writer

Upon entering the office of Dr. Nathaniel Comfort on the third floor of the Welch Library at the Johns Hopkins School of Medicine, a different yet familiar feeling will strike the student who knows the Hopkins Homewood campus. The room exudes quaint antiquity and a look more akin to the hallways and offices of the History Department in Gilman Hall than to the whitewashed walls and sterilized hallways usually associated with a medical institution. Yet a close look at the books lining the shelves reveals that this office is still closely tied to the subject matter at the core of any medical school. This blend is not accidental; in fact, it is the job of Dr. Nathaniel Comfort as an assistant professor of the history of medicine. Dr. Comfort has maintained a deep-seated interest in chronicling the scientific and societal implications of science within the last century. In this area, he has special expertise in the development of the study of genetics. He first found his interest in this area through his long conversations with Nobel laureate and geneticist Barbara McClintock while working in the press department of Cold Spring Harbor Laboratory in New York State. McClintock is known for her Nobel Prize-winning discovery of jumping genes, or genes that can move to different sites in the genome. "She was still very sharp even in old age," said Dr. Comfort. He would later use this early experience as inspiration for a myriad of work, including a book and many journal articles. Dr. Comfort even chronicled the series of experiments that McClintock conducted that led to her Nobel Prize

for his PhD in history at the State University of New York at Stony Brook. Nowadays Dr. Comfort seeks to tell the story of how genetics became what it is today and its impact on society over the years. "What I hope comes out is how medical genetics emerges as the foundation of biomedicine," said Dr. Comfort. What he has uncovered so far is a dramatic shift in how genetics was studied. In the early 20th century, genetics was the domain of agricultural breeders seeking to breed better plants and animals, and most genetics research was conducted in colleges of agriculture. Then, during the 1950s, there was a dramatic shift from the colleges of agriculture to medical schools. Comfort explains, "In an 18-month span in 1956-57, Hopkins and at least four other major universities established departments or divisions of medical genetics." Using this as a starting point, Comfort notes that this trend also coincided with a major increase in federal scientific funding and important breakthroughs in genetics research. Growing governmental and academic acknowledgement of genetics as an important science also greatly increased the speed at which the field developed. From this growth a new trend emerged during the 1970s: more and more geneticists were starting to hold a PhD instead of an MD. Today genetics is filled with a mix of PhDs and MDs, with PhDs in the majority. However, as with any continuing saga, there is always some other event to weave into the tale. Currently, Dr. Comfort is investigating how these two groups of academics interacted as genetics gained prominence. A big source of information actually comes directly from the geneticists themselves. With collaborators at UCLA, Dr. Comfort interviewed 100 retired human geneticists to find out what it was like working in genetics in their younger days. Dr. Comfort is now trying to piece together from these firsthand accounts what the interaction must have been like. And with time and persistence, Comfort will one day have a complete picture of how genetics developed during the twentieth century.


Procedural Memory and Sleep Virginia Pearson / Contributing Author

Procedural memory describes an acquired, unconscious knowledge of actions and procedures. This type of memory is often classified as implicit memory, as the learner is not required to actively think in order for recall to occur. My research focuses specifically on the characteristics of labile procedural memory formation and disruption. I began my research after my freshman year, following a class I took on sleep and its associated disorders. I submitted a proposal and now receive funding through the Woodrow Wilson Fellowship here at Hopkins. This research interests me because the particular issue of how a procedural memory is affected by different amounts of wake combined with different amounts of sleep has not been previously addressed. A prior study has suggested that sleep serves to enhance a procedural memory, yet introducing a different "interference" procedural task may disrupt this stabilization. This hypothesis has not yet been substantiated, however, and it is currently unclear when the "unlearning" of the original task occurs (whether it is a function of time alone or wake alone, or whether sleep is needed). The protocol I designed consists of six different groups that test at different times after learning a specific task designed to target implicit memory. Each group waits a particular amount of time (consisting of wake, sleep, or both) after learning the original task before testing, and some groups serve as controls. With these data, I will then be able to analyze the results and make comparisons between the groups based on each group's performance. Hopefully, this will lead to establishing whether a procedural memory is stabilized by wake alone, sleep alone, or a mixture of both.

The New Influences in Old Cairo Marwa Mansour / Contributing Author

Over the summer, I researched the quality of health care in the area of Old Cairo in Cairo, Egypt, for the World Health Organization. I assisted with the administration of medical examinations and the startup of a developmental project (Basic Development Needs) to improve the standard of living in order to increase health awareness. I find the research very intriguing because of how long the people living in the Old Cairo squatter settlements have been neglected. Being of Egyptian descent and traveling to Egypt


a good deal, I have always witnessed the poverty that led to such a poor quality of health care. The medical assessment that I helped conduct there truly can reshape healthcare in the neighborhoods we went into. The research revealed a surprisingly large number of diabetics and people with hypertension. Incidentally, projects like the one I was working with (Community Based Initiatives) have resulted in an increasing financial dependence on loans and developmental institutions.


The Future of Bioethics Ishrat Ahmed / HURJ Staff Writer

Dr. Hilary Bok believes in “working ahead in advance to spot problems that haven’t happened.” As a bioethicist, she is interested in ethics, freedom of the will, and the moral theories of Kant. She was a Laurance S. Rockefeller Visiting Fellow and authored Freedom and Responsibility. She has also written several articles on bioethical issues including pet cloning. At JHU, Dr. Bok not only does research, but she also teaches an introductory bioethics course. Bioethics began in the 20th century to minimize research scandals, but then progressed to investigating problems before they materialized. Dr. Bok is interested in preemptive public

health issues such as military-enforced quarantine and also ethical issues that arise as a result of mind-brain relationships. For example, does the usage of psychiatric drugs treat people as things? Prozac behaves similarly to the endorphins released by running, yet no one would claim that running is wrong. Dr. Bok enjoys bioethics because it requires familiarity with science, public health, and medicine. She also finds teaching ethics and leading students to think critically very rewarding. In fact, she wants the students to realize, "I really can devote my intellect to how I should live my life."

Pearl Harbor: The Public Health Perspective Manuel Datiles / HURJ Staff Writer On December 7, 1941, the Japanese Imperial Navy launched a massive attack on Pearl Harbor, bringing the US into World War II and forever changing the course of American history. But how did this event shape the history of the Asian countries under American rule at the time? As a Woodrow Wilson Fellow at Hopkins, I was given a $7,500 grant to conduct research on the impact of the Japanese Invasion of the Philippine Islands, which are located in the Pacific Ocean, southeast of China. Specifically, I have been researching the public health impact of the Japanese occupation, and how the civilian population (both American and native Filipino) responded to the harsh war conditions. What was it like to be a prisoner in one of the Japanese Internment Camps, where the Japanese worked the prisoners of war like slaves until they died from exhaustion, tropical brain fever, or malaria? What did the nurses in the American civilian camps undergo as they watched

scores of their fellow countrymen lose 50 pounds and ultimately succumb to malnutrition and starvation? How did the American survivors endure this unendurable crucible? How did they deal with epidemics and malnutrition, with the burial of the dead and with the unforgiving Japanese officers? To answer these questions, I flew to Manila, the capital city of the Philippines. I visited the archives of several prominent universities and uncovered the reprinted diaries of several American civilian prisoners, as well as many photographs of wartime Manila that were only available there. I also personally interviewed two Filipino World War II survivors, who related their experiences of the Japanese occupation to me firsthand, describing harrowing escapes from marauding Japanese troops while hiding in piles of dead bodies, or lying still in a pit while bullets from both sides ricocheted all around, killing fathers, friends, and nameless others. I also was

able to visit Fort Santiago, the torture chamber of the Japanese for recalcitrant American soldiers and pro-American Filipino natives; I saw deep pits by the Pasig River in which the Japanese placed the most stubborn prisoners - to drown as the tide came in slowly, and to be washed away when it ebbed. As a Filipino-American, this historical period has deep significance for me, especially because it seems to be a relatively unknown area of World War II that I feel needs to be revealed to the world, lest we forget what horrible atrocities were committed, and what heroic actions saved lives. As a Public Health Studies major, I also chose to research the public health of this period because my research promises to be relevant to the problems and issues of public health during wartime and military occupation. I hope to write a research paper on this topic for a historical journal in the future, with the help of my outstanding faculty mentor, Dr. Paul Kramer of the History Department.


Selecting Subjects for Clinical Studies Alexander Mo / HURJ Staff Writer

Justice, like so many highly valued ideals, is easy to roll off the tongue but difficult to implement. Even in professions such as medicine, where ethical dilemmas are as common as surgery, years of experience make it no easier to administer justice. Therefore it is up to people like Dr. Holly Taylor, Assistant Professor in the Phoebe R. Berman Bioethics Institute at the Johns Hopkins Bloomberg School of Public Health, to carefully examine ethical challenges facing medical professionals and establish guidelines to address them. Dr. Taylor handles ethical questions in medical research. Justice for her in medical research is "the benefits of research [being] spread out [to all] and equitable."

Dr. Taylor came to her present position after a 10-plus-year journey that started shortly after graduating from Stanford University in 1987. She found that she wanted to focus more on how health care was administered than on practicing medicine, her first aspiration. "I was more interested in policy than interacting with people," said Taylor. An internship at the National Women's Health Network inspired her to pursue a Master of Public Health from the University of Michigan, which she completed in 1990. During a five-year stint in government, Dr. Taylor became very interested in bioethics policy. In order to pursue that interest, she came to the School of Public Health for a doctorate, which she finished in 1999, and she now teaches as an assistant professor.

As a professor, Dr. Taylor is currently investigating how medical researchers around the nation implement federal regulations regarding child test subjects. In a recent study, Dr. Taylor interviewed nurses and doctors around the nation to see what factors they used in making the final selection of child test subjects. With many of the quantitative criteria (age, sex, ethnicity, etc.) accounted for, Dr. Taylor found that selection was primarily based on qualitative social factors (perceived ability to comply with the study, access to transportation, etc.). A common factor that came up was parental reliability in making medical appointments related to the study. The final decision often came down to the personal judgment of the nurses and doctors involved in the study. Preference for one candidate over another was often a matter of subconscious preferences; thus the researcher was often unaware of having favored one social factor over another. Over the long term, a routine of exclusion emerges, leading to a two-fold issue. On the participant side, a particular population is effectively cut off from potentially beneficial research, and on the researcher's side, the test subjects are not an accurate depiction of the larger population.

After examining the results of her study, Dr. Taylor came up with guidelines that address the problems she saw. "It's okay for researchers to use non-medical reasons in the selection process as long as they develop objective criteria [for those reasons] that can be tracked," she said. One way to do this is by establishing an attendance policy for children and parents that clearly states the number of appointments a child could miss before being dropped from a study. Dr. Taylor hopes her study will encourage researchers to make sure justice is served as it applies to selection for clinical studies. "I want researchers to implement a more transparent selection process," Dr. Taylor said.

In the long run, Dr. Taylor hopes her studies will promote a greater awareness of the ethical issues involved in research on not just children but all human test subjects. A vocal discussion of these issues will hopefully lead to improved health care policy providing for the "maximum welfare of human subjects." As long as there are people still getting sick, there will always be the question of how best to treat them. Dr. Taylor believes she has found her way to contribute to the answer. "In order to make [health care] policy better, research is my way," said Taylor.



A High-Fat Diet to Treat Seizures Cristian Chun / HURJ Staff Writer

On the dinner table of a child with epilepsy sit a cup of whipping cream, two sausages with butter, some cashews, and a half cup of Puritan oil. This is the child's well-planned meal. Children with epilepsy are treated with a rigorous high-fat, low-carbohydrate diet called the ketogenic diet. The diet, composed of foods that a person would normally avoid, can remarkably reduce seizures. Although the ketogenic diet was discovered in the 1920s, it was distrusted because of its unknown mechanisms. Today it is widely accepted and helps two out of three children who are put on it, thanks largely to John Freeman, M.D., former director of the Pediatric Neurology service at Johns Hopkins Hospital. The ketogenic diet recommends a diet of 90 percent fat, with 4 to 8 grams of carbohydrates and 1 gram of protein per kilogram of body weight daily. The fat and the lack of carbohydrates are what give the desired effect. Normally, in the presence of carbohydrates, fat is burned to carbon dioxide and water. But without carbohydrates, fat is incompletely oxidized and the blood levels of ketones begin to rise. Ketones are largely responsible for the reduction of seizures; studies have shown fewer seizures as ketones increase. In the 1920s, people discovered that fasting for 10 to 20 days could control seizures. It was found that this was because severe fasting prompted an increase of ketones in the body. Thus, the ketogenic diet was developed to mimic fasting. Like fasting, the diet increases ketones in the body, leading to a reduction of seizures. When it was tried on patients, a rapid decrease in seizures was observed. However, in 1939, Houston Merritt discovered an anticonvulsant called Dilantin, and the pill became a more popular as well as more convenient method of treatment.

Renewed interest in the ketogenic diet began with a phone call Dr. Freeman received from Hollywood producer Jim Abrahams in 1993. Abrahams called Dr. Freeman and said that his young son was suffering from hundreds of seizures every day. He had already seen five pediatric neurologists, been through all the medications, and had had surgery with no positive effect. Abrahams found out about the ketogenic diet in a library book. When his son was put on the diet, his seizures stopped completely. Outraged that no one had informed him of the diet, Abrahams began publicizing it actively. Dr. Freeman and his team members began receiving thousands of phone calls, and ever since, interest in the diet has remained steady. More studies of the ketogenic diet have produced strong evidence that it works. However, exactly how and why the diet works remains unknown, almost as unclear as it was in the 1920s. Dr. Freeman is still a physician in the Johns Hopkins Hospital pediatric neurology department today. He will soon retire, however, and has little time before he can no longer enjoy the familiarity of working at the hospital or comfort the many mothers of treated children who cannot hide tears at the thought of seeing him leave.
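To make the quoted guideline concrete, here is a minimal sketch (not from the article) that converts it into rough daily gram targets. It assumes the "90 percent fat" figure refers to the share of daily calories, and the 20 kg body weight and 1,500 kcal/day target are hypothetical values chosen only for illustration.

```python
# Illustrative sketch, assuming "90 percent fat" means 90% of daily calories.
# The weight and calorie figures below are hypothetical, not from the article.

KCAL_PER_G_FAT = 9  # standard caloric density of fat

def ketogenic_targets(weight_kg, daily_kcal, carb_g=6):
    """Rough daily gram targets under the guideline quoted above:
    about 90% of calories from fat, about 1 g protein per kg of body
    weight, and 4-8 g of carbohydrate per day (default 6 g)."""
    protein_g = 1.0 * weight_kg
    fat_g = 0.90 * daily_kcal / KCAL_PER_G_FAT
    return {"fat_g": round(fat_g), "protein_g": round(protein_g), "carb_g": carb_g}

# Hypothetical 20 kg child on a 1,500 kcal/day plan:
print(ketogenic_targets(weight_kg=20, daily_kcal=1500))
# -> {'fat_g': 150, 'protein_g': 20, 'carb_g': 6}
```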


How Medical Television Shows Portray the Patient-Doctor Relationship Adam Canver

The purpose of primetime television is to appeal to the obsessions of its viewers. What is shown is dictated by what society values. Shows such as House M.D. and Nip/Tuck portray social problems in healthcare. Both illuminate the other side of being a glorified doctor, involving moral and ethical judgments and misjudgments. These shows, now in their second and third seasons, respectively, respond to society's interests: the "real" ethical battles behind an all-powerful doctor. Dr. Gregory House, a diagnostic physician in House M.D., is notorious for violating the sanctity of the patient-physician relationship. He is convinced that his ethical compass for what is right versus what is wrong is ultimately always correct. Dr. Sean McNamara and Dr. Christian Troy, two plastic surgeons from Nip/Tuck, tackle much different issues relating to the ethics of doctors. The broad view of the show is that they have the ability to change a person's physical exterior as that person sees fit through plastic surgery. Within this ethically questionable world, ideas such as pro bono patients and doctor-assisted suicide further the ethical dilemmas with

which the team struggles. Both shows focus on the physicians to contrast the transcendental idea of a doctor with the actual role of the doctor. DNR is a common acronym around the hospital. It stands for Do Not Resuscitate. The acronym is used when a patient signs a waiver disallowing any hospital staff from aiding in reviving him or her during a losing battle with death. It is designed to give the patient the most control over his or her own life. Doctors are liable and can even lose their licenses for violating a DNR agreement. While most doctors would abide by these rules, Dr. House does the exact opposite. He intervenes to resuscitate a patient and thus destroys an ethical bond, based on his conviction that he is right. The patient, a paraplegic, ultimately walks away because of House's intervention. Does this end justify House's unethical means? If he had abided by the rules, the man would be dead. If it can be assumed that life is more important than all other things, then he is justified. The writers of the show also seem to make this assumption to justify House's actions. A stubborn egotist, House spends a lot of time arguing over being right or wrong. Pride

is a doctor's greatest sin, and House is no different. He creates his own ethical laws to suit himself. To him, relativity and subjectivity are illusions that hide absolutes and objectivity; that is to say, the "right" response and the "wrong" response both exist. His pride gets in the way when someone contradicts what he believes to be "right." He goes so far as to trivialize life. When a treatment is based on insufficient testing and other doctors object, he simply states, "if he lives, I'm right and if he dies, you were right." It appears that it is no longer about the patient, but more about inflating an already cocky doctor's ego. However, House has yet to make a mistake that would result in a dead patient. This incredible fact is what makes the show a show and not necessarily real. The truth is that misdiagnoses occur, and the media doesn't want to show a real doctor, but rather one with infinite intuition. The role of this doctor is to challenge petty ethics and morals to ultimately save lives without bias or discrimination. He was confronted with his opinion on the Hippocratic Oath, the document that has been used to swear in physicians


ever since the ancient Greek father of medicine wrote it. House thinks that the sacred document is outdated, as it condemns such procedures as abortions. Rather than rely upon socially and historically agreeable codes of ethics, House creates his own set of ideas based on his own beliefs about right and wrong. He has what it takes to be a great doctor since he puts aside formalities and unnecessary considerations. He does this not so much for pride, although it is part of it, but more for the patient's well-being. The plastic surgeons of Nip/Tuck abide by the same ethics that House discards as inhibitory to patient survival. The field of plastic surgery is a broad ethical question unto itself. "What don't you like about yourself?" This inquiry is made in virtually all episodes. The show struggles with the ethics of changing one's appearance for any reason. Is it ethical for a doctor to provide breast implants or other enhancements with no medical need? The show is able to come up with several instances in which such procedures are ethically sound. A woman with breast cancer had to have her breasts

amputated. Her motive for going through with the surgery was not minor mammary inadequacy but a total lack of the features. Another issue arose when, after the surgery, she asked Dr. Sean McNamara to aid her in killing herself. She was terminal due to the cancer. McNamara weighed the ethics of doctor-assisted suicide. Should doctors be able to provide such a service, ending a life that presumably will end within a predictable period of time anyway? This idea holds that life is meaningless once the expiration date of a human is discovered. The "right" or "wrong" choice is based on one's point of view about life and death. House is convinced that life is always the only way to go. McNamara takes into account the circumstances. Perhaps death isn't always something to be viewed negatively. This is how he justified his mercy killing. Healthcare cost is an increasingly controversial topic. Insurers shrewdly squeeze every dime out of their clients and hospitals alike. Pharmaceutical companies with monopolistic patents mark up prices significantly. Nip/Tuck just brushes the surface of these ethical dilemmas when Drs. McNamara and

Christian Troy institute pro bono cases at their practice, allowing a more economically diverse clientele to receive their services. This is possible for these doctors because most of their operations earn a high profit with relative ease. The writers of Nip/Tuck give patients much more power than do the writers of House M.D. While a patient can convince McNamara to help kill her, House would not even think of it as an option. In fact, he would sarcastically mock the patient for being weak and confused about whether or not life is better than death. The doctors of both Nip/Tuck and House are considered experts in their respective fields. As portrayed on television, they are glorified for their ethical choices. The relationship between them and their patients is always the focus. Is it better to have a doctor who is curt and even mean but ultimately saves your life, or one who is overly conscientious but less credible, making any assurance of saving your life questionable? Are plastic surgeons always amoral misers with superficial concerns? These are the ethical ideas that both shows force the audience to think about.


Patenting Life Ivana Deyrup


Outside of law school classrooms, heated arguments over patent law can be safely said to be relatively rare. Recently, however, a new kind of patent – patents on human genes – has been attracting media attention and generating some very serious debate. The proponents and opponents of these “patents on life” disagree fundamentally about the future of American patent law; however, both groups make one thing clear – the stakes of this debate are already high, and will only continue to rise. Both the technological and legal ability to patent human genes are new developments. In the 1980s, the Supreme Court decided to allow patents on both genes and genetically engineered organisms. This decision, which allowed a private company to patent bacteria that had been designed to help clean up oil spills, passed almost unnoticed by the American public. The first major controversy over the decision occurred when the US Patent Office allowed Harvard University to patent a genetically engineered mouse used in cancer research. In a rare union between some scientists and religious organizations, opponents to the patent argued that new types of life should not be patented. Despite these protests, the US Patent Office has issued thousands of “patents on life,” including (as of October 2005) 4,382 patents on human genes. This new ability to patent human genes has caused a rush of patent applications that some have compared to a “gold rush” by “bio-prospectors.” By 2003, the US Patent Office had issued approximately 1,800 patents on human genes, and was struggling to deal with an additional 20,000 pending patents. By 2005, almost 20% of the human genome had been patented, and one company, Incyte Pharmaceuticals, held patents on more than 2,000 of the 23,688 known human genes. These patents can be held for up to 20 years, after which time the genes become public property. It is clear that these private companies believe that there are significant profits to be gained from patenting genes – and equally clear that many believe the costs of these private patents are also likely to be significant. Those in favor of allowing private companies to patent human genes claim that these patents are necessary because they allow private companies to recoup the often very high costs of scientific research. Without these patents, some argue, no one would be willing to conduct research on human genes, as there would be no direct profits or benefits. The companies that pour their resources into these projects argue that they should have the right to

receive the benefits of their work, money, and time. Refusing to grant patents could also create incentives for scientists to hide the results of their work from competitors, as sharing information could mean losing money. In response, opponents of patents on human genes claim that patents should never have been applied to genes in the first place. Patents, they argue, make sense for inventions or innovations. Genes, however, do not seem to fall into either of these categories. Considering that most genes are discovered by computers sorting through information, rather than by research scientists, it is unclear why anyone should be granted a patent on what does not seem to be a new invention. Patents on human genes may make no more legal or ethical sense than patents on a particular type of rare wildflower or on a newly discovered planet. In fact, some argue that if the genes are going to belong to anyone, they should belong to the human beings who carry them. For example, the Miami Children's Hospital and the Canavan Foundation did extensive research into the gene for Canavan disease, a rare childhood illness that primarily affects Ashkenazi Jews. The Canavan Foundation recruited families to provide blood samples in order to identify the gene causing this disease. The Miami hospital then patented the gene and started charging royalties on tests for Canavan disease. The families who had donated blood responded furiously to the patent, pointing


out that they donated their blood in order to help prevent a rare childhood disease, not in order to allow the hospital to make money off their genetic makeup.

Who should own our genes?

Additionally, others point out that scientists frequently make discoveries that yield significant benefits without patenting their work. Merrill Goozner, in his article "Patenting Life," published in The American Prospect, points out that "No one had to patent the silicon molecule before developing the microchip. No one had to patent the cornea to get a better corrective surgical procedure. Why is patenting a gene necessary to develop a drug or diagnostic test?" It seems likely that research into the human gene will continue to be conducted by private companies, even if they cannot patent actual genes, because it is still possible for them to make significant profits off the applications of their research. Additionally, even if private companies cease researching human genes, the success of the Human Genome Project shows us that the public sector is both able and willing to fund research into this area of scientific development. In fact, allowing companies to patent genes may actually prevent, rather than encourage, scientific research. Some companies have begun patenting as many genes as possible, not to do research with those genes, but rather to charge other companies royalties to use them. For example, companies like Celera (which in 2000 had the second largest number of patents on human genes in the United States) have, according to Manuel Castells in his book The Rise of the Network Society, "been busy patenting all their data, so that, literally, they may one day own the legal rights to a large proportion of the human genome." If private companies own a significant number of patents on human genes, some scientists argue, it will be very difficult to do any sort of research into human genetics without having to pay prohibitively high royalties to the companies that own the patents.

As a result of this debate, some scientists and pharmaceutical companies have started to view patents on life as a significant threat. For example, the Human Genome Project, which is publicly funded, has published its results in an attempt to prevent private companies from patenting many genes. The pharmaceutical company Merck has also given substantial funding to Washington University in order to encourage public research into genes. The debate over patents has become increasingly important in the European Union as well. In the future, the debate will most likely become more heated, as the patents begin to have broader and more serious implications and consequences. Those holding patents on genes literally own both the present and the future of our species. To ignore this issue, or to relegate it to law classrooms, would be almost criminal of us. As our ability to understand our own genes develops further, we must continue to seriously ask ourselves the question: who, if anyone, should be allowed to own the building blocks of every human being?
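As a quick sanity check on the figures quoted in this article (a back-of-the-envelope sketch, not an analysis from the original), the patent counts are roughly consistent with the "almost 20%" estimate:

```python
# Back-of-the-envelope check using only figures quoted in this article (2005).
patented_genes = 4382    # human-gene patents issued as of October 2005
known_genes = 23688      # known human genes cited above
incyte_patents = 2000    # lower bound on Incyte Pharmaceuticals' gene patents

print(f"Share of known genes under patent: {patented_genes / known_genes:.1%}")    # ~18.5%
print(f"Incyte's share of those patents:   {incyte_patents / patented_genes:.1%}") # ~45.6%
```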

References:
1. Castells, Manuel. The Rise of the Network Society. Oxford: Blackwell Publishing, 2000.
2. Goozner, Merrill. "Patenting Life." The American Prospect, December 18, 2000. www.prospect.org/print/V11/26/goozner-m.html.
3. Ravilious, Kate. "Private Companies Own Human Gene Patents." The Guardian, 14 October 2005. www.guardian.co.uk/genes/articles/0,2763,1591992,00.html.
4. Widge, Alik. "A Primer on Gene Patents." AMSA, 2001.



The Leadership Crisis Jeffrey Lin

In late August 2005, America witnessed one of the most catastrophic natural disasters in history strike the city of New Orleans in the form of Hurricane Katrina. Hindsight tells us that the hundreds of deaths that resulted could have been avoided. Did our policymakers let us down? Did they make the wrong decisions regarding Hurricane Katrina? The debate about the answers to these questions will go on for years, but the fact is that we must look forwards and not backwards. These types of disasters will occur again, whether in the form of hurricanes or terrorist attacks. The question of how prepared we will be when they do happen is at the heart and soul of bioethics. Keeping this philosophy in mind, it is necessary to reexamine Hurricane Katrina in a different light. In the midst of any crisis, certain individuals step up and make fundamental decisions regarding the resolution of the event. Whether these decisions, which are often made in a split second, are right or wrong is something that is extremely subjective. Instead, attention should be placed on the actual people making those decisions. In this article, I will attempt to answer two questions: 1) Which people were responsible for the decision-making during a crisis situation such as Hurricane Katrina? 2) Perhaps more importantly, are these the people that should be making life-altering decisions?

Who Are We Talking About?

Crisis managers can be separated into two categories: political leaders and non-political citizens. While the differences between these two classes seem rather mundane, important consequences arise due to the innate characteristics of the two groups. In general, media attention is focused solely on the political leaders during a crisis situation. By simply browsing newspapers, magazines, and websites, we can easily identify the members


of this group. In the case of Hurricane Katrina, this group consisted of several members including:

- President George W. Bush
- New Orleans Mayor Ray Nagin
- Louisiana Governor Kathleen Blanco
- Federal Emergency Management Agency (FEMA) Director Mike Brown
- Homeland Security Secretary Michael Chertoff

With the media spotlight on these individuals, the scope of their decisions tended to be broad and inclusive, trying to help the greatest number of people possible. Their actions included declaring states of emergency, requesting outside assistance, and organizing relief efforts. Since so much is written about political leaders, it is easy to think that non-political citizens do not factor into crisis management, but it is often this group that makes the most pivotal decisions. The members of this group include family members. Would there have even been a crisis if family members had decided to move their families out of New Orleans before the hurricane hit? Of course, this over-generalizes the citizens of New Orleans as a whole, since many families had no means of getting out of the city. However, even those families made important decisions regarding their own safety, for example, whether to stock up on necessities.


Practicality and Principle

With the appropriate individuals identified, we set off on a harder mission: to determine whether these individuals should be the ones making decisions that affect everyone involved. After glancing briefly at the list of important leaders, it seems reasonable that family members should be in charge of their own families and that political leaders should handle decisions affecting a larger population, since, after all, the population directly elected many of the political leaders. However, after adding additional factors into the equation, these solutions become more complex and create a bioethics issue. For example, if the only way for one family to survive was to rob a second family's store, are the actions of the first family justified? What if the second family needed the store in order to survive? In this instance, should the families be allowed to make their own decisions? There is no unbiased way to judge who is correct in this situation, and to make an actual decision would be cruel to the other family.

The picture only gets more convoluted when thousands of families are added, along with various political and moral views. Because of this, certain guidelines must be introduced in order to judge whether a certain position is fit for making important decisions. Dr. Harvey Kayman proposes "the Precautionary Principle" as one set of guidelines. This principle contains four components that are important in any bioethics decisions:

- Transparency of decisions (the ability to see how decisions were made)
- Inclusion in the decision making process
- Accountability
- Action, even coercive action, must be taken when there is a serious threat to the public's welfare, often in the face of uncertainty

Most of the components are self-explanatory, and it is easily seen why they are essential when making such complicated decisions.



Accountability

It is necessary to determine whether the current leaders of society have the ability to adhere to these guidelines. Political leaders most certainly have accountability. Public outcry at certain decisions can lead to resignations, such as that of FEMA Director Mike Brown. Even if such drastic actions are not forced, elected officials serve a certain time period before needing to be re-elected. These built-in measures allow for the accountability of political leaders. Likewise, because the leaders are elected by the people, the citizens of the United States are indirectly included in the decision-making process. However, political leaders today lack the last two elements of the Precautionary Principle. Government meetings are often held behind closed doors, dealing with knowledge that the public may or may not know about. While the final decisions of officials are broadcast nationwide and documented, the reasoning behind the decisions is often hidden. For example, only now are certain officials' e-mails from late August regarding Hurricane Katrina being released to the public. Finally, coercive action taken by the government in such situations is not really coercive at all. For instance, Louisiana issued a mandatory evacuation of New Orleans shortly before the hurricane hit. Could families have stayed behind if they wanted to? The answer to this question is yes. Likewise, the same analysis holds for members of the non-political group. However, the principle that requires coercive force for the public good does not even make sense with this group. While political leaders' loyalties rest with their constituents, the loyalty of family members rests only within the family. Because of this disparity in scope, decisions made by these two groups vary significantly. Decisions made by family members during crisis situations have no regard for the "good of the public" but rather only for the survival of the family.

Where Does That Leave Us?

Since both groups of decision makers are not consistent with the Precautionary Principle, it would be easy to say that neither group seems appropriate to fill this important role. Theoretically, all four components of this principle should be followed when making a decision. Due to the form of government in the United States, it is very difficult to elect or appoint someone to a position that satisfies all the components of the Precautionary Principle. The United States values individual rights and freedom along with a democratic government. While this leads to a high rate of inclusion during decision-making, it reduces the amount of coercive action that the government can take. If, for example, the government were at the opposite end of the spectrum and turned into a dictatorship, coercive action for the public good would be very practical, but inclusion in decision making would not be prevalent. Because of this paradox, for all practical purposes, a balance between the two should be found. I believe that the current political leaders embody this balance. Their only limitations come from secrecy and from lack of power, both of which seem reasonable. The country's secrets should not be given to the entire nation, particularly if they are dangerous or compromise other individuals. Politicians, because of the way they are elected, have the public good as the main motive behind their decisions. Even though they do not have the power to enforce their decisions in the field of bioethics, their decisions in general should be more far-sighted and long-term than those made by others closer to the crisis. Their decisions can be classified as informed suggestions that allow other, non-political leaders to act as they wish when a given decision would do them more harm than good. This article is by no means an exoneration of the actual decisions that were made regarding Hurricane Katrina; however, the correct system is in place to handle important decision-making in intense situations.

References:
1. Think Progress » KATRINA TIMELINE. Think Progress. 4 Nov. 2005. <http://www.thinkprogress.org/katrina-timeline>.
2. Ethics in the Age of Bioterrorism. Center for Public Health Preparedness Grand Rounds Series. New York Network Sunysat. Prod. University of Albany Center of Public Health. Video. 31 August 2005.
3. Brown joked in e-mail as Katrina churned. 3 Nov. 2005. MSNBC. 4 Nov. 2005. <http://msnbc.msn.com/id/9912186/>.
4. Russell, Gordon. Nagin Orders First-ever Mandatory Evacuation of New Orleans. 31 Aug. 2005. Times-Picayune. 4 Nov. 2005. <http://www.nola.com/newslogs/breakingtp/index.ssf?/mtlogs/nola_Times-Picayune/archives/2005_08.html>.



CLONING:

a religious perspective

Bilal Farooqi

Judaism, Christianity, and Islam are all monotheistic religions that trace their roots back to the Prophet Abraham thousands of years ago. Judaism is the oldest of the three, while Islam is the newest, at approximately 1,400 years old. All three religions are based on the belief in one God and share similar baselines in their holy texts: the Torah, the Bible, and the Quran, respectively. Within each religion, however, scholars differ in how they interpret these holy books on bioethical questions that the texts do not clearly answer, questions that include, but are not limited to, abortion, stem-cell research, and human cloning. Cloning is the subject of many heated debates and controversies today. When it comes to human cloning, many people take sides without researching the topic, declaring it wrong and unethical without any evidence. It is important, however, to weigh both the positive and negative consequences of human cloning before forming an opinion about it. In the following paragraphs, we will examine the perspectives of Judaism, Christianity, and Islam on human cloning and on cloning research in general.



Judaism

"

The overall purpose of the Judaic faith is not ethic of rights but rather the ethics of responsibility and the preservation of life. Therefore, for therapeutic purposes, cloning, in general, would be conditionally acceptable.

"

22

First, let us examine the Jewish faith and its perception of cloning. The Jewish faith encourages human beings to explore and to work for the health and benefit of mankind. In the Talmud, the rabbinic commentaries on their holy text, the Torah, human beings are understood as associates of God in the continuing act of creation. In this distinctive setting, human beings have a divine mandate for mastery, including the role of searching for and expanding health and medicine for the benefit of mankind. However, this mandate of mastery over creation comes into question when dealing with cloning, for the question is no longer mastery over nature and creation, but control over human beings. Nevertheless, the overall purpose of the Judaic faith is not an ethic of rights but rather an ethic of responsibility and the preservation of life. Therefore, for therapeutic purposes, cloning, in general, would be conditionally acceptable. For example, a sterile woman could have children through the advancements of human cloning, and such a blessing, bringing Jewish children to a woman otherwise unable to bear them, may fall under the category of the acceptable. Jewish author Joshua Lipschutz claims that even though human cloning would produce replicas of other human beings, the uniqueness of the human being would cause each and every clone to have a mind different from that of its parent. Nonetheless, other Jewish scholars remain opposed to the idea of human cloning, claiming that it would lead to idolatry, which is strictly prohibited in Judaism. Apart from traditional Orthodox Jews, however, the majority of Jews have accepted the viability of human cloning if it is done with precaution and responsibility. Overall, the Jewish faith is one of adaptability when it comes to human cloning and its research.

Christianity

Now, let us examine human cloning from the standpoint of Christianity. Dr. Panos Zavos of the University of Kentucky, whom some claim to be the father of human cloning, argues that the Bible orders "thou shall not kill," but that nowhere in the Bible is the statement "thou shall not clone" to be found. Many Christians disagree with Dr. Zavos's ideology. To examine the Christian perspective, one must first recognize that Christianity, the world's largest religion, can be divided into two main branches, mainline Protestantism and Roman Catholicism. First, consider the mainline Protestant point of view. As far as cloning research goes, Protestants look for a precedent set in the past to answer their questions for the future. The Christian "vocation of freedom" allows for the pursuit of knowledge and scientific inquiry, and churches have gone so far as to permit Christians to "sin bravely" in the hunt for knowledge. Therefore, with potential cloning benefits lying in the future, many Protestant Christians have further advanced their advocacy for research on human cloning. As with Judaism, many scholars fear that the power vested in human beings may be too great, allowing people to play the role of God by manipulating specific features in human beings. Nevertheless, the Protestant Christian perspective is more lenient than any other school of thought discussed in this paper. The Catholic Church's outlook on human cloning, by contrast, is much more stringent. In 1987, the Congregation for the Doctrine of the Faith, in the Donum Vitae, an official ruling, denounced cloning and labeled it a major sin. It claimed cloning to be against the dignity of "human procreation and of the conjugal union." According to the Donum Vitae, a "human has the right to be born the human way," and a human born as a clone would not have been given that right. Furthermore, in 1997, the National Conference of Catholic Bishops (NCCB) issued an official statement further condemning human cloning, claiming that humans need to have the right to "real parents." The possible sins of pride and arrogance are further reasons for the Roman Catholic Church to try to prevent cloning research from moving forward.

Islam

Islam, like Judaism, has views on cloning that fall somewhere between the two Christian perspectives. Islam encourages its followers to stand in awe of God's magnificence and to learn about His creations. Accordingly, many scholars have given scientists permission to continue doing scientific research on cloning, though they urge that such research be done only with therapeutic intent, not for human cloning in general. Human cloning would bring too many questions into play about lineage and legitimacy. Furthermore, in 1997 the Islamic Fiqh Academy proclaimed cloning acceptable if done on plants and even animals, but strictly prohibited it when it comes to human cloning. Some scholars have suggested that, since Islam stresses procreation so strongly, cloning might be grounds to allow sterile people finally to have children, yet this idea has not been accepted by the majority of scholars because of the intervention of a third party in the creation of a child between two parents. Furthermore, like their counterparts in Judaism and Christianity, many scholars fear scientists becoming so prideful and arrogant that they are, in effect, playing God when cloning. Additionally, many people believe that self-interest and greed may become a person's primary concern when dealing with clones. Overall, the accepted ruling thus far has been that cloning research should continue and that cloning of non-human specimens is acceptable, especially for therapeutic purposes, but that human cloning, in general, is prohibited.

Conclusions

One can clearly see that the religious perspectives on cloning are far from clear-cut. The rulings can basically be divided into two sub-categories: perspectives on cloning research and perspectives on human cloning. Overall, the strictest group on cloning in general is the Roman Catholic Church, while the most lenient is mainline Protestantism, also a branch of Christianity. All groups, with the exception of the Roman Catholics, find cloning research acceptable. On actual human cloning, the religions are divided: Islam and Roman Catholicism prohibit it, while Protestant Christians and Jews do not fully accept it but rather hold that there are grounds on which it could be acceptable. Furthermore, one can see that all the monotheistic religions examined here share the fear that scientists may become money-driven or, even worse, prideful in their work, leading them to play the role of God. The perspectives of all the religions examined show how significant the question of cloning is, whether or not it should ultimately be allowed. Since we at Hopkins are part of the future of scientific research, it is incumbent upon us all to come to our own conclusions, after doing the research, about whether cloning, and human cloning more specifically, should be allowed ethically and morally. You be the judge!


The Injustice of Newborn Screening

Why aren't we testing every newborn?

Defne Arslan

A partial list of disorders and conditions for which there exist newborn blood tests: Methylmalonic acidemia; Hypothyroidism; Argininosuccinic acidemia; Maple Syrup Urine Disease; Carnitine palmityl transferase deficiency, types I and II; Mitochondrial acetoacetyl-CoA thiolase (beta-ketothiolase) deficiency; Hemoglobinopathy; Toxoplasmosis; Methylmalonic acidemia (mutase deficiency); Galactosemia; Tyrosinemia, types I and II; Carnitine uptake defect; Adrenal hyperplasia; LCHAD (long-chain L-3-OH acyl-CoA dehydrogenase deficiency); Multiple acyl-CoA dehydrogenase deficiency; Cystic fibrosis; Propionic acidemia; Multiple carboxylase deficiency; Isovaleric acidemia; Biotinidase deficiency; Phenylketonuria; 3-OH 3-CH3 glutaric aciduria; Citrullinemia; Glutaric acidemia type I; Carnitine/acylcarnitine translocase deficiency; SCAD (short-chain acyl-CoA dehydrogenase deficiency); Homocystinuria; VLCAD (very long-chain acyl-CoA dehydrogenase deficiency); MCAD (medium-chain acyl-CoA dehydrogenase deficiency); 3MCC (3-methylcrotonyl-CoA carboxylase deficiency); Trifunctional protein deficiency.

Medical ethics essentially encompasses four basic principles: autonomy, beneficence, non-maleficence, and justice. Justice within the United States healthcare system has consistently been an unresolved issue, particularly in relation to the treatment of infants. Newborn screening was developed in 1965 and has since become a common practice in the United States; it is conducted in every state and currently encompasses over forty diseases. With the development of the Guthrie card, a method of collecting blood samples from newborns, and complete implementation of the blood spot screening test in the 1980s, over four million newborns have been screened each year, and thousands of infants have been treated as a result of their early diagnosis. The injustice of newborn screening lies within its availability. Though tests have been developed for over forty inherited disorders, not all of these tests are offered in every state. Maryland, on one end of the spectrum, offers newborn screening for over forty diseases, while Texas offers fewer than ten. Is it ethical to inconsistently offer newborn screening tests throughout the United States? According to the four basic principles of medical ethics, this would be a violation of justice. Why, then, is newborn screening not offered nationally?

It is possible that complete newborn screening is not offered nationally because of the repercussions associated with testing; some screening methods can potentially cause harm. For example, a false positive result can lead to great emotional distress and, in some cases, depression. Many newborn screening tests are comprised of two steps. The initial test is conducted on the Guthrie card blood spot collected at birth. If this test yields a positive result, then the patient's pediatrician orders another test to either confirm or reevaluate the previous test. A false positive results when the initial test is positive, and the second test is negative. A study performed at the University of Wisconsin, Madison tested the "psychosocial risk associated with newborn screening of cystic fibrosis." The study concluded that parental reaction to a positive newborn screening test depended on a variety of factors, including prior knowledge of the disease, physician's method of delivery, and experience (first child, second child, etc.). Though the reactions varied, most parents experienced feelings of emotional distress and guilt and displayed symptoms of depression after hearing of their child's positive cystic fibrosis result. At first glance, it looks as though newborn screening could contradict the principle of non-maleficence in medical ethics, due to the possibility of a false positive. However, it is important to recognize that "despite their distress, most [of the patients in the Myers study] (90%) were supportive of [newborn screening] for [cystic fibrosis], with a belief that early detection and intervention would improve the health and welfare of affected children."
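How heavily this false-positive worry should weigh depends largely on how rare a disorder is. The back-of-the-envelope calculation below is only an illustrative sketch, not a figure drawn from any of the studies cited in this article: the roughly four million annual births echo the screening volume mentioned above, while the 1-in-100,000 incidence and the sensitivity and specificity of the hypothetical test are assumed values chosen simply to make the arithmetic concrete.

    # Illustrative arithmetic for screening a rare disorder (assumptions, not data
    # from the cited studies): a test with 100% sensitivity and 99.9% specificity,
    # applied to about 4,000,000 newborns per year, for a hypothetical disorder
    # affecting 1 in 100,000 births.
    births = 4_000_000
    incidence = 1 / 100_000
    sensitivity = 1.00      # assumed: every affected infant screens positive
    specificity = 0.999     # assumed: 0.1% of unaffected infants screen positive

    affected = births * incidence
    true_positives = affected * sensitivity
    false_positives = (births - affected) * (1 - specificity)

    print(f"Affected infants:       {affected:,.0f}")        # about 40
    print(f"True positive screens:  {true_positives:,.0f}")  # about 40
    print(f"False positive screens: {false_positives:,.0f}") # about 4,000

Even under these generous assumptions, initial false positives outnumber true cases by roughly one hundred to one, which is why the parental distress documented in the cystic fibrosis study is so central to the ethics of expanding screening, and why the confirmatory second test matters.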

One must recognize the benefits of testing in order to weigh the importance of the risk associated with a false positive. The benefits of newborn screening can be illustrated with one condition called Severe Combined Immunodeficiency (SCID), for which testing is not yet offered. SCID is a genetic condition marked by the absence of T-cell production and, as a result, a non-functional immune system. "Infants with SCID die of infections in the first year of life unless immunity is reconstituted by bone marrow transplantation, enzyme replacement, or, in some recent cases, gene therapy." Patients with SCID do not show any symptoms at birth, so they are often untreated until serious health complications arise. Those patients who are treated early have a significantly higher chance of survival, as discussed by Kee Chan, MS, and Dr. Jennifer Puck, MD, of the National Institutes of Health. Chan says, "Myers et al. reported 95% survival among infants undergoing transplantation in the first month of life compared with only 70% for those treated after 3 months." In this case, it is clear that early testing would benefit those patients with SCID. Some argue that the low incidence of SCID decreases its benefit; the false positives may outnumber those who actually have the disease. However, the rate of incidence seems almost arbitrary in deciding which tests to implement. The estimated incidence of SCID is around 1:100,000 births (excluding those who die before diagnosis), which seems low compared to that of Phenylketonuria, a widely screened dietary disorder with an incidence of 1:15,000 births. However, Homocystinuria, a metabolic disorder, has a rare occurrence of only 1:344,000 (internationally), and yet it is a required screening test in 35 states.

Perhaps the inconsistencies in newborn screening across the nation are due to cost issues. In Wisconsin (which currently tests for over 40 diseases) the cost for screening is $65.50, and will increase to $69.50 by January 1, 2006. For those with health insurance, this cost is minimal; however, many people without health insurance could find this cost to be a great burden. Hospitals may also feel the effects of further costs with the addition of new tests: "A state may face a choice between expanding newborn screening and ensuring that all expectant mothers get sufficient prenatal care." It is important to note, however, that a good portion of these costs are due to the implementation of testing in general, and not due to the number of tests performed. According to Wilcken, "the incremental cost of including an extra disorder…is extremely small, and relates only to the follow-up, and not to the testing process." Therefore, the difference between performing six tests and seven tests is inconsequential. Wilcken also adds, "one inexpensive test can detect up to at least 30 disorders," and so the costs of newborn screening are actually reduced with more advanced testing techniques. For example, Kee Chan, MS, of the National Institutes of Health estimates that the addition of SCID to newborn screening would only cost an additional three dollars and, as mentioned earlier, would increase the chances of survival for nearly all those born with the disease. So, it seems as though increased cost is not a legitimate reason for the discrepancies in newborn screening across the nation.

Regardless of any such cost reductions or mass benefits, it seems even more important to consider basic medical ethics in determining whether or not to expand newborn screening. In the simplest terms, the inconsistency in newborn screening from state to state is a violation of justice. It is unjust to screen regularly for forty diseases in one state and only ten diseases in another. Although patients are able to request supplemental screening by private organizations, these tests could cost an additional $25 to $100. So in the end, newborn healthcare is not based on the equality by which medical ethics is built, but rather on the affluence of each individual family, or even the arbitrary location in which the child happens to be born. Newborn screening must be consistent throughout the nation because medical ethics encompasses the idea of justice, and justice encompasses the ideal of equality regardless of state borders.

References:
1. Wilcken, Bridget (14 November 2003) Ethical Issues in Newborn Screening and the Impact of New Technologies, Springer-Verlag: 62-66.
2. Health Consulting Group Inc. (2003) On the Front Line, Promoting the Health of America's Newborns, Association of Public Health Laboratories: 1-7.
3. Goldfarb, Zachary (2005) Newborn Medical Screening Expands, The Wall Street Journal.
4. Tluczek, Audrey, Koscik, Rebecca L., Farrell, Philip M., Rock, Michael J. (6 June 2005) Psychosocial Risk Associated with Newborn Screening for Cystic Fibrosis: Parents' Experience While Awaiting the Sweat Test Appointment, Pediatrics Vol. 115 No. 6: 1692-1703.
5. Chan, Kee, Puck, Jennifer M. (February 2005) Development of Population-Based Newborn Screening for Severe Combined Immunodeficiency, Journal of Allergy and Clinical Immunology Vol. 115 No. 2: 391-398.
6. Wisconsin Department of Health and Human Services (28 November 2005) Newborn Screening Program, Available at: http://www.dhfs.state.wi.us/DPH_BFCH/Newborn_Screen/NBSCost.htm.
7. Bartoshesky, Louis (August 2004) Newborn Screening Tests, Nemours Foundation, Available at: http://kidshealth.org/parent/system/medical/newborn_screening_tests.html.
8. Mobley, Natalie A. (Fall 2003) Caught in the Cradle: The Call for Newborn Genetic Screening, Hybrid Vigor, Issue 5, Available at: http://www.students.emory.edu/HYBRIDVIGOR/newborn.htm.


Do-Not-Resuscitate Orders

Excerpt from the Hippocratic Oath, taken by new physicians: “…I will follow that system of regimen which, according to my ability and judgment, I consider for the benefit of my patients, and abstain from whatever is deleterious and mischievous. I will give no deadly medicine to any one if asked, nor suggest any such counsel…”

Manuel J. Datiles IV

With a large part of America's population rapidly aging, the intense debates over physician-assisted suicide and euthanasia are growing in national importance. But another crucial end-of-life issue, which is often overlooked in these debates, is the do-not-resuscitate order, or DNR.

What is a DNR?

The do-not-resuscitate order, or DNR, is a written order from a doctor that resuscitation should not be attempted if a person suffers cardiac or respiratory arrest. This is sometimes known as a no-code order.



Such an order may be asked for by the patient, or by someone entitled to make decisions on their behalf, such as a health-care proxy; in some jurisdictions, such orders can also be made by the attending physician, usually when resuscitation would not alter the ultimate outcome of a disease. A DNR is often ordered when a person with an inevitably fatal illness does not wish to prolong the suffering, and wishes to have a more natural death without painful or invasive medical procedures. It should be emphasized that a DNR order is not euthanasia in any form; euthanasia, whether it be a physician-assisted suicide performed by an arguably compassionate doctor on a suffering patient, or the withdrawing of a nutrient tube from a comatose patient, always involves the active termination of the patient. Euthanasia is the (arguably) "merciful" killing of a patient. A DNR, by contrast, merely states that a patient who has in fact essentially already died (i.e., completely lost both heartbeat and breathing functions) should not be revived with CPR or ACLS (advanced cardiac life support; i.e., an electric-shock defibrillator). A DNR order should only be placed if three criteria are fulfilled, according to the 1976 DNR guidelines set by the Massachusetts General Hospital clinical care committee: (1) the disease is "irreversible," meaning no known therapeutic measure can effectively reverse it; (2) the physical status of the patient is "irreparable," meaning that the illness has progressed beyond the therapeutic abilities of current medicine, for example in terminal-stage cancer or AIDS, in which no realistic recovery can ever be expected; and (3) the patient's death is "imminent," meaning that in the ordinary course of treatment, the patient will probably die within the next two weeks.

How common are DNR orders?

In the United States, DNRs and similar orders have become commonplace in medical institutions.


For example, in the United States, 85% of all deaths in 1988 occurred in health-care institutions (hospitals, nursing homes, etc.). Out of all of these deaths in health-care institutions, 70% involved electively withholding some form of life-sustaining treatment. Another study conducted in 1989 found that 85%-90% of critical care professionals said they were withholding or withdrawing life-sustaining treatments from patients who were "deemed to have irreversible disease and are terminally ill." It is sobering to discover that, out of the approximately 2 million deaths in America each year from all causes, 1.3 million of these deaths follow decisions to withhold life support. (Those percentages square with this figure: 85% of 2 million is 1.7 million institutional deaths, and 70% of those is roughly 1.2 million.)

Are DNR Orders ethical?

In order to attempt to answer this question, let us first clearly state a basic ethical principle – the principle of the sanctity of human life – which must be agreed upon in order to even start the argument: the act of intentionally killing a human being who has no wish to die is absolutely wrong, and constitutes murder. Given this statement, then, are DNR orders morally acceptable? In their book Jewish and Catholic Bioethics, Dr. Faden and Dr. Pellegrino of Georgetown University Medical Center argue that if the patient is already terminally ill and consents to the order, then yes, DNR orders can be ethical. Interpreting Jewish doctrine, they state that a voluntary order not to resuscitate is "not considered active termination of life but merely removal of obstacles to the departure of the soul [italics added]". And the terminally ill patient's wishes to pass away must be respected, argues Raphael Cohen-Almagor, since,

…death is not the worst possible outcome compared to being on the verge of death and then being stabilized without hope of ever really getting better. Patients who suffer from incurable diseases such as cancer may feel that their lives have become transient and that the thought of death brings more comfort than alarm.


They may feel that their dignity, their autonomy, their humanity will be better served if they are allowed to die. These patients’ wishes must be respected. [italics added] If efforts to preserve life are increasingly futile, and death is certain and imminent, “even simple life-sustaining measures can be withheld because these efforts could be futile,” admits Robert Barry in his Sanctity of Human Life and Its Protection. The key to this apparent consensus of both conservative and liberal scholars, philosophers, and medical professionals lies in the fact that a DNR order does not entail the active killing of a patient, but can be seen as simply letting a suffering patient end their fight against a disease in peace. However, it is understandable if one is still unsettled by the thought of a doctor intentionally letting a patient die. After all, the core principle of the medical profession is to do everything humanly possible to save the lives of human beings. Thus, some argue that to condone even the allowance of death may lead to a gradual erosion of the ethics concerning death. This “slippery slope” argument contends that, although a DNR order seems like a merciful thing to do, it makes …killing “too easy,” so that doctors would turn to it for reasons of bias, greed, impatience, or frustration with a patient not doing well; that it would set a dangerous model for disturbed young persons not terminally ill; and that in a society marked by prejudice against the elderly, the disabled, racial minorities, and many others, and motivated by cost considerations in a system that does not guarantee equitable care, “choices” of death that are not really voluntary would be imposed on vulnerable persons… Although this fearful rhetoric may sound overly cynical and apocalyptic, it is not entirely without basis…

Can Doctors Make the Choice For You?

Doctors across the country are increasingly recommending and ordering DNR orders for older and sicker patients. According to Dr. Marjorie Zucker, "if a patient lacks decisional capability, a DNR order can be made by the patient's agent, or if there is no agent or surrogate, the decision can be made by the treating physician in concurrence with a second physician." Although this makes some sense – a terminally ill, dying patient with no chance of survival could be spared unnecessary pain – some doctors take the decision into their own hands, sometimes going so far as ignoring or bypassing the wishes of the patients or their loved ones. In 1995, Mrs. Catherine Gilgunn, a 71-year-old woman at Massachusetts General Hospital, experienced a series of repeated seizures that caused irreversible brain damage and left her in a coma, on life support, in the intensive care unit at Massachusetts General. Her physicians "recommended a DNR order. Mrs. Gilgunn's daughter objected, claiming that her mother had always wanted every medical procedure done to keep her alive as long as possible, and insisting her mother's wishes be honored. Ultimately, a DNR order was written… without the daughter's knowledge. Mrs. Gilgunn…was weaned from the ventilator and died shortly thereafter of respiratory distress." The case was brought to court, and the judge sided with Massachusetts General Hospital, ruling that the physicians were "not guilty of neglect or imposing emotional distress when they acted against Mrs. Gilgunn's acknowledged wishes to sustain her life." In The Least Worst Death, Margaret Battin notes that "official policies require that the patient – if competent – or his or her legal guardian be consulted before nonresuscitation orders are written. But such directives are by no means always followed." Battin then goes on to describe some alarming examples:

29


A physician reminded the granddaughter of an alert, competent eighty-nine-year-old nursing home patient, "You can always have 'do not resuscitate' orders written into her record." "Why don't you ask her if that's what she wants?" was the granddaughter's shocked reply. A cardiologist at a major university says, in contrast, that he would not make such a suggestion to the family – because he "wouldn't want to put them through that"; this physician writes no-code [i.e., DNR] orders on his own, without consulting either patient or family. In some places, no-code orders are written in pencil, so that they can be erased from records if desired [i.e., to escape any lawsuits]; or circumlocutions not intelligible to laypersons are used [to confuse patients] - "consult primary physician before initiating treatment." A more recent study had similar findings: a 1994/1995 study of 167 intensive-care units – specifically, all of the ICUs associated with U.S. training programs in 'critical care' medicine or 'pulmonary and critical care' medicine – found that in 75% of deaths, some form of care had been withheld or withdrawn. This means that nearly all of the nation's future doctors are being taught

to practice the withholding of care, even if only in certain circumstances. This may not bode well for the aging patients whose lives rest in those doctors' hands!

Conclusion

Today, the Do-Not-Resuscitate order is used pervasively in America's emergency rooms, intensive care units, and nursing homes. Although the DNR is seen by many as the merciful act of ending a dying person's suffering, there is the terrible possibility that a patient may be allowed to die against their wishes. Even though in many cases a DNR may be desired by a patient, in some cases doctors take matters into their own hands, perhaps causing the premature death of countless human beings. In a sense, this order is a microcosm of the power that a doctor holds over his patients: their lives, their very existence, rest entirely in his hands. Let us hope that the doctors and medical professionals of today and of the future will reflect upon their duties and decisions with the utmost care: although the DNR allows one to gently let a life slip away, there will never be a way to recover even a glimpse of a human soul.

References:
1. http://encyclopedia.laborlawtalk.com/Do_not_resuscitate
2. Basta, L.L. A Graceful Exit. Chapter 5, p. 82. Insight Books: New York, New York. 1996.
3. Battin, M.P. Ending Life: Ethics and the Way We Die. Chapter 2, pp. 48-49. Oxford University Press, Inc. New York. 2005.
4. New York Times, July 23, 1990, A13.
5. Pellegrino, E. and Faden, A. Jewish and Catholic Bioethics: An Ecumenical Dialogue. Section Two, p. 49. Georgetown University Press, Washington, D.C. 1999.
6. Cohen-Almagor, R. The Right to Die With Dignity. Chapter 5, pp. 111-112. Rutgers University Press, New Jersey. 2001.
7. Barry, R. Sanctity of Human Life and Its Protection. Chapter 6, p. 166. University Press of America, Lanham, MD. 2002.
8. Battin, M.P. The Least Worst Death. Chapter 9, pp. 193-194. Oxford University Press, Inc. New York. 1994.
9. Zucker, M.B. The Right to Die Debate. p. 129. Greenwood Press, Westport, CT. 1999.
10. Rubin, S.B. When Doctors Say No: The Battleground of Medical Futility. p. 27. Indiana University Press. Bloomington, Indiana. 1998.



New Approaches to the Stem Cell Controversy

Gandhi Vallarapu

While stem cell research has become a controversial issue in American politics, it often is not as polarized as the media tends to portray it. Stem cell research involves collecting stem cells from, among other places, the developing human embryo. These undifferentiated cells have the capability to differentiate into various other cell types. Because of this property, stem cells are believed to possess the ability to cure multiple illnesses that have plagued humanity, such as Alzheimer's. The most controversial aspect of stem cell research involves harvesting embryonic stem cells, and the debate lies in the bioethics of what constitutes a human life. By harvesting stem cells from blastocysts that could potentially become humans, researchers tread on the dangerous question of where life begins. Michael Gazzaniga, head of Dartmouth College's cognitive neuroscience program, believes, "Up to 14 days, you don't have a creature with a brain in it, so you can't even consider it to be, say, brain-dead." But what if stem cells could be harvested from natural sources that are not destined to become human beings? Scientists could then prevent the bioethical dilemma of whether they were killing a human from ever arising. William Hurlbut, a bioethicist from Stanford, adamantly believes he has found a solution to this perplexing problem. Stem cells are traditionally collected from a developmental stage of the embryo called the blastocyst. While not a human in a biological sense, this entity has the potential to become one. Many opponents of stem cell research believe that interrupting a fertilized egg cell is disrupting the process that


can create a new human. By preventing embryos from developing, scientists are "killing" potential individuals. Some opponents of stem cell research also take issue with researchers utilizing discarded blastocysts left over from in vitro fertility therapy; they feel this makes a developing human an instrument. Chris MacDonald, Ph.D., an ethicist at Dalhousie University's Department of Bioethics, states that many people are "[…]arguing that while the early embryo clearly is not a person (and so clearly does not warrant the ethical status of a human adult or child) it is a part of the human life-cycle, part of the human story, and so ought (like a human corpse) to be treated with a degree of respect." Using blastocysts is thus shrouded in controversy. What if stem cells could be obtained without utilizing a blastocyst that could potentially become a human? Mr. Hurlbut learned about Janet Rossant's experiment at the University of Toronto. Rossant intentionally formed mouse embryos that lacked CDX2, a gene needed to produce trophectoderm, the tissue that forms the outer layer of a blastocyst. Without trophectoderm, the embryo would fail. Despite this absence, stem cells could still be obtained from the modified embryo. What if human embryos could be produced lacking a critical gene? This would ensure their failure, thought Hurlbut. If an embryo were destined to fail, could it be used for stem cell research purposes without any moral consequences? At the March meeting of the President's Council on Bioethics, Hurlbut defended his innovative concept before such influential leaders as Bush's appointed chair, Leon Kass. Hurlbut proposed that researchers implant a genetic "glitch" into a single cell and allow it to multiply, and this would allow the cell to exponentially replicate the "glitch." By implanting the error early,

32

Hurlbut claims, "There's no embryo there." He believes these cells are being subjected to "preemptive alteration." He firmly believes there are no bioethical implications if an embryo that can viably become a fully functional human never exists; the "biological artifact" would fail to exist as an embryo. The council was not as accepting of Hurlbut's proposal; it was vehemently opposed. Notable critics from the council voiced their opinions. Paul McHugh, the psychiatrist-in-chief at Johns Hopkins, went as far as to proclaim, "I share the idea that it's a kind of pollution of the human genome." Various council members were even dubious as to how to define these new beings. Charles Krauthammer adamantly declared, "You call it an entity; I see it as a creature. That's why I'm repelled by it in principle." Council member James Q. Wilson declared, "We can't even adequately describe these things. We're inventing names as we go along." The infancy of this concept of a partial human does not rest well with the council members. More colloquially, McHugh denounced the idea by describing it as "a pollution of the human genome that I have a yuck factor towards." Hurlbut defends his initiative by claiming that the purpose of stem cell research is to be able to manipulate human tissue development. By creating "biological artifacts," we will be able to utilize stem cells to "grow human parts apart from bodies." He believes this is no more disturbing than utilizing disease for vaccination purposes. He firmly states that by introducing the defect at the single-cell stage, the embryo will never truly exist; it would not constitute embryonic murder. This fact circumvents any oppositional arguments based on the humanity of his procedure. The dilemma of the issue lies as to


whether it is bioethical to introduce the error and, if so, where the time boundary lies on implementing it. Who is to decide how long the embryo can develop before the inaccuracy is introduced? I believe that, if properly regulated, the concept of utilizing modified undeveloped embryos would be a viable source of stem cells. I agree with Hurlbut's proposal that the modification be implemented while the entity is still a single cell;


directly to a differentiated cell type.” In my view, inserting the error in the single cell stage of development would be bioethical. The creation of regulation would be a more enigmatic problem. With the potential to easily cross the line of immorality, altered nuclear transfer, as Hurlbert calls it, creates a huge conundrum as to who will regulate it. The federal government could regulate the research of altered nuclear transfer by strinpattern of a pluripo- gently regulating its procedures. The go directly to a dif- government could implement incentives for researchers to follow these moral guidelines in the form of research aid, and for those who refused to comply, heavy fines. The federal government could lead America on the path to greater medical understanding, without the destruction of embryos. Mr. Hurlbert might have created the solution to the great stem cell debate, and with greater research, time, and effort scientists will be able to fully understand the development of differentiated cells.

33


Science and the White House

As the discoveries of modern science create tremendous hope, they also lay vast ethical mine fields. As the genius of science extends the horizons of what we can do, we increasingly confront complex questions about what we should do.

- President George W. Bush

Bryce Olenczak

In August of 2001, President George W. Bush addressed the American people with these words when he announced his stance on the issue of embryonic stem cell research. The Bush administration believes that the ethical implications of scientific research should be weighed before studies are conducted, and that if the ethical pitfalls outweigh the potential progress, the research should not be conducted. Yet even when morally sound scientific research does exist, the White House seeks to suppress and alter its results if they conflict with its policies. The administration, which presents itself as a champion of bioethics, has made repeated attempts to undermine the quality and legitimacy of scientific research that is conducted and funded by the federal government and relevant to the health and well-being of the American people and the global environment. It is reasonable that policy makers wish to promote their objectives while pointing out the inadequacies of their opponents' goals. However, the Bush administration has sought to eliminate opposition by crippling the effectiveness of federal agencies that have published influential scientific findings at odds with White House policy. Credible scientific research that demonstrates flaws in the administration's views on global warming, environmental policy, and public health policy has been unethically undermined by several Bush administration tactics. Research has been wrongly altered prior to publication,

34

stonewalled during the publication process, misrepresented by political appointees, and publicly disregarded following publication. The credibility of the Environmental Protection Agency (EPA) has been compromised on several occasions through alterations of research findings by White House officials. In an annual report on global warming, findings by the American Petroleum Institute, an advocacy group for the petroleum industry, were substituted for scientific research conducted by the EPA. The EPA eventually decided to remove the entire section on global warming rather than publish prejudiced research findings. In another, similar instance, it was discovered that rules proposed by the EPA for reductions in mercury emissions by coal-burning power plants contained several paragraphs from legal documents prepared by the coal power industry. Often, additions to EPA documents are made by political appointees to the agency, even if the appointees have no scientific background and did not participate in the studies. The Bush administration has made extensive use of political appointees at federal research agencies to undermine the quality of supposedly objective scientific findings. Political appointees of the Bush administration have also attempted to suppress the publication of research, particularly findings that are critical of agriculture and industry. Under pressure from the Bush White House, administrators at the U.S. Department of Agriculture (USDA) went so far


as to issue an internal memo that instructed its scientists to seek the approval of superiors before publishing research that identified the negative impact of agricultural and industrial practices on the environment and human health. Even more threatening to the nation's scientific standards, the White House's Office of Management and Budget proposed changes to the practice of peer review. Instead of having one's research reviewed by federally funded colleagues prior to publication, the proposal called for review by industry scientists whenever scientific assessments would have a significant cost impact. This proposal was vehemently opposed by the scientific community, and its suggestions were scaled back. Had it been instated, it would have given industry the ability to oppose the publication of articles that are critical of its practices. Not only has the effectiveness of agencies charged with conducting scientific research been diminished by altering and suppressing research, it has also been reduced by the appointment of advisors who favor personal beliefs irrespective of scientific findings. The EPA, USDA, Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), and Department of Health and Human Services (HHS) all have presidential advisors who are charged with representing the views of scientists. However, the advisor is not selected by the scientists of the agency, but by the agency director, who is appointed by the president. Advisor positions exist to convey the general consensus of the scientific community to the president, as well as to report new findings that are significant to White House policy. However, this method of appointment allows a means by which the president will hear only research supporting his views, with any alternative

search hidden or altered. During several high profile press conferences and speeches President Bush and White House representatives have minimized the significance of research findings that are critical of government policy. These attempts to create the perception that no general consensus exists among scientific experts is a gross misrepresentation of the truth with regards to global warming and other environmental polices. Recommendations made by the National Academy of Sciences and the American Geophysical Union about global warming have been ignored as scientifically inadequate, and the White House has repeatedly stated that not enough research exists to warrant reduction in fossil fuel emissions. Even when critical research has overcome the internal obstacles of alteration, suppression, and misrepresentation it is vulnerable in the public arena to the misperceptions furthered by the Bush Administration. It is in the best interest of the American people that independent, credible data be available to the public so that citizens can be aware of policy that has negative health and environmental consequences. Yet, the Bush administration has effectively hindered the public’s access to and understanding of scientific findings reached by government agencies. It is certainly disconcerting that the White House has adamantly portrayed itself as a guardian of bioethics even though it is in fact the greatest threat to ethics of government funded scientific research.

References:
1. Smith, Daniel. "Political Science." The New York Times, 4 September 2005.
2. ACLU. "Science Under Siege." 20 June 2005. http://www.aclu.org/Privacy/Privacy.cfm?ID=18534&c=39.
3. Union of Concerned Scientists. "Scientific Integrity in Policymaking: An Investigation into the Bush Administration's Misuse of Science." March 2004. www.ucusa.org.



What is a Person?

A Multifaceted Look at Personhood

Jason Liebowitz



During March 2005, the entire nation and much of the world was gripped by the story of Terri Schiavo, a Florida woman on life support whose feeding tube had been removed for the third time. Since 1998, when her husband first asked the courts to decide whether or not life support should be removed, Mrs. Schiavo's future had been the topic of many legal briefs as the family, the physicians, and the government quarreled over what course of action to take. With the passing of Terri Schiavo appearing to be imminent, the conflict erupted with widespread media coverage, protests outside the hospice, and congressional legislation challenging the courts' labeling of Mrs. Schiavo's condition as a persistent vegetative state (PVS). Arguments raged over patient autonomy, the quality of life, legal authority, and nearly every other relatively germane topic. However, instead of focusing on all the controversy, the Schiavo case can be seen as a rare opportunity to philosophize on a question that likely comes to mind, an inquiry that always has begged, and continues to beg, an answer: what is a person? To question anyone's personhood may seem confusing and unnecessary to some, and even morbid and offensive to others. These natural tendencies to cringe at the topic may contribute to its exclusion from public debate. Yet to simply accept personhood as a given is not only misleading, but also disrespectful to the countless scholars who have examined the issue from nearly every perspective. The problem of personhood is made timely by Mrs. Schiavo's circumstances, but the true benefit of discussing this matter lies in tackling a subject that humankind must face every day. This article does not hope to provide an absolute answer to the question of personhood, but it will seek to provide many different possible interpretations as seen in research, treatises, and other mediums.

History of Personhood Since the practice of examining personhood is one with roots centuries in the past, it is worthwhile to discuss the views of some philosophers of antiquity. Plato writes that the soul, as opposed to the body, is at the very core of human personhood. The soul, in his opinion, provides independent self-determination, a concept which is best illustrated through analogy. The “chariot” of the soul is led by two horses, one representing noble spirit and the other, malfeasant appetite. It is thus left to the driver of the chariot to control the two horses, a symbolic gesture of reasoning that shows how Plato’s concept of personhood is very much predicated on self-awareness and the analytical ability to decide what is the moral course of action. Aristotle also directs his attention to the soul, which he says all living organisms possess. However, what makes the human soul different from those of animals is the expression of imagination and intellect, whether that intellect exists before encountering certain experiences (passive intellect) or after having these experiences (active intellect). René Descartes simply states that human beings are persons with a personality based on “cognitive power,” which is the capacity to combine imagination, sensation, understanding, and memory in an expression of free will. These are but a few perspectives on personhood from long ago, yet many of the core criteria for personhood are shared among these beliefs and are later found in more contemporary writings.

Brain Death

Before delving into the complex metaphysics of personhood, perhaps it is best to first understand how death is legally defined in the United States and to see how lacking personhood is not necessarily the same as being deceased. According to the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, neurological death is equivalent to "the cessation of the vital functions of the entire brain—and not merely the portions thereof, such as those responsible for cognitive functions" (262). Essentially, such a definition is referenced as "whole-brain death." The Commission's writings make one thing very clear: the absence of such abilities as those of reasoning and analysis, which are very much a part of the discourse on personhood, does not alone imply death. In other words, a patient is still very much alive so long as spontaneous (not machine-aided) heartbeat and respiration are still occurring, since this indicates that the brain stem and neocortex are still functioning. This point must be understood so as not to confuse the discussion of personhood with policy debates or political implications; personhood and being alive may be independent of one another or fundamentally connected, but this article simply observes the more abstract criteria for personhood.


Personhood Independent of Species Can only human beings be considered persons? Although the intuitive answer is yes, philosophers like Michael Tooley reject such interchangeability of terms. It is not an uncommon claim that what separates people from all other organisms is our intelligence—so is this the person’s defining characteristic? In his article Computing Machinery and Intelligence, the English mathematician Alan Turing outlines his “Turing Test,” in which a human interrogator is connected through a terminal to another human and, separately, to a machine. The interrogator cannot see, hear, or in any way interact with the human and the machine except by asking questions via the terminal. If, in the end, the interrogator cannot tell the difference between the human and the machine, then the machine is “intelligent.” Under the intelligence definition of personhood, the machine is now a person, and it becomes clear that any entity that can meet this same objective is also a person. Similarly, Dr. John Harris, professor of bioethics at the University of Manchester Institute of Medicine, explains that, if humans were to search for persons inhabiting other planets, it would not be very reasonable to look for organisms that are identical to humans or even of the same organic composition. Instead, humans must think more introspectively and consider how they would defend their personhood if it were ever questioned. Dr. Harris reflects on philosopher John Locke’s criteria for personhood (i.e. intelligence, reasoning capabilities, self-consciousness, etc.) and concludes that “anyone capable of valuing existence, whether they do or not, is a person in this sense,” including intergalactic visitors (303).

Implications of Personhood

Many of the greatest bioethical quandaries today may deal with moral obligations and rights on the surface, but more often than not the issue of personhood is found directly at the core of the dilemma. Some of the most conflicted questions of personhood are related to an especially controversial subject, namely that of abortion. Depending on how strict the definition of personhood is, the fetus may or may not be considered a person. Of course, some people consider the fetus a potential person as opposed to a normal person, but the issue of personhood still applies. It is important to note that beyond the fetus, the question of personhood may still apply to newborn or young children (who may or may not be "self-conscious"). At the other end of life, those suffering from debilitating dementia face similar issues about being persons, even if these individuals did not face these concerns in previous years. If PVS victims and coma patients are added into the discussion, it becomes obvious that personhood is not simply a question of "what" but maybe more specifically of "when." In other words, these various scenarios address the issues of former

and prior persons and reveal the potential for a single individual to be considered a person during one part of life and not a person during another part of life. What about cases in which, arguably, the mental faculties necessary to be deemed a person never develop? Do the severely mentally disabled qualify as persons under definitions that rely heavily on cognitive abilities? These individuals also reveal a potential fallacy of a key component in the personhood definition: external observers having to consider the internal self-awareness of humans other than themselves. The ability to appreciate one’s own consciousness is not so easily measured and, in reality, is not truly apparent to anyone besides the actual individual.

So Who Cares? If anything is apparent at the end of this discussion of personhood, it is that the conversation itself is convoluted, contentious, and not so easily ended. However, since considerations of personhood are relevant to bioethical issues from the beginning of life (fetuses, stem cells) to the end of life (dementia, euthanasia), the topic cannot simply be ignored. In fact, when issues of morality come into play, a whole new set of questions arise. Clearly, moral status is laid at the level of personhood, but is this acceptable considering that there is no clear consensus on what a person is? How can treatment of others be deemed moral or immoral when it may not be apparent that these individuals are persons? Moreover, why should moral status be given only to persons? On this point, philosopher Peter Singer argues that a moral wrong is being committed against animals that are used in experimentation, especially since many of these creatures meet more of the criteria for personhood than newborns and the severely mentally handicapped. If many definitions of personhood do not require that an organism be human to qualify as a person, can the subjugation of animals, or what Singer deems “speciesism,” be justified? Perhaps society should not organize its policy around a horizontal, indeterminate threshold called “personhood,” but what is the alternative? With this in mind, one may ask, why even bother talking about personhood if no conclusion can be reached? Well, maybe the asking of this question is the greatest cause for continuing the discourse over personhood. Whether or not philosophers ever unite on the issue of personhood, continually considering this matter is the most precious use of the abilities a person possesses. When faced with tragedies that threaten somebody’s personhood, realize how important it is not to waste the ability to apply reason, analyze the situation, and articulate opinions, independent of whether they are labeled right or wrong. It is terrible that the issue of personhood is generally ignored by the public until a sensational story puts it on magazine covers and television, but perhaps this is nature’s way of reminding humanity that the issue of personhood is one of the most troublesome and, thus, one of the most essential to the life fully examined.




Exploiting The Body's Army: Generating Antibodies to Study Cardiac Muscle Development

Jon Edward

The Vredenburg Scholarship:

As a Vredenburg Scholar for the summer of 2005, I worked at the Leiden University Medical Center in Leiden, the Netherlands under the direction of Dr. Douwe Atsma and Dr. Twan de Vries. The Whiting School of Engineering awards the Vredenburg Scholarship annually to sophomore and junior engineering students at Johns Hopkins to apply their engineering, technology, and applied science skills and training in an international setting. The scholarship covers most expenses associated with the international experience and gives students the opportunity to participate in international education, investigation, and collaboration. During my three-month stay in the Netherlands, I was able to visit 5 major cities and saw everything from the Eiffel Tower in Paris, to Antoni Gaudi’s Sagrada Familia in Barcelona, to the ruins of Ancient Rome. It was truly an amazing cultural and learning experience and I recommend that all of you apply.

Now To Antibodies...

Antibodies, also known as immunoglobulins, are present in our blood and tissue fluids as well as in many bodily secretions. They are responsible for binding to antigens and helping the body fight infection. Outside the body, they are widely used in medical diagnostics. Blood tests can detect autoantibodies associated with autoimmune disorders such as multiple sclerosis and myasthenia gravis. Additionally, genetically modified monoclonal antibodies are being used in therapies to treat diseases ranging from rheumatoid arthritis to some forms of cancer. In medical research, monoclonal antibodies are used in a number of assays. They can be fluorescently labeled and used to identify proteins and cell secretions. These antibodies can then be used to track the movement and expression of proteins in animal models.

A Novel Method for Antibody Production:

Traditional antibody production involves fusing an antibody-forming cell from the spleen of an animal to a tumor cell in culture to produce a hybridoma. By allowing the hybridoma to multiply in culture, it is possible to produce a population of cells that produce antibody molecules. These antibodies are tested, and the most effective antibody is chosen and mass-produced in cell culture or in mice. The drawback to this technique is that it is very cumbersome and labor-intensive, so Dr. Atsma's and Dr. de Vries' laboratories have adopted a new technique called phage display, or biopanning. Biopanning uses bacterial viruses known as phages to produce and select synthetic antibodies. The phages have been genetically engineered so that an antibody is attached to the outside of the phage and the gene that encodes the antibody is inside the phage. In order to select the phage with the desired antibody, a phage library containing over a billion different antibodies is washed over an ELISA plate containing target molecules. These target molecules are the proteins against which antibodies are being generated. The correct antibody and its phage will bind to the target molecules while the other phages are washed away. The DNA within this phage can then be used to produce more of the desired antibody for use in research. In this study, the biopanning technique was applied to the production of antibodies against human myocardin, a protein that has been discovered to regulate transcription in developing cardiac and smooth muscle. Although the ability of myocardin to activate smooth muscle-specific genes has been studied extensively, the protein's effect on heart muscle genes has not yet been clearly defined. As a result, Dr. Atsma's and Dr. de Vries' laboratories have been working to determine the precise roles of myocardin in cardiac muscle development.

Key Terms:

Monoclonal antibodies - antibodies that are identical because they were produced by one type of immune cell, all clones of a single parent cell.
Hybridoma - a fused hybrid cell that can multiply rapidly and indefinitely to produce large amounts of antibodies.
ELISA - a sensitive assay that uses an enzyme linked to an antibody or antigen as a marker for the detection of a specific protein, especially an antigen or antibody.
Ligation - the process of joining together chemical chains (as of DNA or protein). In this context, each myocardin fragment was joined with a pET-28 vector.
Vector - a bacteriophage, plasmid, or other agent that transfers genetic material from one cell to another.
His-tag - an amino acid motif in proteins that consists of at least six histidine (His) residues, often at the N- or C-terminus of the protein. His-tags are often used for affinity purification of polyhistidine-tagged recombinant proteins expressed in Escherichia coli.
T7 promoter - a DNA sequence in the pET-28 vector that drives transcription of the inserted gene; its activity is controlled by the lacI repressor.
Plasmid - a circular, double-stranded unit of DNA that replicates within a cell independently of the chromosomal DNA. Plasmids are most often found in bacteria and are used in recombinant DNA research to transfer genes between cells.
Transformation - the alteration of a bacterial cell caused by the transfer of DNA from another bacterial cell (the plasmid).
Tween - a nonionic detergent used to solubilize proteins.
Dimerization - the process by which two molecules of the same chemical composition are linked together.
Immunoprecipitation - the technique of precipitating an antigen out of solution using an antibody specific to that antigen. This process can be used to identify protein complexes present in cell extracts by targeting any one of the proteins believed to be in the complex.

Figure 1. Schematic of the full-length myocardin protein. Regions of high amino acid divergence are labeled, as are the final five constructs used in this study. Constructs were chosen such that each region of high divergence was included twice; this was not possible for the first region. Note the leucine zipper region (L zipper), which is discussed later.

Gearing up for Production:

In order to arrive at the actual biopanning phase (the last step in this technique), target molecules must be made. In this study, the target consisted of fragments of the myocardin protein. Initially it was thought that the entire myocardin protein could be used to make the target molecules, but previous studies proved this idea unsuccessful because the protein is too large. As a result, seven constructs were created, each containing regions of the myocardin protein thought to be most important in generating an antibody specific to myocardin. In creating these constructs, the goal was to find regions of high amino acid divergence, since these regions were most likely to increase antibody specificity. In the end, some of these constructs proved unsuccessful and had to be redesigned, resulting in the final five constructs shown in Figure 1. The DNA encoding the myocardin fragments in each construct was isolated from the full-length myocardin sequence by enzyme digestion and ligated into pET-28 vectors for replication.

Figure 2. Induction of target myocardin proteins using IPTG for constructs HH1, HH2, and HH3. For each data set, the leftmost control was non-induced and the control to the right was induced with IPTG. Expected protein bands were 24.6 kDa for HH1.29.2, 32.9 kDa for HH3.25.2, and 55.4 kDa for HH2.2.2. Expected bands are estimates calculated from sequencing data of the myocardin protein. Comparing controls (black arrows) to experimental data (right arrows), correct protein induction is seen for all three constructs. Since lanes 9 (HH3) and 5 (HH2) contain proteins in the insoluble fraction, they were treated with urea before purification of His-tags. Lanes with an X represent lanes where spillage occurred. Data from these lanes was not included in this study. Data for constructs EE1 and EE2 is not shown.

Figure 3. His-tag purification of myocardin protein constructs HH1, HH2, and HH3. Lanes 2 (HH1), 2 (HH2), and 5 (HH3) are non-induced controls. His-tag purification creates a wash phase and a protein phase. As expected, all three constructs show stimulation in the protein phase. Constructs HH2 and HH3 were treated with urea prior to purification, although the protein was most strongly expressed in the native state for HH3. Protein bands for HH1 and HH3 match predicted sizes. The protein for HH2 is approximately double the expected size of 55.4 kDa, most likely because of dimerization at the leucine zipper region. Data for constructs EE1 and EE2 are not shown.

The pET-28 vector contains an N-terminal His-tag and a T7 promoter for selection purposes, as well as the lacI repressor, which inhibits protein production until stimulated by IPTG (isopropyl-beta-D-thiogalactopyranoside). IPTG is a chemical that binds to the lacI repressor, thereby inactivating it and allowing protein production. The resulting plasmids were transformed into competent E. coli bacterial cells called DH5α to grow more copies of the plasmid. Once plasmids containing each construct were obtained, they were transformed into BL21 competent E. coli cells for further growth; this strain is needed for stimulation of myocardin protein production using IPTG (see Fig. 2).


In order to isolate the target protein from other proteins that may have been produced, a His-tag purification was performed, since only the pET vectors containing the myocardin fragments had His-tags. Correct results were confirmed by loading the protein onto an SDS-PAGE gel and staining with Coomassie Blue stain (see Fig. 3).

Interpreting the Results:

As seen in Figure 2, two controls were used for the stimulation of myocardin protein with IPTG. The first control was a non-induced control, taken before stimulation with IPTG; as a result, this control should not contain the expected protein, as the data confirm. The second control was an induced control, taken after stimulation with IPTG. As Figure 2 shows, this control contains the expected protein for constructs HH1, HH2, and HH3. Stimulating the target proteins with IPTG produces both a soluble and an insoluble fraction. The myocardin proteins in the soluble fraction (the HH1 construct and part of HH3) can be isolated by a His-tag purification, but the proteins in the insoluble fraction (the HH2 construct and the other part of HH3) must first be dissolved. To dissolve the insoluble fraction, a Tween solution was used, followed by extraction with urea if necessary. The urea extraction was avoided where possible because urea denatures the protein, which could cause problems with protein folding later on during the biopanning phase of antibody production. As seen in the His-tag purifications (Figure 3), proteins were expressed at approximately 24.6 kDa and 32.9 kDa for constructs HH1 and HH3 respectively, matching expected results. The protein for the HH2 construct, however, ran at approximately double (~110 kDa) the expected size of 55.4 kDa. DNA sequencing of this construct showed only a single insert and no mutations. From prior publications, the leucine zipper region of proteins is known to cause dimerization: the interaction of alpha helices with leucines along this region forms a hydrophobic strip, thereby inducing dimerization. Since the HH2 construct contains the leucine zipper region (see the L zipper region of myocardin isoform B in Fig. 1) and DNA sequencing showed no abnormalities, it has been hypothesized that the double-sized band is due to dimerization at the leucine zipper region of the myocardin protein.
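The dimerization inference above boils down to a simple size comparison: a band at roughly twice the predicted monomer mass, with clean sequencing, points to a dimer rather than a cloning artifact. The short sketch below illustrates that check using the band sizes reported in the text; the 15% tolerance is an arbitrary choice for this illustration, not a threshold used in the original analysis.

```python
# Illustrative check of observed vs. expected SDS-PAGE band sizes (kDa values
# taken from the text above). The 15% tolerance is an arbitrary choice for
# this sketch, not a criterion from the original study.
expected_kda = {"HH1": 24.6, "HH3": 32.9, "HH2": 55.4}
observed_kda = {"HH1": 24.6, "HH3": 32.9, "HH2": 110.0}

for construct, expected in expected_kda.items():
    ratio = observed_kda[construct] / expected
    if abs(ratio - 1.0) <= 0.15:
        note = "matches the expected monomer size"
    elif abs(ratio - 2.0) <= 0.30:
        note = "about twice the expected size -- consistent with dimerization"
    else:
        note = "unexpected size -- recheck sequencing and purification"
    print(f"{construct}: observed/expected = {ratio:.2f} ({note})")
```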

What The Future Holds:

Now that the target molecules have been made, the final step will, of course, be biopanning. Unfortunately, my time in the Netherlands expired before this step could be carried out, so another member of the laboratory will take over. The synthetic antibodies used in biopanning will be camelid antibodies derived from camels. An advantage of these antibodies is that, unlike traditional antibodies, they contain only two heavy chains and no light chains. This means that the antigen-binding domain of the antibody consists only of the variable domain of the heavy chain, which makes it easier for proteins to bind to an antibody during biopanning. Once the antibody has been produced, it will be used to conduct immunoprecipitation assays and to follow myocardin expression in mouse embryos. These studies are part of the larger goal to determine the precise roles of myocardin in cardiac muscle development, in the hope of developing a stem cell-derived cure for heart disease. Perhaps, someday, the findings of such related projects could make heart disease a thing of the past, much like polio and smallpox are today.

Acknowledgements:

I would like to thank Dr. Douwe Atsma and Dr. Twan de Vries at the Leiden University Medical Center in Leiden, the Netherlands for giving me the opportunity to work in their lab this past summer. Additionally I would like to thank John van Tuyn for his support and guidance, without which this study would not have been possible.

References:

1. Mack, C.P., Hinson, J.S. (2005) Regulation of smooth muscle differentiation by the myocardin family of serum response factor co-factors. J. Thrombosis and Haemostasis.
2. van Koningsbruggen, S., de Haard, H., de Kievit, P., Dirks, R.W., van Remoortere, A., Groot, A.J., van Engelen, B.G.M., den Dunen, J.T., Verrips, C.T., Frants, R.R., van der Maarel, S.M. (2003) Llama-derived phage display antibodies in the dissection of the human disease oculopharyngeal muscular dystrophy. J. Immunological Methods, 149-161.
3. van Tuyn, J., Knaan-Shanzer, S., van de Watering, M., de Graaf, M., van der Laarse, A., Schalij, M.J., van der Wall, E.E., de Vries, A.F., Atsma, D.E. (2005) Activation of cardiac and smooth muscle-specific genes in primary human cells after forced expression of human myocardin. Cardiovascular Research.


Influence of Electric Fields on the Interaction of Bubbles and Fluid Surfaces

Chris Kovalchick

Introduction:

Two-phase flow systems and boiling processes have been a subject of great interest over the past 20 years, particularly for applications in the aerospace industry. Through the utilization of electric fields, boiling bubbles can be moved away from a surface and deformed. Since boiling is one of the most efficient means of cooling, it has become a highly desirable heat transfer application due to its pertinence in microgravity and non-terrestrial environments. Numerous investigations have been dedicated to the research of electrohydrodynamic (EHD) effects, the influence of electric fields on interfacial surfaces. More often than not, these experiments are carried out in terrestrial conditions, with the exception of a few scientists who have run experiments in weightless environments to obtain more application-oriented results. Dhir (1) and Herman (2) have conducted experiments in weightless environments aboard parabolic aircraft through the sponsorship of NASA. Low-gravity investigations provide more practical insight into the eventual application of such work, mainly because the EHD effect is more pronounced in these conditions. In low-gravity situations, the buoyancy force, the greatest influence on bubble behavior in terrestrial conditions, is significantly decreased. In past investigations, results proved inconclusive due to the lack of electric field strength needed to overcome the opposing buoyancy force. Herman et al. (3) investigated the effect of an electric field on the coalescence properties of two bubbles, as well as the effect the field has on key bubble dimensions such as diameter, shape, and aspect ratio. In the study presented here, the coalescence of bubbles at the surface of a stagnant, isothermal liquid is investigated. The objective is to determine whether the application of an electric field across the surface of the fluid has a pronounced effect on bubble behavior at this surface. Bubbles are injected into the liquid contained in a vertical test column, with two electrodes attached to the column at the fluid surface level. The working fluids for this study are water and Shell Ondina 917 oil. It is proposed that the behavior will differ between water and oil in the absence of an electric field, and that when an electric field is generated, the behavior of the bubbles will differ from that observed without a field for each individual fluid.

Physical Background:

As a bubble rises in fluid under the influence of an applied electric field in terrestrial conditions, several forces act on the bubble. The applied electric field is designed to serve as a substitute for the buoyancy force, which is needed to counteract the gravitational force; an electric field can be used in this way because both electric field and gravitational forces involve no physical contact. The coalescence of bubbles in a stagnant, isothermal liquid depends upon a balance of inertial and viscous forces, surface tension, buoyancy forces, and other body forces imposed by an applied electric or sound field. A significant amount of research has been devoted to the derivation of a formal equation for the electric body force acting on a fluid. The generally accepted form, given by Grassi and Di Marco (4) as well as Landau and Lifshitz (5), is

F_e = ρ_f E - (1/2) E² ∇ε + (1/2) ∇[ ρ (∂ε/∂ρ)_T E² ]     (Eq. 1)

where E is the electric field intensity, ε the electrical permittivity of the medium, ρ_f the free charge density, and ρ the mass density. The first term in this relationship deals with the electric force exerted on the bubble resulting from the existence of free charges; this term is considered negligible for this particular experiment. The second term results from the forces present at the bubble interface due to spatial variation in permittivity, while the third term deals with the effects resulting from a non-uniform electric field. Only the first term depends on the sign of the electric field, as can be seen in the above equation; the remaining two terms depend on the square of the electric field and are therefore independent of the field polarity.

The characteristic parameter used to compare data in this investigation is the Weber number, We = ρ_l u² D_B / σ, a ratio of inertial forces to surface tension forces. It has been proposed by Doubliez (6) and Duineveld (7) that if the Weber number calculated from the approach velocity is above a critical value, which depends upon the bubble size and the initial inter-site distance before contact with the surface, the bubble will bounce rather than coalesce. This theory is generally accepted with water as the working fluid. This investigation also tests this theory with oil as the working fluid.

Bubbles, Bubbles, Bubbles

What's so fascinating about bubbles? Is it the precise spherical shape, or perhaps the fragility of their microscopically thin soap films? In either case, bubbles are nature's way of enclosing certain volumes in optimal shapes. The liquid film of the bubble is stretchy and conforms to the structure with the least surface energy. This is accomplished by enclosing the gas within the bubble with the least possible surface area, which in most cases is a simple sphere. However, when multiple bubbles coalesce, the group of bubbles will also try to reduce the total surface area of the structure by merging to share a common wall. Bubbles are formed easily in a solution of soap and water. Because the greasy, hydrophobic ends of the soap molecules do not want to be in contact with the water, they find their way to the surface and push their nonpolar ends to the exterior. As a result, the soap film is protected from evaporation, because grease does not evaporate. This substantially prolongs the life of a bubble, since one of the most common ways that a bubble pops is by evaporation of its water content. Bubbles may also pop as a result of air turbulence or contact with a dry surface. --Divya Sambandan
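To make the Weber number criterion concrete, here is a minimal sketch that evaluates We = ρ_l u² D_B / σ for an air bubble approaching a water surface. The water properties are standard room-temperature values; the bubble diameter and approach velocity are illustrative numbers only, not measurements from this study.

```python
# Illustrative Weber number for an air bubble approaching a water surface.
# Water properties are standard room-temperature values; the diameter and
# approach velocity are example inputs, not data from this experiment.
rho_l = 998.0    # liquid density, kg/m^3 (water, ~20 C)
sigma = 0.072    # air-water surface tension, N/m
D_B = 3.0e-3     # bubble diameter, m (example)
u = 0.08         # approach velocity, m/s (example)

We = rho_l * u**2 * D_B / sigma
print(f"We = {We:.2f}")  # ~0.27; compared against a critical value to
                         # predict bouncing versus coalescence
```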

About the Test Apparatus:

Figure 1: Test apparatus (a, b).

The setup for this investigation consists of a test cell, affixed with two electrodes and filled with a liquid into which air bubbles are generated, along with the image processing and data acquisition equipment needed to run the experiment (see Fig. 1). This equipment includes a high-speed camera attached to a television monitor and personal computer (PC) used to record and analyze bubble behavior at the surface of the liquid, a high-voltage power supply for generating an electric field, and pressure regulation equipment for producing and monitoring the air pressure used to create bubbles in the test cell. The electric field generated for this investigation is designed to be a uniform field. The meniscus of the stagnant fluid in the test cell remains stable in the absence of an electric field. However, when the voltage is applied at the surface, the meniscus becomes slightly more concave and more active due to the currents generated across the surface. This creates what could be considered analogous to a non-uniform electric field. The difference in the surface between the uncharged state and that with applied voltage can be seen in Fig. 2. Although the surface is not level, implying a non-uniform field in theory, the degree of non-uniformity is not severe enough to drastically alter the results. Therefore, the field is taken to be uniform for this investigation. The voltage, with a DC current, is produced by a Miles Hivolt Ltd. model AX351 generator. Voltage application is controlled by a C++ program written and compiled at the Institute for Process Engineering, Hannover. The program is run through a PC with a safety shut-down feature. All tests are run at 35 kV, with the presumption that any noticeable differences in results for the two fluids can be seen at the highest voltage level. The program allows for the voltage to be increased in 7 kV increments.

Figure 2: Fluid surface for oil with a) no voltage b) applied voltage.

The Shocking Results:

For each set of conditions, a sufficient number of experimental trials are conducted in order to see a concrete trend over as large a variance in the significant dimensions as possible. With the given setup, only the size of the injected bubble can be controlled. The velocity cannot be controlled and is dependent on both the diameter of the bubble and the working fluid in which it is rising; the velocity remains roughly the same for all bubbles created in a given fluid. Trials are conducted both with and without the application of an electric field.



With water as the working fluid, all bubbles coalesce in all cases, both with and without the application of voltage. Due to the relatively low viscosity of water, the spherical shape of the bubble is not held constant throughout the rise in the column, creating an unstable surface. These fluctuations in dimension account for the lack of a pronounced difference between the two cases. Additionally, when voltage is applied, the electric field force is not strong enough to overcome buoyancy forces. Therefore, the application of voltage has no influence on the coalescence and bouncing of bubbles in water. Because the results here show only coalescence (see Fig. 3), there exists no critical Weber number for this investigation. This can be attributed to the restriction on the size of the test column used for these experiments. In agreement with past investigations, the detachment of bubbles could possibly be seen for larger diameters if such experiments were conducted with a larger test cell. For bubble rise in oil with no applied electric field, bubbles bounce for smaller diameters and coalesce for larger diameters (see Fig. 4). A critical Weber number in the vicinity of 0.4-0.6 can be seen from this graph. The bubble behavior above and below this critical value is the opposite of the trends observed in past experiments with water as the working fluid. This can be attributed to the difference in surface tension and viscosity between oil and water. In water, the film between the surface and the bubble just before surface contact drains very easily, due to a low viscosity; in oil, a higher viscosity prevents this from occurring. When an electric field is applied, the critical Weber number observed in the previous case (absence of electric field) is annulled (see Fig. 5). It is clear that the application of a voltage alters all effects, making them entirely independent of one another. Both coalescence and bouncing are observed for all cases, independent of diameter, velocity, and Weber number. For both cases with oil as the working fluid, the Weber number is shown to be a better criterion for observing the coalescence and bouncing behavior of bubbles. Figure 6 and Figure 7 show the graphs for both cases with the bubble diameter plotted against the approach velocity. Since the velocity remains nearly uniform for all cases, approach velocity is not a conclusive way to analyze bubble behavior. The other previously referenced plots, with bubble diameter plotted against the Weber number, show a distinctly defined linear relationship. The case with an applied electric field shows a linear relationship with a greater deviation from the best-fit linear approximation.

In order to investigate the effects of an electric field on the coalescence behavior of a fully-developed air bubble at the surface of an isothermal, stagnant liquid, experiments are carried out under terrestrial conditions with water and Shell Ondina 917 oil as the working fluids. Bubble behavior is studied in the two fluids both with and without the application of an electric field. In water, there exists no difference in bubble behavior between experiments conducted with and without electric field application. This can be attributed to the lack of electric field strength needed to overcome the buoyancy forces, as well as an overbearing fluctuation in bubble dimension throughout the rising process, due to low viscosity. Unless a stronger field is applied, no pronounced effect can be visualized. In oil, contrasting effects are observed with regard to those observed in water.

Figure 3: Weber number as a function of bubble diameter, no voltage, air-water.
Figure 4: Weber number as a function of bubble diameter, no voltage, air-oil.

The coalescence and bouncing behavior of the bubbles is the reverse of the trends seen with water in the absence of an electric field. The critical Weber number separating coalescence from bouncing that exists for water exists, in this state, for oil. When an electric field is imposed over the fluid surface, the results become scattered, and a critical value for the Weber number fails to exist. In both this investigation and past investigations, it is difficult to create an electric field with sufficient strength to visualize any pronounced differences between the electric-field and non-electric-field states, specifically for the experiments with water as the working fluid. The results contained herein for oil as the working fluid show a different behavior for the bubbles with the application of an electric field, which warrants further investigation in the future. Although an effect is shown to exist, it is still not known why the presence of an electric field nullifies the existence of a critical value for the Weber number, or to what extent different levels of voltage application affect this phenomenon. More time should also be spent investigating the level of non-uniformity of the electric field, and how this can be monitored more carefully.

Figure 5: Weber number as a function of bubble diameter, voltage, air-oil.
Figure 6: Approach velocity as a function of bubble diameter, no voltage, air-oil.
Figure 7: Approach velocity as a function of bubble diameter, voltage, air-oil.

A Final Charge:

With the continuous and rapid progression of aerospace technology, it is becoming increasingly important to develop a comprehensive understanding of all phenomena associated with these applications. While the investigation presented here is by no means an absolute analysis of the applications, due to the terrestrial environment in which it is conducted, it does provide significant insight into the results that can be obtained in low-gravity conditions. In past investigations conducted in terrestrial conditions, the main handicap for usable data was a lack of electric field strength. However, this investigation shows that it is indeed possible to create an electric field of sufficient strength to effectively obtain data. As this technique is continually improved, both in terrestrial and low-gravity conditions, we will be able to further understand the properties of electrohydrodynamics and apply them in a way that will benefit the aerospace industry for years to come.

Key Terms:

[1] Electrohydrodynamics - the study of the influence of electric fields on interfacial surfaces.
[2] Electrical permittivity - the ability of a material to store charge from an electric field without conducting electricity.
[3] Weber number - a dimensionless coefficient that relates inertial forces to surface tension forces.
[4] Dielectrophoresis - the translational motion of neutral matter caused by polarization effects in a non-uniform electric field.
[5] Terrestrial conditions - the state in which gravity produces its normal acceleration of 9.81 m/s² (e.g., on the surface of the Earth).
[6] Electric field - an effect produced by an electric charge that exerts a force on charged objects in its vicinity.
[7] Coalescence - in this investigation, the phenomenon of a bubble coming into contact with a fluid surface and breaking, thus merging with the fluid surface.
[8] Detachment - in this investigation, the phenomenon of a bubble coming into contact with a fluid surface and bouncing off, remaining independent of the fluid surface.

Acknowledgments:

The experimental investigations in this project were conducted at the Institute for Process Engineering, University of Hannover, Germany, under the direction of Prof. Dr.-Ing. D. Mewes in 2004. The study was made possible through collaboration with Prof. C. Herman, the Johns Hopkins University, Baltimore, Maryland, U.S.A. Specific project assistance was provided by Dipl.-Ing. Lars Reime and Dipl.-Ing. Dierk Wiemann. Funding for the project was made possible through the Vredenburg Fellowship, sponsored by the Whiting School of Engineering at the Johns Hopkins University.

References:

1. 'Tiny bubbles create better cooling,' 31 August 2003, NASA Space Research, http://spaceresearch.nasa.gov/general_info/tinybubbles.html.
2. P. Sneidermann, 15 November 1999, 'Taming Bubbles in Zero Gravity,' The Johns Hopkins Gazette.
3. C. Herman, D. Mewes, Z. Liu, J. Radke (2004) Bubble detachment and coalescence under the influence of electric fields. Proceedings of the ASME-ZSIS International Thermal Science Seminar II, Bled, Slovenia.
4. P. Di Marco, W. Grassi, G. Memoli, T. Takamasa, A. Tomiyama, S. Hosokawa (2003) Influence of electric field on single gas-bubble growth and detachment in microgravity. International Journal of Multiphase Flow, 29: 559-578.
5. L.D. Landau and E.M. Lifshitz (1986) Electrodynamics of Continuous Media, second ed., Pergamon, New York, 68-69.
6. L. Doubliez (1991) The drainage and rupture of a non-foaming liquid film formed upon bubble impact with a free surface. International Journal of Multiphase Flow, 17: 783-803.
7. P.C. Duineveld (1994) Bouncing and coalescence phenomena of two bubbles in water. Bubble Dynamics and Interface Phenomena: 447-456.

The Future of Integrated Circuits

Laura Doyle

Introduction:

As demands for more speed in electronic devices become more and more prevalent, the devices must be modified not only at the software level but also at the hardware level to keep improving performance. For the fastest performance, integrated circuits (ICs) should be connected to the central board that holds all the components by the shortest distance possible. Currently, this is done with flip-chip bonding, in which the chip is flipped so that its face connects directly to the top of the board with tiny solder beads serving as interconnects between the two structures, providing efficient electrical connections for input to and output from the IC. Research already underway at Intel suggests that the future will see faster computers using optical interconnects (1). Signals transmitted along beams of light, using photons instead of electrons, do not deteriorate as quickly as signals carried by electrons across copper wires. This makes optical signals far superior for very high bandwidth applications. The Gigascale Interconnect Group at Georgia Tech is working on developing such optical chip-to-board interconnects.

Building the Ideal Interconnect:

Norbornene-based polymers make good optical input-output interconnects due to their innate structural properties. These polymers are highly viscous, almost clear, and liquid at room temperature, but can be heated or "baked" until the solvent evaporates away and a solid is left behind. The substance also has a low dielectric constant, which limits disruption from the signals of other nearby interconnects, or "crosstalk." This dielectric constant is between 2.5 and 3.0, comparable with paper and wax and much lower than common dielectrics such as aluminum oxide, which has a constant of 9.1 (2). A high index of refraction is essential for efficient optical signal conduction; the index here ranges from 1.55 to 1.6, about equal to that of glass (4). A low elastic modulus of about 600-700 MPa, well below that of typical rigid polymers, compensates for the thermal expansion coefficient mismatch between board and die, an important role for an interconnect (4). Additionally, the material may be photodefined to form various structures by using a mask to block exposure of specific regions. This allows for easy fabrication, since only the exposed regions solidify while the rest is washed away in the development phase. If the proper shape is chosen for the optical interconnects, they can be filled with a copper core to simultaneously transmit electrical signals and serve as a combined electrical-optical interconnect. An effective electrical-optical interconnect should have a cleanly developed interior to allow for a solid electrical connection and smooth exterior surfaces to reduce optical losses. Such development characteristics vary among different polymer formulas. For best results, polymer formulas should undergo preliminary testing to determine which formula is best for which application. Here, high aspect ratio polymer pillar-like structures are used to provide a physical optical path, or waveguide, between the chip and the board, while metal-core polymer pillar-like structures are used to provide electrical interconnection.
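To see why a high index of refraction matters for guiding light, one can compute the critical angle for total internal reflection at the pillar sidewall from Snell's law. The sketch below uses the index range quoted above for the polymer core; treating the surrounding medium as air (n = 1.0) is an assumption made purely for illustration, not a detail from this study.

```python
import math

# Critical angle for total internal reflection at a polymer pillar sidewall.
# The n_core values are the refractive index range quoted above; assuming an
# air cladding (n = 1.0) purely for illustration.
n_clad = 1.0
for n_core in (1.55, 1.60):
    theta_c = math.degrees(math.asin(n_clad / n_core))
    # Rays hitting the sidewall at incidence angles above theta_c (measured
    # from the surface normal) are totally internally reflected and guided.
    print(f"n_core = {n_core:.2f}: critical angle = {theta_c:.1f} degrees")
```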


Putting it to the Test:

Figure 1. Result showing good perpendicularity and clarity.

Figure 2. Result showing undesirable “buttress” structures.

The experiment described here tested the structural development of three polymer formulas produced by Promerus, LLC. Polymer pillar processing began with a 36 μm film of polymer spun onto a silicon wafer as the starting material. A mask was used to expose selected sections of the polymer-coated wafer to UV light. Since these formulas are negative tone, the portions of the polymer exposed to UV light were those desired to remain when the undeveloped portions were discarded. Two different doses of UV radiation were used as test conditions. Finally, the undesired polymer was removed from the areas that had not been exposed to UV by agitation in a beaker with a special chemical developer. Isopropyl alcohol was used to neutralize the developer and complete the fabrication process. What remained of each sample was a regular rectangular array of vertical interconnect pillars with various cross-sectional patterns, such as doughnuts, bull's-eyes, or concentric squares. In order to obtain photographs of the samples on a scanning electron microscope (SEM), the samples had to be cut into one-by-two-centimeter rectangles, mounted perpendicularly, and coated with gold. It was desirable that cleavage planes bisect structures with interior cavities in order to demonstrate the depth and clarity of development. To determine whether pillar-type waveguides are possible with this material, 130 μm pillars were fabricated and evaluated using SEM imaging. Well-defined, vertical pillars at this height guarantee a material's effectiveness for almost all applications. Copper-filled coaxial pillars were also fabricated. A silicon wafer was layered first with copper, then with silicon dioxide. The silicon dioxide was then etched away to leave exposed circles of copper to serve as the basis for electrical connections in the interior of the pillars. Polymer was spun onto the wafer and patterned to form hollow-core pillars centered above the copper circles. A copper core was then grown from these circles using electroplating, or electrodeposition. The pillars were bonded by hand to a copper-plated wafer using a small amount of solder to simulate flip-chip bonding.

What Makes a Good Interconnect?

Figure 3. Poorly developed result.

The criteria used to evaluate the development of each formula were side smoothness and perpendicularity to ensure good waveguide properties, top flatness for a solid connection with the other surface to be bonded, and interior clarity to establish a solid electrical connection during the copper filling process. Evaluation was based on a standardized rubric describing the characteristics associated with each numerical score in each category. The best samples showed good depth of development and perpendicularity, but sidewalls had extensive vertical ridging, which may interfere with optical signal transmission through refraction (see Fig. 1). Less successful samples had smooth sides, but interior cavities were often occupied by "buttress" filaments spanning the gap between walls (see Fig. 2). A few samples also exhibited structural warping not attributed to development or cleavage artifacts. The worst samples exhibited poor perpendicularity and interior clarity (see Fig. 3). The tall waveguide pillars had excellent perpendicularity. Sidewalls showed smooth exterior surfaces, but top flatness seemed to be reduced with increased height. The copper-core polymer pillars were successfully bonded, from visual inspection, to a copper surface using solder.

Taking the Next Step:

Polynorbornenes like the polymer formulas used here are becoming an important material in microsystems packaging owing to their physical properties. Plans are already underway for incorporating polymer interconnects for rapid optical signal transduction in chips by Intel and Fujitsu. The future may see even faster chips by adding another dimension to the electrical-optical interconnect as a channel for coolant. With such a cooling system integrated directly into the chip, processor speeds will skyrocket. While many researchers will appreciate the ability to perform calculations faster, the typical user will best appreciate this improvement for enabling the most impressive computer-generated graphics ever. But the applications of this polymer extend even further than enhancing computational performance. Research has already shown that the polymer can form reliable microchannels for conducting tiny amounts of fluids, as in liquid crystal displays. Due to the polymer's flexibility, many hope that this material can be used to create revolutionary new flexible LCD displays. An amazing convenience, this would allow computer users to carry a 22-inch monitor rolled up in their pocket. Combined with the higher processing power polymer interconnects have the potential to produce, the next decades will see the advent of the most powerful and portable computers the market has ever seen.


Key Terms:

Integrated circuit/IC/chip – small electronic circuit fabricated on a thin semiconductor substrate; such circuits use mainly semiconductor components like transistors Interconnect – a connection to conduct signals between two circuit elements Optical – using light; electronics applications most often use pulses of infrared light Polymer – a large molecule created by the joining of many identical subunits; that referred to here is a viscous liquid that can be heated and manipulated to form a solid substrate Waveguide – a structure to direct the propagation of a wave, such as an optical fiber

Acknowledgements:

Thanks to Dr. Kohl, Ate He, Dr. Bakir, and Dr. Meindl of Georgia Tech and Ed Elce from Promerus, LLC for their knowledge and guidance in this research. Thanks also to the National Science Foundation and the National Nanotechnology Infrastructure Network for the funding that made this research possible.

References:

1. Intel Corporation (2006) Optical Interconnects for Chips Possible, Technology & Research, Intel Corporation. Accessed Jan 2006. <http://www.intel.com/technology/techresearch/research/rs03044.htm>
2. Y. Bai, et al (2004) Photosensitive polynorbornene based dielectric. I. Structure-property relationships, Journal of Applied Polymer Science, 91: 3023-3030.
3. L. Singh (2004) Effect of Nanoscale Confinement on the Physical Properties of Polymer Thin Films, PhD thesis, Georgia Institute of Technology, 44.
4. M. Bakir, et al (2003) Sea of Polymer Pillars: Compliant Wafer-Level Electrical-Optical Chip I/O Interconnections, IEEE Photonics Technology Letters, 15: 1567-1569.



Nanometer-Thick Dielectric Films Deposited by Electron Cyclotron Resonance (ECR)

Matthew J. Smith†, Ling Xie‡, Erli Chen‡

† Material Science and Engineering, Johns Hopkins University; Center for Nanoscale Systems at Harvard University, 9 Oxford St, Cambridge, MA
‡ CNS, Harvard University

Abstract:

Depositing ultra-thin, high-quality dielectric films compatible with nanoscale devices is a challenge. The purposes of this study are: (i) to investigate the deposition of nanometer-thick silicon nitride films using ECR-PECVD; and (ii) to characterize these films. X-ray photoelectron spectroscopy (XPS) and spectroscopic ellipsometry were used to analyze the compositions and to determine the thicknesses and refractive indices of the silicon nitride films. The results show a strong relationship between the index of refraction and composition for films with a thickness of 50 nm. Important trends are also identified concerning the index of refraction and deposition time for ultra-thin films. A mathematical formula that uses XPS peak intensity ratios to estimate the thickness of ultra-thin films is modified for ultra-thin silicon nitride films. The preliminary results show that the thickness of these silicon nitride films can be estimated using XPS peak intensity ratios.

Introduction:

The purpose of this project is to further our understanding of silicon nitride films fabricated using ECR-PECVD. We are interested in depositing ultra-thin (less than 10 nm) silicon nitride films using ECR-PECVD because this process allows for very low deposition temperatures; ECR-PECVD has a process temperature below 150°C. In this project, we studied the effects of SiH4 flow rates on the composition of films with a thickness of 50nm; the variation of index of refraction with deposition time; and estimation of the thickness of ultra-thin silicon nitride films using XPS.

Experiment:

The composition of nitride films formed by ECR-PECVD is controlled by several factors, primarily the gas flow rates into the chamber. For silicon nitride deposition, SiH4 and N2 react to produce silicon nitride (Si3N4) and hydrogen gas. The nitrogen-to-silicon concentration ratio was measured using X-ray photoelectron spectroscopy and then analyzed; XPS analysis was conducted with a Science SSX-100 X-ray photoelectron spectrometer. To determine the percent composition of a given element, the intensity of its peak is divided by an orbital- and element-specific sensitivity factor. For a stoichiometric nitride film, the concentration ratio N/Si should be 1.33 (4 nitrogen for every 3 silicon) and the refractive index is about 2.0 at a wavelength of 630 nm, which can be measured using a spectroscopic ellipsometer (SE); a VASE32 Variable Angle Spectroscopic Ellipsometer was used for this purpose. The index of refraction was also analyzed versus deposition time for thick and ultra-thin films. The substrates used for ultra-thin deposition were prepared with an HF etch to minimize native silicon oxide. The data collected using the ellipsometer were fitted to a 2-layer model to account for the native oxide present at the interface between substrates and films. Several oxide thicknesses were tested with the model, and the model with the lowest MSE was chosen for each series of films. The ultra-thin films ranged from 3-8 nm in thickness and are thin enough that photoelectrons from the silicon substrate can escape through the film and appear as a Si-Si signal in the XPS measurements. The thinner the film, the larger the number of photoelectrons that can escape from the substrate, and thus a larger Si-Si peak appears at 98.9 eV, separate from the Si-N peak at 102.0 eV. The peak at 103.0 eV is generated by the photoelectrons in the silicon oxide layer and the peak at 98.9 eV by the silicon substrate; the intensity ratio of these two peaks depends on the oxide film thickness. Both of these peaks correspond to the silicon 2p orbital. We explored using the area ratio of these peaks to calculate the thickness of the film. This was done previously for silicon oxide thin films [3], but has not been applied to silicon nitride films.

What is a Dielectric?

Nanofabrication brings engineering to the atomic level. The ability to create such incredibly small objects opens up a variety of new opportunities to change and slowly improve the way we live. One of the many initiatives now being explored is the manufacture of a structure that holds molecules whose electrical conducting properties change in the presence of substances such as toxins. This is significant because a substance can be engineered to detect and respond to desired signals. However, the ability to deposit ultra-thin dielectric films in the range of a nanometer is vital to the creation of such structures. Silicon nitride is commonly used as a barrier material in the composition of these films. Because of its dense structure, oxygen diffuses slowly through the deposited nitride, which prevents oxidation of the underlying silicon. Nevertheless, the properties of the dielectric film can be manipulated by varying the hydrogen composition of the silicon nitride sheet. Thus, deposition of nanometer-scaled dielectric films is fundamental to the development of the field of nanofabrication. --Divya Sambandan
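As a rough illustration of the sensitivity-factor correction described in the Experiment section, the sketch below converts raw peak areas into corrected atomic percentages and an N/Si ratio. The peak areas and sensitivity factors are placeholder numbers chosen only so the arithmetic is visible; they are not measurements or calibration values from this study.

```python
# Sketch of XPS composition analysis: divide each peak area by an element-
# and orbital-specific sensitivity factor, then compare the corrected
# intensities. All peak areas and sensitivity factors below are placeholders
# for illustration, not values from this study.
peak_area = {"N 1s": 111000.0, "Si 2p": 38000.0}
sensitivity = {"N 1s": 1.80, "Si 2p": 0.82}

corrected = {name: area / sensitivity[name] for name, area in peak_area.items()}
total = sum(corrected.values())

for name, value in corrected.items():
    print(f"{name}: {100 * value / total:.1f} at.%")

n_si = corrected["N 1s"] / corrected["Si 2p"]
print(f"N/Si = {n_si:.2f} (1.33 expected for stoichiometric Si3N4)")
```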

Results and Discussion:

The XPS and SE measurements of the films deposited at different flow rates indicated that the films deposited at higher silane flow rates (above 55-60 sccm) are silicon rich and have an index above 2.0, while those deposited at lower flow rates are indeed nitrogen rich and have an index below 2.0. Our data agree with the accepted index-composition correlation (Figure 1); a film with a N/Si ratio of 1.33 should have an index of 2.0. When evaluating the change in index of refraction with deposition time, we found that the thicker films had a constant index of refraction when the deposition time was varied from 400 to 600 seconds. The index of refraction of ultra-thin films, however, did not remain constant with deposition time as the time was varied from 10 to 60 seconds (Figure 2). This implies that a unique combination of deposition time and flow rate is required to obtain a particular thickness and index of refraction when depositing ultra-thin Si3N4 films with ECR-PECVD. When using XPS data to estimate ultra-thin film thickness, the inelastic mean free path and the silicon concentration ratio of the film to the silicon substrate per unit volume had to be approximated. We found that our best approximations for these constants did not return reasonable values for the estimated film thickness. A scaling constant was therefore introduced into the equation, and with it the technique returned thicknesses that paralleled those obtained from our best spectroscopic ellipsometry models (Figure 3). The scaling constants seemed dependent on the silane flow rate and varied from 0.1 to 0.2. The significance of these values has yet to be determined.
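The article does not spell out the thickness equation itself, so the sketch below uses a commonly cited XPS overlayer relation, d = λ·sin(θ)·ln(1 + R/R0), where R is the measured film-to-substrate peak area ratio, together with an overall constant R0 standing in for the concentration term and empirical scaling factor described above. Every numerical value here is an assumption for illustration; this is not the authors' exact formula or data.

```python
import math

# Sketch of estimating ultra-thin nitride thickness from the ratio of the
# Si-N (film, ~102.0 eV) to Si-Si (substrate, ~98.9 eV) 2p peak areas using a
# standard overlayer relation. lam, theta, and R0 are assumed values for
# illustration; R0 stands in for the concentration-ratio and empirical
# scaling constants discussed in the text. The peak areas are placeholders.
lam = 2.8       # inelastic mean free path of Si 2p photoelectrons in the film, nm (assumed)
theta = 90.0    # photoelectron take-off angle, degrees (assumed)
R0 = 0.15       # reference film/substrate intensity ratio (assumed)

def film_thickness_nm(area_film, area_substrate):
    """Estimate film thickness in nm from Si-N and Si-Si peak areas."""
    R = area_film / area_substrate
    return lam * math.sin(math.radians(theta)) * math.log(1.0 + R / R0)

print(f"{film_thickness_nm(31000.0, 42000.0):.1f} nm")  # placeholder areas, ~5.0 nm
```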

Future Work:

In the future, we hope to refine the process of using XPS to measure silicon nitride film thicknesses of less than 10 nm. An alternate method of determining film thickness is necessary, so that we have more accurate data against which to model our formulae.

Acknowledgements:

I would like to thank Dr. Ling Xie, Dr. Erli Chen, David Lange, the CNS Staff at Harvard University, the National Nanotechnology Infrastructure Network and the NSF.

References:

1. Xie L., Deng J., Shepard S., Tsakirgis J., Chen E. (2005) Low-Temperature Deposition of High-Quality, Nanometer-Thick Silicon Nitride Film in Electron Cyclotron Resonance (ECR) Plasma-Enhanced CVD System. Presented at Nanotech 2005, Anaheim, CA, May 2005.
2. Flewitt A.J., Dyson A.P., Robertson J., Milne W.I. (2001) Low temperature growth of silicon nitride by electron cyclotron resonance plasma enhanced chemical vapour deposition. Thin Solid Films 383: 172-177.
3. Xie L., Zhao Y., White M.H. (2004) Interfacial oxide determination and chemical/electrical structures of HfO2/SiOx/Si gate dielectrics. Solid-State Electronics 48: 2071-2077.

Figure 3. After introducing a scaling constant, the XPS data can be used to predict thicknesses that parallel the thicknesses measured using ellipsometry.



Outsourcing American Jobs? Addressing Economic Globalization

Amanda Leese

“The United States, for a very long time, thought it could get by by being pretty good at everything it does. An athlete with great talent who doesn’t train is ultimately going to be caught by somebody who has less talent but better training. We don’t get to make the globalization choice. We do get to make the choice about whether or not we change.”

--Catherine Mann, Institute for International Economics, Time, November 2004, A20

Introduction:

As the phenomenon of outsourcing continues to shape the domestic economy, and politicians and economists scramble to present a definitive economic program, Americans of all walks of life are left to live the reality created by economic globalization. The solution to the "shifting of jobs into lower wage countries" may be one that balances the long-term benefits to both corporation and country against the short-term costs to labor, the kind of solution sought by economists and consultants around the globe. Polarized politics and a plethora of conflicting interest groups offer arguments that support or oppose further globalization. The depth of this debate makes it difficult to determine a fact-based consensus on what, exactly, is best for the American people. There are two truths that must be explored when evaluating the merits and demerits of outsourcing: first, free trade has an indisputably positive effect on the economy both at home and abroad; and, second, due to the globalization of domestic industries by virtue of international trade, outsourcing is inevitable. The quandary, therefore, is not over whether to accept or reject this economic movement, but rather over the nature of its implementation. The problem with outsourcing today is that the American political framework lacks the social programs to curb the sting of short-term job loss; therefore, the country becomes consumed with what is immediate and cannot appreciate the potential long-term benefits.

Free Trade as an Asset:

International trade, the motor of globalization, encourages the exploitation of increasing returns to scale, a production situation where output grows proportionately more than the increase in inputs or factors of production. This situation occurs because international trade allows for specialization. To fully exploit these increasing returns to scale, international trade has, in recent years, increased dramatically, and the purchase of cheaper production abroad has become a business norm, which introduces the economic tool of outsourcing. This new stage of global economics is referred to as international economies of scale. The two main ways in which the trade of these economies of scale affects social welfare are modeled by the following equation:1

(p - MC_X) ΔX = (p - AC_X) ΔX - X (ΔAC_X / ΔX) ΔX

Social welfare is captured by the left-hand side of the equation: the difference between p (price) and MC_X (the marginal cost of the commodity X), multiplied by the change in the quantity of X produced, represents, more or less, the profit experienced by the producer, or the gains of a scale economy. As the equation demonstrates, this social welfare, the gains from trade accrued to society, is determined by two factors: (p - AC_X) ΔX, the profit effect, and -X (ΔAC_X / ΔX) ΔX, the decreasing-average-cost effect. The first, the profit effect, results from the "lowering of the mark-up on a firm's output,"2 though the output is held constant. As noted by Markusen, "if price exceeds average cost then an increase in output generates a surplus of price over average cost on the additional output. The gains from specialization may be very unevenly distributed."3 In particular, these gains, while universally acknowledged, are not shared by those who sacrificed for the globalization of the market. The second term in the social welfare equation, -X (ΔAC_X / ΔX) ΔX, is known as the decreasing-average-cost effect. Since the equation assumes increasing returns to scale, the change in average cost with respect to output (ΔAC_X / ΔX) is negative; therefore, as more of the commodity is produced, X increases and the initial output requires fewer resources. On the whole, the equation demonstrates that a scale economy, where trade is free and corporations may take advantage of cheaper labor abroad, allows for an increase in production and yields better social welfare. Furthermore, an increase in production is natural in a scale economy, due to the expansion from a domestic market to a global one, which increases the elasticity of demand. As trade restrictions dissolve, doors are opened to wider markets, naturally increasing the demand for goods. This increased demand represents the possible gains in profit. Additional benefits of trade that coincide with increased production and lower costs and prices include increased product diversity and specialized plants and inputs, both of which improve the quality of life for the consumer. These increased varieties and lower costs of products reflect another trade situation inherent within economies of scale: intra-industry trade.
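A small numerical example may make the two effects in the social welfare equation above easier to see. All of the prices, costs, and quantities below are invented purely for illustration (they are not data from this article or any source); average cost and output are evaluated at the midpoint of the change so that the discrete numbers satisfy the identity exactly.

```python
# Toy illustration of the welfare decomposition above. All numbers are
# invented for illustration only; they are not data from the article.
p = 10.0                       # price of commodity X
X0, X1 = 100.0, 120.0          # output before and after the market expands
AC0, AC1 = 9.0, 8.5            # average cost falls with output (increasing returns)

dX = X1 - X0
dAC = AC1 - AC0                # negative under increasing returns to scale
X_mid, AC_mid = (X0 + X1) / 2, (AC0 + AC1) / 2   # midpoint values

profit_effect = (p - AC_mid) * dX          # price over average cost on the added output
cost_effect = -X_mid * (dAC / dX) * dX     # cheaper production of the existing output

MC = (X1 * AC1 - X0 * AC0) / dX            # marginal cost implied by total costs
lhs = (p - MC) * dX                        # left-hand side of the equation above

print(f"profit effect = {profit_effect:.1f}, cost effect = {cost_effect:.1f}")
print(f"sum = {profit_effect + cost_effect:.1f}, (p - MC) * dX = {lhs:.1f}")  # both 80.0
```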


Although the hot debate of the past several years has centered on American IT firms cutting American jobs and outsourcing skilled jobs to India, the door swings both ways. "South Carolina's huge BMW plant represents the 'off-shoring' of German jobs because that country's labor laws, among other things, give America a comparative advantage… Similarly, Mercedes are made in Alabama, Hondas in Ohio, and Toyotas in California."4 Intra-industry trading has allowed industries, such as the automotive industry, to produce both larger quantities and greater variety.

The Inevitability of Outsourcing:

Outsourcing is a means through which a country may capitalize on trade relations and its own natural comparative advantage by purchasing one of the factors of production (in this case, labor) at a cheaper price. Though most interpret outsourcing to refer to internationalization, inter-industry outsourcing is a common practice as well. This type of national outsourcing is also responsible for job loss within an industry, though domestic jobs are being created. National outsourcing is, therefore, also responsible for the negative sentiment felt by the average worker toward the philosophy of outsourcing, but it is not as big a factor in aggregate job loss. The market for international outsourcing has grown exponentially over the past few years and is expected to expand even further. As of the first quarter of 2004, US firms were spending $100 billion on outsourcing in order to cut their costs 10-15%, which allows them to remain competitive.5 The provided graph (Figure 1) explores the potential for outsourcing and the subsequent off-shoring of wages. With the benefits of international trade and the inevitability of outsourcing established, the success of the system as a whole comes down to determining whether the gains outweigh the costs. This balance is a difficult one to strike because actual statistics on jobs lost specifically to outsourcing are new and subject to the biased perspectives of both former employer and claimant. In fact, regardless of the claims filed by either the pro- or anti-outsourcing camps, the information collected on a national scale and presented by the Bureau of Labor Statistics of the US Department of Labor "was added to the Mass Layoff Statistics program in January 2004."7 Reliable aggregate data accessible to the public, therefore, reaches back only through the fourth quarter of 2003. Opponents of outsourcing argue that trade restrictions would benefit the domestic economy. This argument falls short according to a number of criteria. First of all, outsourcing isn't the only reason for increased unemployment. Economic recession is largely responsible for the shortage in American jobs. In Measuring Temporary Labor Outsourcing in U.S. Manufacturing, the authors write that the growing use of temp workers by manufacturing -- about 890,000 now of a total above 19 million -- partially explains the flatness in the level of manufacturing employment in the 1990s (as recorded by the Bureau of Labor Statistics) despite the substantial increases in output of American manufacturers. Manufacturers added only 550,000 jobs between 1992 and 1997, even though the economy was thriving. In the last 10 years or so, employment in the U.S. "temporary help supply" industry has more than tripled. Looking at a longer period, the number of temps has risen at an annual rate of 11 percent a year.11 Secondly, there are a number of natural inhibitors to the exploitation of outsourcing; an "over-globalization" of an industry is as unattractive to the employer as it is to the employee.

Industry leaders will naturally limit their overseas outsourcing, namely because supplier proliferation would drive up the prices of that factor of production. In the long run, the international market would become saturated enough that diminishing returns to scale (where inputs yield an ever-decreasing proportional output) would halt further outsourcing. Essentially, international labor demand would drive up the wage until these overseas options were less appealing, or no longer appealing at all, to the industry (a simple illustrative sketch of this effect follows this paragraph). Granted, this effect refers to the long run. In the short run, it behooves industries to limit their own outsourcing as a means of maintaining their market base and limiting market differentiation. To the same extent that globalization provides the consumer with more options, the available market for a specific commodity within an industry diminishes. Also, where outsourcing would leave new technology prey to competing firms, or undermine the competency of the work-base, production is best kept within the industry. Lastly, an open industry-employee relationship often generates greater worker productivity. In this regard, the industry does consider the average worker. Gail Leese, Director of New Business Initiatives and Worldwide Parts Services for John Deere, touched on these issues in an informal phone conversation on October 30, 2004, during which she pointed out the dangers of outsourcing regulations that work beyond those natural limitations: "When we try to superimpose political controls to overcome economic drivers in the short run, we invariably lose."12 Leese addressed hypothetical trade restrictions in the scenario of audio/video production. She pointed to the social concern over domestic job loss that emerged within that industry 20 years ago. Restrictions were avoided and, now, the industry has seen not only the quality of video and audio technology grow at an almost unprecedented rate, but also the replacement of old jobs by new domestic employment as the industry expanded to incorporate CDs, DVDs, MP3s and other technology. Can the experience of this industry not apply to today's concerns about the globalization of IT?
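The long-run, self-limiting wage effect described above can be illustrated with a deliberately simple toy model. Everything in the sketch below is an assumption made for illustration (the linear offshore wage response, the coordination cost, and all of the numbers); it is not an estimate of any real labor market. It merely shows how, as more work is sent abroad, the offshore wage is bid up until the cost advantage disappears and further outsourcing stops paying.

```python
# Toy model of the self-limiting wage effect described above.
# Assumptions (purely illustrative): the offshore wage rises linearly as more
# work is outsourced, and each offshored unit of work carries a fixed
# coordination cost. Outsourcing remains attractive only while the offshore
# wage plus coordination cost stays below the domestic wage.

DOMESTIC_WAGE = 30.0        # assumed domestic cost per unit of work
OFFSHORE_BASE_WAGE = 10.0   # assumed offshore wage before any outsourcing
WAGE_SENSITIVITY = 0.02     # assumed rise in offshore wage per unit outsourced
COORDINATION_COST = 5.0     # assumed overhead per offshored unit of work


def offshore_cost(units_outsourced: float) -> float:
    """Per-unit cost of offshoring once demand has pushed the offshore wage up."""
    wage = OFFSHORE_BASE_WAGE + WAGE_SENSITIVITY * units_outsourced
    return wage + COORDINATION_COST


def equilibrium_outsourcing() -> float:
    """Units outsourced at which the offshore cost catches up with the domestic wage."""
    return (DOMESTIC_WAGE - OFFSHORE_BASE_WAGE - COORDINATION_COST) / WAGE_SENSITIVITY


if __name__ == "__main__":
    q_star = equilibrium_outsourcing()
    print(f"Outsourcing stops paying after about {q_star:.0f} units of work,")
    print(f"when the offshore cost reaches {offshore_cost(q_star):.2f} per unit,")
    print(f"the same as the domestic wage of {DOMESTIC_WAGE:.2f}.")
```

Under these assumed numbers, the cost advantage is exhausted after roughly 750 units of offshored work; past that point the industry has no purely economic incentive to outsource further, which is the sense in which the market itself imposes the limit.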

Figure 1.6



The Social Response and Politics of Outsourcing:

What, then, is to be done about short-term job loss? After considering the gains from free trade and the inevitability of outsourcing, it becomes clear that short-term problems arise when jobs are not created to replace those that are lost. Ideally, there would be a balance between the political support for free-trade business expansion by way of outsourcing and the social support for those suffering short-term job loss. The current administration has a strong economic record in supporting business, international trade and outsourcing; the social support, however, lags. The chairman of Bush's Council of Economic Advisors, Gregory Mankiw, argues that "The movement of American factory jobs and white-collar work to other countries is part of a positive transformation that will enrich the U.S. economy over time, even if it causes short-term pain and dislocation."13 Mankiw, as of July 15th, continues to profess that "Free trade is good for America. The truth is when a good or service is produced at lower cost in another country, it makes sense to import it rather than to produce it domestically… outsourcing is probably a plus for the economy in the long run."14 Though respected as brilliant economic theory, this vote for outsourcing still does not address the fact that, by the end of 2005, 2.2 million jobs had been lost over the duration of this administration, and that job-creation expectations are continually disappointed.15 A February 10th article in the Seattle Times, LA Times, and Chicago Tribune presents the sobering imbalance between social support and support for outsourcing:

• Last year's Economic Report of the President predicted that 1.7 million jobs would be created in 2003. Instead, the nation lost 53,000 jobs.
• Over the past three years, 2.2 million jobs have disappeared.
• Since the Great Depression, it has never taken this long for the economy to begin creating jobs after emerging from a recession. After the last recession ended in 1991, it took 14 months for employment to begin expanding. Current problems with the economy have gone on nearly twice as long, 26 months.
• January 2004 saw the creation of a mere 112,000 jobs, leading most economists to believe the 2004 expectations to be aggressive. Economists had expected 150,000 new jobs in Friday's Labor Department report for January. Most have said the economy should be creating 200,000 to 300,000 jobs a month to sustain the recovery.16

Figure 2: Total Mass Layoffs, All Industries.8
Figure 3: Non-Farm Business Productivity.9
Figure 4: Manufacturing Productivity.10


The stated facts have drawn sharp criticism from many groups, political or otherwise. Some, such as labor unions, interest groups like "Rescue American Jobs," a large community of IT specialists, and many prominent Democrats, have spoken openly and vehemently against the policy of outsourcing. "The Administration has chosen to stand against American jobs and American workers. We will continue to fight to keep American jobs right here at home," declared Senate Democratic Leader Tom Daschle in a February press conference presenting the Jobs for America Act.17 Democratic presidential candidate John Kerry also hinted at the prospect of stricter regulations to protect American jobs during his bid for the presidency. Most economists, however, recognizing the benefits of free trade and the necessity of limited outsourcing, have made other suggestions. A November 2004 Time panel interview drew suggestions from several reputable economists on addressing the short-term job effects of globalization. Catherine Mann of the Institute for International Economics asserts, "We ought to have some kind of human-capital-investment tax credit, which would subsidize, for example, cross-training an IT person in another sector, like health care."


Mann also supports the introduction of an income tax credit. Matthew Slaughter of the Tuck School of Business at Dartmouth College warns against the dangers of false job creation and inadequate employment; he also advocates further training and tax relief. "The median person in the U.S. labor force today has a high school diploma and about one year of post-high school education. That person is going to have a job, but how productive and how highly compensated is that job going to be? Maybe we could have tax cuts for less skilled Americans."18 There have been efforts at programs to address the urgency of unemployment, though they have been underfunded and inadequately executed. The Trade Adjustment Assistance Reform Act of 2002 aimed to address the problem more directly by expanding eligibility to more worker groups, increasing the existing benefits available, providing tax credits for health insurance coverage assistance and extending timelines for benefit receipt, training and rapid response. The act also legislated specific waiver provisions and established other TAA programs.19 Adjustment acts generally fund the benefits, training or tax breaks offered to those unemployed due to trade policies. The concept emerged in the sixties "to compensate workers for tariff cuts under the Kennedy Round of multilateral negotiations."20 The benefits of this original legislation were extremely limited and resulted in a number of subsequent revisions, such as the Trade Act of 1974. Due partially to the shortfall of these programs and largely to the effort to solidify the North American Free Trade Agreement (NAFTA), Congress created, in 1993, the NAFTA Transitional Adjustment Assistance to support dislocated worker programs. Following the recession of 2001, Congress once again called for a revamping of the system. The Trade Adjustment Assistance Reform Act of 2002 expanded eligibility to include farmers, provided health care assistance and extended the duration of cash assistance from 52 to 78 weeks.21 Economists tend to be optimistic about the ability of these programs to address unemployment due directly to outsourcing. They recognize that the programs, if not a cure-all, are a respectable starting point from which policy makers may observe the reactions and further address remaining problems.22

The Bottom Line:

The United States faces the inevitability of competing in a global market. Outsourcing is a double-edged sword—both an irresistible tool with which American firms remain competitive and a partial cause of current unemployment. The answer to the predicament of the short-term costs of economic globalization is neither trade restrictions nor other inhibitive legislation. Rather, the government should look to balance a liberal approach to trade with an equally strong social support network through legislation much like, but stronger than, the Trade Adjustment Assistance Reform Act of 2002. Perhaps the best economic adjustment is quite simply to allow nature to run its course. Industries, operating out of self-interest, should aim to keep outsourcing from producing supplier proliferation and market differentiation, and from compromising competency and innovation in production. The fear felt by many is that this globalization is an indication that the United States is no longer the economic powerhouse of "competitive" force that it once was.

Paul Krugman deflates this fear by addressing the absurdity of the concept of aggressive economic competition in Competitiveness: A Dangerous Obsession. He confirms what the above argument suggests: globalization, and the success of other nations' economies, does not come at the expense of American welfare.23 Overall, the argument for liberalizing trade restrictions and economic barriers is an easy one. The real resolution to outsourcing and unemployment is, admittedly, more complicated. Yet the fact remains that the global market does not operate on a zero-sum scale. Reconciling American interests with the demands of the global market is a very real possibility.

References:

1. Markusen, James, James Melvin, William Kaempfer and Keith Maskus. International Trade: Theory and Evidence. McGraw-Hill, Inc. Boston, MA. 1995. Page 182.
2. Ibid.
3. Ibid. Page 182.
4. "Benefits of Disappearing Jobs." The National Center for Policy Analysis online. February 20, 2004.
5. Salvatore, Dominick. International Economics, eighth edition. John Wiley & Sons, Inc. Hoboken, New Jersey. 2004. Page 170.
6. "Think Globally, Act Locally." Time: Board of Economists. November 2004.
7. US Department of Labor, Bureau of Labor Statistics. www.bls.gov
8. Ibid.
9. Non-Farm Business Productivity.
10. Ibid.
11. Francis, David R. NBER Working Paper No. 7421. National Bureau of Economic Research. http://papers.nber.org/papers/w7421
12. Interview. Gail Leese. October 31, 2004.
13. "Bush Report: Sending Jobs Overseas Helps US." Los Angeles Times and Chicago Tribune. February 10, 2004.
14. "US Economy Benefits from Outsourcing." The Outsourcing Times (online). April 2004.
15. "Bush Report: Sending Jobs Overseas Helps US." Los Angeles Times and Chicago Tribune. February 10, 2004.
16. Ibid.
17. United States Senate Democrats. http://www.democrats.senate.gov/recent2.html
18. "Think Globally, Act Locally." Time: Board of Economists. November 2004.
19. US Department of Labor, Employment and Training Administration. http://www.doleta.gov/tradeact/2002act_index.cfm
20. Baicker, Katherine and M. Marit Rehavi. "Policy Watch: Trade Adjustment Assistance." The Journal of Economic Perspectives, Volume 18, Number 2. Spring 2004. Page 240.
21. Ibid. Page 245.
22. Ibid. Page 253.
23. Krugman, Paul. Pop Internationalism. The MIT Press. Cambridge, Massachusetts. 1996.



The Ethical Imperative of Retelling The Sinking of the Wilhelm Gustloff and Generational Silence Sarah Adee

Introduction:

Literature and psychoanalytic theory are connected in the United States by an existing and highly developed discipline of trauma theory. Trauma theory has not been widely applied to readings of contemporary German literature. Contemporary German discourse at once cannot address and cannot avoid the continuing effects of the Second World War, especially insofar as its postwar literature is concerned. This article seeks to project existing trauma theory onto the plane of German post-postwar literature—that literature native to the third generation after WWII—through a critical analysis of Günter Grass' Im Krebsgang (Crabwalk). Trauma theory can be useful in illuminating some of the ways in which postwar literature failed German culture. Connecting it with trauma theory in such a way is a project in literary criticism, not in history or cultural studies. Grass' book can be argued to be an indictment of German postwar literature. One specific example of this indictment is the book's treatment of the torpedoing of the civilian ship Wilhelm Gustloff as it carried 9000 civilians away from the invading Soviet army to the safety of German land. Arguably the greatest naval disaster in history, it surpasses the Titanic and the Lusitania, having claimed more than twice as many lives as both combined. Few people have ever heard of it. Crabwalk deals with the stories around the sinking of the Wilhelm Gustloff through the story of Ursula Pokriefke (Tulla), one of its few survivors. The narration is done not by her, but by her son, Paul. He is an indirect survivor of the disaster: he was born immediately after the ship sank, after his nine-months-pregnant mother was rescued. The stories in the novella include Paul's life as a '68er (I will refer to the second postwar generational group henceforth as '68ers, a popular German term roughly on par, age-wise, with America's baby boomers) in West Germany, Tulla's back in the East, their relationship with Paul's son Konrad, and the story of the events which led to the torpedoing of the ill-fated ship. These are not told linearly but are instead strung together like beadwork to create a whole narrative which calls forth—entirely between the lines—the gradual, corrosive effect of certain taboos in German postwar society. Because of these intertwining generational narratives, the novella's structure is threefold. It addresses the sinking of the Gustloff and the story of Tulla's repeated attempts to tell the story of her life: at first to her unwilling son and then to her all-too-willing grandson. Trauma theory can connect the sinking of the Gustloff to the questions of narration, generation and retelling, leaving open the possibility of the inheritance of trauma.

Trauma theory essentially deals with a process of narration: it is applicable to the specific traumatic events which motivate Paul's story. According to Cathy Caruth, trauma is "a crisis ... that demands our witness." What does it mean to theorize around a crisis? "Such a question can never be asked in a straightforward way, but must ... be spoken in a language that is always somehow literary: a language that defies, even as it claims, our understanding." Caruth's description suggests a language which is not straightforward; an approach that is closer to a crabwalk. It is difficult to address in one book the intertwined ideas of täterschuld (perpetrator guilt), German literature's abdication of responsibility, the rise of neo-Naziism in the third generation of Germans, and the sinking of the Gustloff. Grass' choice of title—Im Krebsgang [Crabwalk]—refers directly to the approach Grass took to accomplish this intertwining; a certain crabwalk which is more akin to psychoanalytic approaches than traditional literary approaches. Trauma theory in particular is a specific method of approaching these issues with this same species of necessary indirectness. "But I'm still not sure how to go about this: should I do as I was taught and unpack one life at a time, in order, or do I have to sneak up on time in a crabwalk, seeming to go backward but actually scuttling sideways, and thereby working my way forward fairly rapidly?" (Grass, Im Krebsgang; English translation by Krishna Winston, 2003) Caruth and Freud have established that a victim of a traumatic event cannot himself have full access to the reality of the traumatic experience. Freud and Caruth have also often explicitly spoken of trauma using the language of illness and viral infection (or transference). A traumatic response may not be limited to those who have a direct experiential connection with the event itself. The vector of transference of the trauma can be located in the act of re-telling. Freud spoke of traumatic memories and their repercussions—and their retelling—in medical terms which call to mind infection. "This is its danger-- the danger, as some have put it, of the trauma's "contagion," of the traumatization of the ones who listen. ... But it is also its only possibility of transmission." (Caruth, 10)


The Traumatic Event:

"That sea there full of ice, and them little ones all floating there head down." (Grass, 31) Cathy Caruth describes the traumatic event as a "breach in the mind's experience of time, self and other." (Caruth, 4) Seven thousand civilians were killed in the Gustloff disaster, and a great number of them were children. Many factors conspired to keep this event out of the official history of the Second World War. Among these was the implicit consensus that there was no such thing as a German victim of the war. Initially the Nazi regime kept the event quiet in order to avoid damaging German morale at the bitter end of the war. The Soviet government had its own public relations interest in not publicizing the murder of women and children. Eventually the story was muted as a result of the täterschuld taboo against speaking of Germans as victims. …[T]he wound of the mind—the breach in the mind's experience of time, self and the world—is not, like the wound of the body, a simple and healable event, but rather an event that … is experienced too soon, too unexpectedly, to be fully known. (Caruth, 4) Tulla's traumatic experience during the sinking of the Gustloff creates a breach—a permanent condition of separation from time; she is frozen perpetually inside that moment. Seventeen-year-old Tulla saw the bodies of the children floating in the icy January waters, legs sticking up into the air, because despite their life vests, the shock of the icy waters took away their ability to right themselves after the fall. Her formerly reddish hair became immediately and permanently white. This demonstrable breach with time, self and the world accommodates a Freudian reading of the symptoms and eventual result of trauma: the traumatic encounter with death; the inaccessibility of the actual experience, leading to an ingrown wound in the mind; the subsequent compulsive repetition of the event; and finally, as these symptoms of the disease become unbearable, the resultant self-destructive outcome. However, trauma is not locatable in the simple violent or original event in an individual's past, but rather in the way that its very unassimilated nature – the way it was precisely not known in the first instance—returns to haunt the survivor later on. (Caruth, 4) This accounts for the compulsive repetition in Tulla's retelling of the story of the ship. Having survived the disaster moments before giving birth to Paul, for the rest of her life Tulla implores him to publicize and pass on her experience of the disaster by re-telling her story. "You've got to write it down," she implores her son, the journalist. "That much you owe us, seeing as how you were one of the lucky ones and survived. Someday I'll tell you the whole story, exactly what happened, and you'll write it all down." (Grass, 28) She tells him constantly that she will tell him about it one day, and that then, he will write it down. Seen as a traumatic event, Tulla's experience of the Gustloff is "a crisis ... that demands our witness." (Caruth 5) "It is this plea by an other who is asking to be seen and heard ... by which the other commands us to awaken ... that constitutes the new mode of reading and listening that both the language of trauma, and the silence of its mute repetition of suffering, ... demand." (Caruth, 9)

Incubation, Taboo, and Silence:

Freud speaks of an “incubation period” for the period following a traumatic experience, a time of latency during which no symptoms are evident. Silence figures heavily in the trauma enacted in Grass’ work. Silence is closely related to trauma’s period of incubation, a relationship which becomes evident in the narrative. In this way incubation is implied by the complementary relationship of silence to the act of re-telling. Like silence, repetition is a major ingredient of the traumatic response. In this book, the incubation period is really a problem of the taboo, because the taboo is the force that prevents the listening of another: “[T]he inherent departure, within trauma, from the moment of its first occurrence, is also a means of passing out of the isolation imposed by the event: that the history of a trauma ... can only take place through the listening of another.” (Caruth 10-11) A family friend “serves up Mother’s admonitions for dessert: ‘My girlfriend Tulla still places great hopes in you. She wants me to let you know that it’s your filial duty to tell the whole world...’” (Grass, 29) “But I wasn’t willing,” Paul tells us. “No one wanted to hear the story, not here in the West, and certainly not in the East.” Throughout his life, Paul exhorts his mother, both implicitly and explicitly, to forget about her experience. Or, failing that, just to be silent about it. But “she still rambles on this way, as if buckets of time hadn’t flowed over the dam since then.” (Grass, 6) Tulla’s insistence that Paul retell her story forces Paul into a corner. Any decision he makes carries consequences. She wants him to tell her story, not the story. On the other side is the oppressive force of the whole of German culture, which demands that the story is told in a very specific and correct way. Paul’s resistance to telling the story at all is understandable. His refusal can be seen as an unwillingness to parrot a narrative which he is uncomfortable. Tulla’s lifelong allegiance to Stalin, as well as her open reverence for aspects of Nazi ideology, make Paul’s choice to ignore her almost a sympathetic one. But in the end, the problem is not in his refusal to parrot her story to the world, undigested. The problem is that he does not digest the story, even for himself. Grass’ book is among other things an account of a bystander showing us his refusal to listen to someone pleading to be seen and heard. The ethical responsibility to listen is also a risk of being infected by the traumatic memory. He does not necessarily fail his mother with his refusal to tell her story, but he absolutely fails his son. In being “right” he is also paradoxically, immediately wrong: telling his mother’s version of events undigested would have been a mistake. His failure to digest the events and develop his own narrative is exactly Grass’ allegorical critique of the ‘68er generation: Their total refusal to give their parents’ generation any voice (because of their abhorrence for the complex histories which lurked just under the surface of any narrative) allowed the narrative to continue undigested to an unwary third generation. Tulla’s constant, obsessive repetition of her story and the KDF ship is indicative of her casting about for someone to witness her story. The repetition is itself an instance of retelling, but it is fruitless. As Caruth says, “the force of [the] experience would appear to arise, precisely ... 
in the collapse of its understanding.” (Caruth, 7) In that respect, a witness is needed even to allow the victim of the trau-



matic event to understand her own trauma. True narration requires a listener, a witness. Tulla eventually finds her listener in Paul’s son, her morally undirected grandson Konrad. Paul’s refusal to listen, or to hear, or to re-tell, seems to represent the trauma’s incubation period. Konny, once “chosen,” in Freud’s language, to be the vessel, is infected, and soon begins to manifest the symptoms. According to Freud, a long period of latency characterizes the period between the initial event and the onset of symptoms. The transference takes place, and Konrad inherits the traumatic memory undigested. “This is its danger-- the danger, as some have put it, of the trauma’s “contagion,” of the traumatization of the ones who listen. ... But it is also its only possibility of transmission.” (Caruth, 10) Konrad’s infection by his grandmother’s traumatic memory of the Gustloff is immediate. He begins to obsessively research the history behind the event and stumbles upon accounts of the minor Nazi dignitary for whom the ship was named. He becomes obsessed with the literal Wilhelm Gustloff; not the ship, but the man. Konny agitates, on his web site, for the reinstatement of a Gustloff memorial, as well as the creation of a Gustloff museum in Gustloff ’s old house. The idea is preposterous since the man was famous only for being shot. Had David Frankfurter not murdered him, Gustloff would have faded into the soup of history without a bang or a whimper. His existence would have been forgotten. For this reason, to create “a museum to honor his life” (Grass, 170) is absurd; there is nothing to commemorate. The blank-slate-ness of Konny, however, his inability to remove himself to a critical distance from his grandmother’s narrative, is underscored by his medium of choice for broadcasting his grandmother’s story out into the ether: the computer Tulla buys for him is as constitutionally unable to interpret the information it is given as Konny himself. The story goes through Konny, undigested, unexamined, a verbatim transcription of Tulla’s experience. Konny becomes Gustloff. He is literally transformed into Wilhelm Gustloff in online discussions. He takes on the name as his online avatar. He argues from the perspective of Gustloff, as Gustloff, and when he meets Wolfgang, who has become just as obsessed with the history of the assassin David Frankfurter, the two engage in a kind of ritualistic dance in which neither are able to speak in their own words or their own ideas, but only engage in obsessive recreations of the boiling hatred which led to Gustloff ’s assassination. Freud has remarked on the sheer surprising literality of the repetition of trauma: “Modern analysts have remarked on the surprising literality and nonsymbolic nature of traumatic dreams and flashbacks, which resist cure to the extent that they remain, precisely, literal. … It is this literality and insistent return which thus constitute trauma and points toward its enigmatic core: the delay or incompletion in knowing, or even seeing, an overwhelming occurrence “ (Caruth 5) Konny becomes Gustloff and Wolfgang becomes Frankfurter; this is not a reinterpretation but a literal reenactment; almost a haunting or a possession. For his part, Wolfgang had taken on his role much earlier in his life: “Relatively early, at the age of fourteen, her son adopted the name David and became so obsessed with thoughts of atonement for the wartime atrocities and mass killings, which, God knows, were


constantly harped upon in our society, that eventually everything Jewish became somehow sacred to him. Last year for Christmas he asked for a menorah, of all things. That, at any rate, was her explanation as to why Wolfgang had represented himself ... as a person of Mosaic faith.” (Grass, 186) Wolfgang as David Frankfurter is the photo-negative image of Konny-as-Gustloff, his undigested German guilt—his täterschuld— having prompted his anguished identification with Jewishness. The causal nexus of memory and forgetting, knowledge and inability to know, which afflicts the victims of a traumatic experience, leaves open the possibility of the transmission of the traumatic memory, in changed form, into its new host. If the victim himself cannot fully know a trauma, then what is to prevent the recipient of the narrative from being infected with the trauma, as Freud says he was, by the dream of the burning child. Paul’s wholesale abnegation of his “filial duty” (32), whether or not that is an objectively valid expectation of a son, is the reason that the task is passed to his son Konrad. It was not only Paul who actively refused to listen; it was all of history. Both her son and the world at large have a stake in actively not witnessing Tulla’s story, effectively denying her experience: “…for years and years ‘you couldn’t bring up the Yustloff. Over here in the East we sure as hell couldn’t. And when you in the West talked about the past, it was always about other bad stuff, like Auschwitz and such.’ ” (Grass, 50) Any attempts on Konny’s part to tell this story outside the accepted parameters for German discourse are also silenced by authority figures. At the trial, it is revealed that Konny wrote a school report on what he learned from his grandmother, titled “The Positive Aspects of the Nazi Organization Strength Through Joy.” The report was summarily banned. “Word had to have reached Gabi that her son had been forbidden to give a report on a controversial topic because the subject matter was deemed ‘inappropriate.’”(Grass, 197) “Unanimously-- and in this respect in pan-German agreement- the two educators said that the banned reports had been severely infected with National Socialist thinking, which, to be sure, had been expressed with intelligent subtlety .... For reasons of educational responsibility it had been necessary to prevent the spread of such dangerous nonsense.” (Grass, 202) Konrad’s eventual deliberate and ritualized murder of Wolfgang is a literal recreation of the murder of Wilhelm Gustloff. The two boys are equally complicit in carrying out the details of the ritual. Freud’s “overwhelming occurrence” “remains, in its insistent return, absolutely true to the event.” (Caruth, 5) Konny is acting out the literal story, of not the Wilhelm Gusloff, but Wilhelm Gustloff the man, an ironic literality that very strongly calls to mind Freud when he speaks of the literality of the repetition compulsion. “It is this literality ... that possesses the receiver and resists psychoanalytic interpretation and cure. .. this scene or thought itself possesses, at will, the one it inhabits.” (Caruth, 6) Konrad’s deliberate and ritualized murder of Wolfgang is a literal recreation of the martyring of Gustloff. 
Konrad's infection by the traumatic memory of Tulla is very much in line with Freud's story of the burning child: like the father of the dead child, Paul reveals to us over and over again his "repeated failure to have seen in time" (Caruth, 108), and it plagues him throughout the book and past the end. ("It will never end. Never will this end.") Again and again he peppers accounts of his interactions with Konrad with what he should have done had he only known.


"I could have asked questions on parents' night, even if that had resulted in a confrontation with one of these narrow-minded pedants. I could have interjected, 'What's this about banning a report? Don't you believe in free speech?' ... But I didn't make it to any parents' nights." (Grass, 198) In fact, Paul comes to represent the trauma's latency period only because of his total refusal to accept that responsibility. He passed it on instead, and in full, to his son. The infection took place as a direct result of Paul's refusal to engage the story and do the work of assimilating it into his own life. He realizes only too late that the abnegation of this so-called "filial duty" had disastrous consequences for his son.

Conclusion:

Connecting Grass’ book with with trauma theory is a project in literary criticism, not in history or cultural studies. Grass’ book can be argued to be an indictment of German postwar literature, but Grass is not indicting German culture. A long-standing, unspoken cultural taboo forbids Germans from speaking of German victims of the war. A certain self-censorship has reigned for the entirety of the ‘68ers’ generation which makes the subject taboo for treatment in literature. The preferred and favored subject is täterschuld. Grass himself belongs to this generation. They are not, of course, responsible for the resurgence of neo-Nazi ideology at the margins. But Grass links this reemergence, at least in part, to a failure of narration on the part of his generation of writers. This failure is allegorically figured in the life of Paul. Like the father in Freud’s dream of the burning child, Paul can only realize again and again when it is too late; he can only see the missed opportunities. Like that father, Paul’s failure of seeing awakens him to a new reality, one which he now feels compelled to write, and to take on as his own responsibility. In this way Paul can be seen in Crabwalk as a stand-in for Grass himself. German postwar literature did not address the legacy of the second world war fully. Instead they embraced their täterschuld (perpetrator guilt) and were instrumental in ignoring Germans as victims of the war. They were instrumental in their silence about German suffering. Like Paul, they were right in some way to do this but in being right were immediately wrong. They refused to tell the stories of the first generation (the WWII generation) because Germans were not to be seen as victims. However, in distancing themselves and completely ignoring this aspect of the war, in not ever addressing or processing this integral piece of reality, they then passed it on undigested to the third generation. Grass seems to be implicating himself, as he is part of this generation. He seems to see something he does not like in the culture around him. He seems to be drawing a connection between this thing he does not like and the silence of the ‘68ers and their acceptance and undigested transferral of perpetrator guilt. The taboo of mentioning German war victims was always tied directly and immediately to holocaust denial. This self-censorship accomplished the desired ends for that one generation, but the law of unintended consequences is at work in the resurgence of neo-Nazi ideology shaped in part by the fact of the taboo surrounding re-telling of tragedies like the Gustloff. Trauma theory can link questions of generation, narration and re-telling to show that traumatic events can be inherited, and this is especially true of the body of post-war literature which abdicated its responsibility to create its own narra-

tive of a painful time in German history. This is not to say Crabwalk's account of the Pokriefke family is a direct metaphor for the failures of the second generation of writers. It is merely showing an allegorical microcosm of how the narrative withdrawal of one generation may result in unintended consequences for the third. Neo-Naziism is making inroads once more into German culture, and its perpetrators are people whose sense of history has been warped by the lack of a full public discourse. As a result, tragedies like that of the Gustloff have not been retold; or, more pertinently, they have been retold but only in the margins. The discursive center shies away from the topic of German suffering during the war. The tragedies of one generation have been silenced in literature. The law of unintended consequences has ensured that instead of politely going away when ignored, these stories are resurfacing and being used to reignite cultural problems which were hoped to be long gone.

References:

1. Caruth, Cathy (1996) Unclaimed Experience: Trauma, Narrative and History. The Johns Hopkins University Press, Baltimore.
2. Grass, Günter (2002) Im Krebsgang. Steidl-Verlag, Göttingen.



Comparison on the Crafting of Post-WWII Healthcare Legislation in the United States, Great Britain and Canada Jyoti Brar

Introduction:

Among the industrialized nations of the world, the United States is singular in that its federal government does not provide a system of universalized health care for its population. The United States provides health care programs like Medicare and Medicaid for limited portions of the public, such as the disabled, elderly and the poor. In an effort to understand the US healthcare system, it is necessary to see what it lacks in comparison to other countries. Great Britain, Canada, and other developed countries have created a socialized system of healthcare, which is not present in the US. Great Britain offers insurance to all citizens through the National Health Service; Canada does likewise through its National Health Insurance. Canada and Great Britain provide a good frame of reference from which to examine the US system of healthcare. Both countries are liberal democracies; although Canada and Britain follow a parliamentary system of national government, while the US is centered on federalism. In all three countries, the present archetype of health insurance came into existence through vigorous political action. All three had two factors in common: first, good timing allowed a favorable political climate, which is shaped by various interest groups and political majorities, to pave the way for the political entrepreneur, who is the second common factor. A political entrepreneur can be defined as an individual who takes advantage of the politically favorable circumstances of each country at a specific time and helps drive legislation through each country’s legislature. Each system of health insurance provided by the government in each of these nations has come about as a consequence of these different factors. The political entrepreneurs acted at the appropriate time, the elusive period which Jacob Hacker calls a brief “window of opportunity.”11 It was during this window that legislation was created and successfully passed.

Canada:

In 1963, national leadership in Canada changed to the Liberal Party, with Lester Pearson as Prime Minister. The party's election platform had been rearranged to include social policies in hopes of victory that year.10 Although the party regained power, it failed to win a parliamentary majority in 1963. Once in power, the government heard recommendations to regulate health insurance nationally, although this move was bitterly opposed by the Canadian medical profession. In 1964, a commission led by Justice Emmett Hall recommended that the federal government provide a medical insurance program for the poor.19


Hall went strictly against means testing for the Canadian population or subsidizing the poor so that they could purchase insurance. Rather, he felt the poor should be provided universal health insurance along with all other Canadians, creating a system of welfare for all. Pearson called for a federal-provincial conference in 1965 to promote discussion of proposals for a national health insurance model.10 In dealing with proposals, he wanted to incorporate several things: he wanted to make sure that the program would cover a comprehensive range of medical services, be universal, provide benefits portable from province to province, and provide benefits administered by a central public agency. Early on, there were objections from the province of Quebec. The cabinet continued despite opposition from provinces, doctors and the insurance industry.10 Opposition from physicians decreased from the levels seen during the passage of the Hospital Insurance and Diagnostic Services Act in 1957 once they realized that this new legislation would benefit them.9 Doctors were less likely to complain because they were guaranteed payment by new provincial plans to their liking. The most unsettling opposition was from the Minister of Finance, Mitchell Sharp, who disagreed with the legislation for economic reasons. This proved problematic, as Sharp was influential and important within the Canadian government through his position as chief of finance. Even so, the Medical Care Act created through all this passed in the House of Commons by a vote of 172 to 2. The essential entrepreneur behind the creation of the Canadian healthcare system was Lester Pearson, the Liberal Prime Minister. Pearson successfully played off the "social mandate" the Liberal Party had received after being elected on its social policy-focused platform. He successfully steered the bill through Parliament despite severe opposition from Mitchell Sharp, the Minister of Finance. His role as a successful entrepreneur was vital in giving Canada a true system of universalized health care. After WWII, an important window of opportunity emerged for the passing of legislation, as it was a time of social and economic change, leading to an increased role of the government in the daily lives of its citizens.19 The passage of this act, however, did not come until 1965, by which time the Liberal Party had ascended to power on a platform that held social policies to be important. This window of opportunity allowed the government led by Pearson to create the necessary legislation. There were few veto points that could prevent the passage of the bill, but the ones that did resulted in major changes to the plan.


One of the most important was the role of the provinces in the execution of the necessary programs. Originally, the federal government would have liked to be in control of a national health care system and also in charge of the spending and finances. However, when the provinces refused to follow this plan, the national government was forced to give in and allow them to have more control over spending issues. If the provinces refused to execute a more universalized healthcare system, it would be difficult to actively make them do so. So the Canadian government provided generous subsidies and matching amounts to provinces in order to execute these plans. Slowly, more provinces followed. The federal government lost much control over the project by maneuvering in this fashion. However, it was still able to negotiate with the provinces so that a form of national health insurance was passed. The success of the national health care legislation in Canada can be attributed to its government structure. In a parliamentary system, the majority in Parliament selects a government leader.19 The Prime Minister is chosen from the majority party, allowing a much easier passage of legislation.

United States:

Major legislation in the United States which provided basic health insurance was enacted through Medicare and Medicaid. Strong opposition to government-sponsored health care has always existed in the United Sates by the American Medical Association and other interest groups. So how did the government create legislation that actually managed to claw its way through enormous opposition? According to Jacob Hacker, “institutionalization employment-based health insurance set in motion opposed but interwoven forces cementing the place of private insurance in the workplace yet creating powerful new pressures to assist those left unprotected.”11 It is under the lead of private health care that Medicare was developed. Despite the increase in health care coverage due to private employers after the Second World War, many people were still not covered: only 38 percent of the aged who were no longer working had coverage.16 Some did not have the money to afford insurance, some didn’t have appropriate jobs which would offer insurance, and some didn’t have employers who offered insurance.11 At this point in time, the old, disabled, and very ill had no means to afford insurance, nor could they work to make the money they would need. Another important source of healthcare, the Blue Cross, was no longer functioning the way it previously had been. Its original approach was to charge the same rates to those it covered.11 However, this philosophy changed as commercial, for-profit insurance providers came into the market: Blue Cross could no longer afford to continue with its original methods. It was forced to use “experience rating” in its approach to rates to those it provided coverage. In this way, a larger group would be safer to provide coverage to because the group would limit risk by spreading it among a greater number of people. Yet this approach also led to a decline in low-risk groups to participate in coverage. However, with this new approach, the elderly, disabled, and chronically ill who were not attached to at a large group (through employment) covered by Blue Cross and other groups were not covered. Insurers were afraid to provide coverage to those who were high risk, such as the previous groups because they

felt that this would increase adverse selection risks, threatening their own business success.11 The acceptability of a health care program that focused specifically on the elderly was further supported by the American political belief that the elderly were a deserving group.18 There was no negative stigma attached to the group: the aged "could be presumed to be both needy and deserving...through no fault of their own."16 Also, middle-aged Americans were concerned about the ability to care for their own parents while paying for their children's education, as a greater number of American youth were attending colleges and universities.20 Having Social Security already set up made the setup of Medicare all the easier. As a successful program that the public grew to like, Social Security offered itself as an easy model for lawmakers to follow.18 Social Security worked by entitlement: payroll taxes set aside specifically for the program were paid while American citizens were working over the span of a number of years. In short, they would "earn" the right to benefits, which was very important because "in a country traditionally skeptical of public assistance to 'undeserving' recipients, the notion that social welfare benefits had been earned was politically crucial."18 As Social Security was seen as justified by American society, it was easier to provide insurance to the elderly without the notoriety of a welfare program, which it definitely was not. In fact, a promotion slogan for Medicare was "health care through Social Security."4 Through its OASI program, Social Security paid pensions to retirees and their survivors.18 The same elderly who qualified for Social Security would also qualify for Medicare. In fact, a significant share of those over 65 were not covered by Social Security and were thus ineligible.16 With the country's mood shifting in a direction that favored Medicare, the perfect "window of opportunity" was slower to arrive, though not far behind. The 1952 election of President Eisenhower, a Republican, along with Republican majorities in both the Senate and the House, turned politics in Washington toward the conservative. Eisenhower was in fact elected on a conservative platform very much against the healthcare proposals that had been put on the table, and against further government expansion. In 1954, Democrats may have gained control of both houses, but they still did not have enough of a majority.18 Also problematic were the several committees that resisted Medicare proposals, including the House Ways and Means Committee and the Senate Finance Committee. However, by 1957, the tide changed slightly for Medicare: Aime Forand, a Democratic representative from Rhode Island, introduced a bill for consideration. The Ways and Means Committee started hearings for Medicare in 1958. But because the new chair of the committee, Wilbur Mills, opposed this bill (also known as the Forand bill), its run in committee was very unsuccessful and it died there.7 The one positive result of the hearings was that they further pushed the Medicare issue into the public spotlight. In addition to opposition from Republican conservatives, the American Medical Association (AMA) began working against Medicare after the issue started receiving increased attention from Congressional hearings. The AMA, before the hearings and release of the bill, had simply ignored Medicare. However, in 1958, the AMA took a strong position against it.
To once again attract conservative support, the group connected Cold War socialism to



Medicare: creating health care for the elderly could possibly open the taboo door of socialism.12 The AMA considered any form of health insurance, even entitlement programs for the elderly, as opening the door for national health insurance. Again the organization actively worked against Medicare by lobbying Congress, influencing the public's thinking, and getting more interest groups on its side, such as the American Hospital Association and business organizations. The AMA raised public awareness of Medicare, though not necessarily to the negative side it had hoped for. Public meetings took place in thirty-eight cities between 1959 and 1961, sponsored by the Senate Subcommittee on Aging, chaired by Pat McNamara, which helped generate publicity.18 President Eisenhower stayed on the conservative side, against the program, although there was some pressure from Vice President Nixon, who was concerned that leaving Medicare untouched might harm his chances of winning the presidency in 1960.7 Social pressures led Wilbur Mills to see that the passage of a bill like Medicare was inevitable. Senator Robert Kerr of Oklahoma pushed a counter bill to that of the liberals which would have expanded aid to states giving assistance to the poor elderly.18 This bill, called the Kerr-Mills bill, would have put out federal matching grants of 50-80% of costs to those states that agreed to participate.18 The workings of the presidential election in 1960 helped to push legislation in Congress; Senator John F. Kennedy, a member of the Senate Subcommittee on Aging, ran for the presidency in 1960 for the Democratic Party. The Democrats pushed Medicare as a central theme of their platform. Again, publicity for the issue created additional pressure that helped push the Kerr-Mills bill into the forefront, leading to its passage. The AMA agreed to help with Kerr-Mills, as it preferred it to Medicare.18 Although Wilbur Cohen, one of the designers of Medicare, attempted to add Medicare to Kerr-Mills, the possibility was blocked by conservative opposition. In 1960, Kerr-Mills passed in both houses of Congress and Eisenhower signed the bill into law. Despite its passage, Kerr-Mills was unsuccessful: by 1965, just five states had used 90% of its funds.16 The election of John Kennedy in 1960 gave Medicare the support from the president it needed, which it had lacked during Eisenhower's time. However, Kennedy's efforts were not successful. The conservative coalition of southern Democrats and Republicans in Congress was strong enough to block more liberal bills. Also, at this point, Wilbur Mills still disagreed with Medicare, thinking it would create monetary and fiscal problems for Social Security.18 The victory of Lyndon Johnson in the 1964 election sealed the passage of Medicare.18 Medicare had clear support from the President and a large majority of liberal Democrats in both houses of the legislature. The election provided a brief window of opportunity for liberals to take control and provide a brief hiatus to the long reign of the Conservative Coalition in Congress since the New Deal. Johnson's administration decided to push the King-Anderson bill that had come out a few years earlier. It provided 60 days of hospital care and 60 days of nursing home care. Republicans, realizing the passage of Medicare was inevitable, decided to push their own version of Medicare. They criticized the Democrats for doing too little.
John Byrnes, a Republican on the Ways and Means Committee, created a bill for voluntary coverage that would "subsidize the purchase of private insurance by the elderly"; its benefits were larger than Medicare's, as they would cover physician visits and drug expenses.7


The AMA put out a separate proposal called Eldercare, which as a state-run program would subsidize the purchase of private health insurance for the elderly below the poverty line.18 However, Eldercare was not taken very seriously in Congress and did not get anywhere. Wilbur Mills played an important role in the final legislation of Medicare. As chairman of Ways and Means in the House, he played an important role in the final passage of any bill relating to it. He decided to create a "package" that had not been thought of before by other legislators: he would group together different alternatives into a "three-layer cake" which included hospital insurance for the elderly (Medicare part A), a voluntary program of physicians' insurance for the elderly (Medicare part B), and an expanded Kerr-Mills program of federal assistance for state medical services payments for the poor (Medicaid).18 Mills went against criticisms that he had once supported in order to create this piece of legislation. According to Wilbur Cohen, "Mills had taken the AMA's ammunition, put it in the Republicans' gun, and blown both of them off the map."12 Lyndon Johnson signed the bill into law on July 30, 1965. Mills was a very important actor in the program. Clearly, without his help Medicare and Medicaid would not have come out the way they did. He was the organizer of the "three-layer cake" of Medicare parts A and B and Medicaid. As he was chairman of the Ways and Means Committee, without his support it would have been extremely difficult, if not impossible, to pass such a bill with the speed and ease that it passed. Even though Mills saw such a passage as inevitable, it would still have been difficult to pass such legislation without his support. During this time period, the Democrats held a majority in Congress and the president, Lyndon Johnson, was also a Democrat. This led to a unique opportunity for the passage of many social policy bills under the umbrella of LBJ's Great Society. It made the passage of a Medicare bill very likely. The conservative coalition held less than half the number of seats it had held in the past, allowing liberals to push forward with their programs. Around this point, there were few critical junctures which would have prevented the passage of this bill, as the main opposition, Wilbur Mills, came to see its passage as inevitable. Earlier, however, Mills' role as chairman had prevented passage. His switch from opposition to support for Medicare created momentum to come up with a bill and then pass it in Congress. The US system of separation of powers and checks and balances makes it difficult to pass legislation quickly. Even a small minority can prevent the passage of a bill, or at least make an effort to stall it. This is clear from the influence that Mills carried as chairman of the Ways and Means Committee, which is a major, influential committee in Congress.

Great Britain:

In Great Britain, the National Health Service Act of 1946 established the National Health Service, later implemented in 1948. The Second World War created many problems in England economically and financially. The postwar economy was affected by serious problems and it was not in the best financial state to approach such a bold government program.15 However, “psychologically and politically, the British people could not have been more receptive to the new


program than they were in 1948." Demand was great and all political parties were in support of it.15 It was signed into law on July 5, 1948 and was based on the need and inability of the masses to get insurance. The National Health Service (NHS) would be funded by the general public through taxes and administered by the Department of Health.17 Before this method, a "panel" system had been set up in 1911 by David Lloyd George. Even prior to the end of the war, there was a strong interest in government involvement in the health care system of the United Kingdom. Technology had advanced: there was an improvement in antibiotics and anticoagulants, the discovery of cortisone and the medical application of nuclear physics.15 Yet the biggest problem was cost. In June 1941, the House of Commons decided to conduct a study of different methods of social insurance and allied services. It would be done by a committee of twelve headed by Sir William Beveridge. They put out the Beveridge report, which showed that many people in the cities did not even have the income needed for minimum survival. There was an urgent need to expand coverage to include people not protected by health insurance and to cover potential health risks that demanded care.15 The program emphasized a universalized approach to healthcare. It would ensure that every citizen had available whatever medical treatment he required, in whatever form he required it, as well as the availability of insurance without regard to economic status or payment of insurance contributions.15 The report met little opposition in the House of Commons, yet there was substantial opposition from the British Medical Association, which was concerned about the possible phasing out of private practice for doctors.15 The report created momentum towards a universalized health care system sponsored by Great Britain for its citizens. The Brown plan was introduced by the Minister of Health, Ernest Brown, soon afterwards.1 It pushed for a salaried service of doctors, which was strongly objected to by the British Medical Association (BMA). The plan was dropped and Brown was soon replaced by Henry Willink. The Ministry of Health then began to hold conferences with doctors, voluntary hospitals, local authorities and other interested groups to gather opinions on the structure of the future health service.15 The result of these conferences was the formation of a White Paper, released in February 1944 to set a health plan into action.15 With the release of the paper, Parliament and the English nation would be able to discuss the issues it contained. The government could incorporate its features into a bill of some sort and then submit it to Parliament for discussion, alteration and passage.15 The White Paper pushed for a central and local administration of the health service program.15 Responsibility for it would fall on the Ministry of Health, which would be advised by experts on the Central Health Service Council (created by the plan). The Minister of Health would be accountable to Parliament. The Central Medical Board would be established to manage day-to-day events in local committees.15 It would also employ doctors, who would be paid a set salary. The White Paper, entitled A National Health Service, established health care as a right that ought to be guaranteed to all citizens. However, there was substantial opposition to the plan from doctors. Many were concerned about whether or not local authorities should control medical service.
In a poll, 80 percent of doctors voted that they should not.15 There was "much support for many of the specific reforms proposed but strong opposition to the White Paper as a whole."15 The British Medical Association opposed the bill, citing a desire to protect the economic and professional interests of its members.15 Opposition centered mainly on the implementation of a salaried service,6 as well as on the proposed direct control over the distribution of medical personnel providing services in Great Britain. The issue was at a standstill.

At the very end of WWII there was a rise in social consciousness and a new, stronger sense of community in England, such that "future problems would be tackled with the same sort of collective effort which went into the war itself."6 After the war, the English public did not see Winston Churchill as the appropriate choice to lead the country through postwar politics, and Clement Attlee of the Labour party was elected prime minister. A landslide victory restoring the Labour party to power after the Conservatives had held a majority in Parliament, the election served as a social mandate for liberal reform that was slowly turning England into a welfare state.

An important actor in the enactment of the NHS in Britain was Aneurin Bevan, who served as Minister of Health in Attlee's cabinet. During the 1945 election campaign he told an audience, "We have been the dreamers, we have been the sufferers, now we are the builders. We enter this campaign at the general election, not merely to get rid of the Tory majority. We want the complete political extinction of the Tory Party."2 After the Labour party's rise to power in 1945, Bevan was appointed Minister of Health by Attlee and played a vital role in the passage of the National Insurance Act of 1946 and the National Health Service Act, which created the NHS. Bevan was a strong socialist and very protective of the NHS program.15 He preferred to "consult" rather than "negotiate" with other groups involved or interested in the legislation, such as medical personnel.15 Nevertheless, he effectively made concessions to the British Medical Association, agreeing to abandon the salaried service and compromising on pay beds in hospitals. Though the BMA had opposed the program, it took that position only to protect the interests of its members; once these concessions had been made by the Labour government, the BMA actually wanted to expand the NHS and create an even more progressive program.15 The BMA had always been very reform-minded.1

The services offered by the National Health Service were for the most part based on the Beveridge Report and the White Paper. The services were almost free to the user, they were financed from centralized taxes, and everyone was eligible for care, including temporary residents and visitors to the country.17 Its original setup included fourteen regional Hospital Boards to fund and oversee hospitals, along with self-employed general practitioners, dentists, opticians and pharmacists;17 many services were provided by local authorities.

The National Health Service bill for England and Wales was called up before the House of Commons on April 30, 1946. Little change to the bill took place, and it passed by a better than two-to-one vote on July 25 of that year.15 When the bill was read a third time in the House of Commons there was again little opposition, and what remained was quelled by Charles W. Key, Parliamentary Secretary to the Minister of Health, who presented the bill to the House.
The third reading resulted in an even greater margin of victory than before.15 The bill was sent to the House of Lords for approval and received the royal signature, becoming law, though the service was not launched until July 5, 1948. In general, the National Insurance Act of 1946 provided expanded unemployment, sickness, maternity and widow's benefits and better state pensions, funded by compulsory contributions from employers and employees.17

A government spokesman said during a debate in the House of Lords that the bill "is not the product of any single party or of any single Government. It is in fact the result of a concerted effort, extended over a long period of years, involving doctors, laymen and Government, to improve the efficiency of our medical services and to make them more easily accessible to the public."6 The program truly was such: it may have been enacted into concrete legislation by the socialists, but it was not entirely their idea, for its foundations, such as the Beveridge Report and the White Paper, predated the Labour party's rise to power in 1945.

Various actors played a vital role in turning opposition to the program into support. Perhaps the most important was Bevan, the Minister of Health. He drew the BMA toward the program, ending the largest organized opposition to an NHS in Great Britain; without that opposition, passage was much easier to secure. The end of WWII provided a perfect window of opportunity, and the legislation was passed a year after the war ended. There had already been government interest in creating health care legislation, and at the end of the war, with the country in economic and financial distress, more citizens looked to the government to become more involved. With this window of opportunity and an almost "social mandate" from the English public, government officials like Bevan were able to forge ahead confidently with legislation, and even opposition such as the BMA's slowly wore away.

In England there were few veto points that could have prevented the passage of the bill. Perhaps the most obvious potential juncture would have been continued opposition by the BMA strong enough to eventually destroy a health care bill, but by making concessions, Bevan avoided further problems and instead won very strong support from the organization. Another reason for the success of the NHS was the structure of government in Great Britain. The English parliamentary system contains few veto points, or places where the bill could have been killed off. The Labour party held the majority in Parliament and hence filled the positions of prime minister and cabinet, so a support network was already built in between the legislative and executive branches of the English political system. This unity allowed for easier passage of laws, and the prime minister and his cabinet were able to introduce their bills directly into Parliament for approval, making the legislative process much easier.

Conclusion:

By examining the histories of these two countries, we can identify the different factors that led to the passage of two very different pieces of legislation under the general umbrella of government-sponsored health care programs. The United States and Great Britain had much in common, but their reforms took place at different times and under different circumstances, and the political character and climate surrounding each piece of legislation differed as well, leading to differences in implementation. Each had one particular actor who took the reins and almost single-handedly created the legislation, or at least thrust it into the spotlight and secured its passage. In the UK and US, the main actors were not the prime minister or the president, although both Attlee and LBJ supported their respective bills. The men in charge were Bevan, the Minister of Health, and Wilbur Mills, chairman of the Ways and Means Committee, who was not even of the same party as the president. The time periods of passage were different, though all came after WWII; for Britain, it was directly afterwards. In the US after the war, there seemed to be an increased expectation among the public that the national government would take a more active role in the daily lives of its citizens. Still, despite the passage of Social Security, there were many conservatives in the US, including the conservative coalition, who did not agree that the government should interfere so much in citizens' lives. Yet with the Social Security program already established for the elderly, passage was much easier. History is important in analyzing each of these countries because it reveals the particular path each piece of legislation took on the road to becoming law: the critical junctures that took place, the path dependence, and the actors involved in producing the current systems of health care.

References:
1. American Medical Association. The British Health Care System. Chicago: American Medical Association Press, 1976.
2. "Aneurin Bevan." Online. Available: http://www.sparactus.schoolnet.co.uk/TUbevan.htm. Accessed December 2, 2004.
3. McGrath, Anthony. "The National Health Service." Online. Available: http://pegasus.cc.ucf.edu/~gropper/NHSOverview.html. Accessed December 2, 2004.
4. Ball, Robert. 1995. "Perspectives on Medicare: What Medicare's Architects Had in Mind." Health Affairs 14, no. 4: 62-72.
5. David, Paul A. "Clio and the Economics of QWERTY." Online. Available: http://www.pub.utdallas.edu/~liebowit/knowledge_goods/david1985aer.htm#_ftnl. Accessed December 2, 2004.
6. Eckstein, Harry. The English Health Service. Cambridge: Harvard University Press, 1958.
7. Feingold, Eugene. Medicare: Policy and Politics: A Case Study and Policy Analysis. San Francisco: Chandler, 1966.
8. Gottschalk, M. 1999. "The Elusive Goal of Universal Health Care in the US: Organized Labor and the Institutional Straitjacket of the Private Welfare State." Journal of Policy History 11 (4): 367-98.
9. Graig, Laurene A. Health of Nations: An International Perspective on U.S. Health Care Reform. Washington, D.C.: Congressional Quarterly, 1993.
10. Grey, Gwendolyn. Federalism and Health Policy: The Development of Health Systems in Canada and Australia. Toronto: University of Toronto Press, 1991.
11. Hacker, Jacob. The Divided Welfare State. Cambridge: Cambridge University Press, 2002.
12. Harris, Richard. A Sacred Trust. New York: New American Library, 1966.
13. Iglehart, John. 2000. "Revisiting the Canadian Health Care System." The New England Journal of Medicine 342 (26): 2007-2012.
14. Immergut, Ellen. Health Politics: Interests and Institutions in Western Europe. New York: Cambridge University Press, 1992.
15. Lindsey, Almont. Socialized Medicine in England and Wales. Chapel Hill: The University of North Carolina Press, 1962.
16. Marmor, Theodore R. The Politics of Medicare. New York: Aldine De Gruyter, 2000.
17. NHS. "History of the NHS." Online. Available: http://www.nhs.uk/england/aboutTheNHS/history/default.cmx. Accessed December 2, 2004.
18. Oberlander, Jonathan. The Political Life of Medicare. Chicago: University of Chicago Press, 2003.
19. Wilson, Donna. The Canadian Health Care System. Health Canada booklet, 1995.



