THE TEXAS A&M UNDERGRADUATE JOURNAL
ALSO IN THIS ISSUE History, Biogeography, Creative Non-fiction, Biology, Nuclear Engineering, Economic Development, International Relations, Forensic Engineering, Visual Art
FALL 2011 | VOLUME 3
Explorations wishes to thank the Association of Former Students for their invaluable financial support. Both AFS and Explorations share an interest in promoting the research and scholarly pursuits of our students in engineering, social science, humanities, science, business, and education. AFS continues to demonstrate its commitment to the academic interests and welfare of Texas A&M University. The publication of Explorations would not be possible without the generous sponsorship of the AFS, which directly facilitates our ability to showcase the best of undergraduate Aggies' scholarly accomplishments. With the support of the AFS, Explorations will continue to present these outstanding achievements to Texas A&M, the state of Texas, and beyond. Thank you.
Undergraduate Journal VOLUME 3, FALL 2011
STUDENT EDITORIAL BOARD
Sarah Armstrong, Annabelle Aymond, Alifya Faizullah, Bryce Gagliano, Heedeok Han, Paulina Martinez, Jenny Russell, Lupita Salgado, Janvi Todai

MASTHEAD DESIGN
Andrea N. Roberts
PAGE LAYOUT/DESIGN
Sarah Armstrong, Annabelle Aymond, Lupita Salgado

MANUSCRIPT EDITOR
Gabe Waggoner

COVER ART DESIGN
Matt Young
FACULTY REVIEWERS
Gen. John Van Alstyne, Dr. Sarah Bednarz, Dr. Ben Crouch, Dr. Sumana Datta, Dr. John Ford, Dr. Ed Funkhouser, Dr. Larry Griffing, Prof. James Hannah, Dr. Bruce Herbert, Prof. Karen Hillier, Dr. Janet McCann, Dr. Rita Moyes, Prof. Mary Ciani Saslow, Dr. Roger Schultz, Dr. Susan Stabile, Dr. Elizabeth Tebeaux, Dr. Kimberly Vannest
FACULTY/STAFF ADVISORY BOARD
Dr. Sumana Datta, Dr. Ed Funkhouser, Dr. Larry Griffing, Ms. Angela Hines, Dr. Duncan Mackenzie, Ms. Tammis Sherman, Dr. Elizabeth Tebeaux

explorations.tamu.edu
Henderson Hall, 4233 TAMU, College Station, TX 77843-4233, U.S.A.
Copyright © 2012 Texas A&M University. All Rights Reserved.
Letters from Faculty My rationale for wanting to resurrect the Undergraduate Journal of Science as the Journal of Undergraduate Research emerged from one main concern: If TAMU wants to encourage undergraduate research, a journal devoted exclusively to undergraduate research would provide a venue for the best of A&M student research. As a development tool, Explorations requires students to understand that knowledge cannot exist unless it is communicated: the larger the audience for research, the greater the impact of new ideas and knowledge emanating from research. As a veteran teacher of technical communication, I believe that students involved in research need to learn how to explain what they have discovered to a broad range of readers. If our students can't, then they really do not understand their research. Explorations should be a testimony to the high quality of thought and writing that a land-grant institution like TAMU seeks to develop in its undergraduate researchers. Dr. Elizabeth Tebeaux Professor of English Texas A&M University
Undergraduates should have the opportunity to explain their work in a venue that can be widely disseminated and referenced. In helping to establish Explorations, I wanted our undergraduates to "own" it, while addressing concerns by faculty mentors that publishing in Explorations would compromise the opportunity to publish data in professional peer-reviewed journals. For ownership by all undergraduates, the journal includes ongoing scholarly activity in the humanities and liberal arts, as well as in the sciences. To reserve most of the data for publication in professional journals, the format emphasizes the importance of students explaining their work to a more general audience (prospective employers, peers, mentors, and Mom). Contributors are guided by an outstanding student-run editorial board, publication team, and a student and faculty peer review process. Explorations has the potential to grow beyond Aggieland and become a publication paradigm in undergraduate education and scholarship for our nation and the world. Dr. Lawrence Griffing Associate Professor, Biology Texas A&M University
Almost three years ago I joined a group of faculty who were exploring means to provide the many students at Texas A&M who were engaged in undergraduate research a forum to publicly display the exceptional student work that is done on campus every year. Although there had been student-centered, student-led scholarly journals previously, most had focused on a particular area of inquiry. We envisioned a publication that would represent all areas of scholarly work on campus, from the laboratory to the world outside and from deliberate experiments to acts of creation. Three years later, Explorations has become a student-run, faculty-supported publication with a reputation for presenting outstanding work from all sides of our campus. Just as importantly, Explorations articles are written for a general audience, so that the entire Aggie family and beyond can learn about the amazing things our students do as part of their education. Dr. Sumana Datta Executive Director, Honors and Undergraduate Research Associate Professor, Biochemistry and Biophysics Texas A&M University
Explorations | Fall 2011
About the Board
Sarah Armstrong is a junior, majoring in Economics and Political Science, with a focus in International Relations. She has interned abroad, educating youth about violence against women, and, as a member of the MSC Wiley Lecture Series, helps to produce programs that highlight topics of global significance. Upon graduation, Sarah plans to attend graduate school and hopes to work in the fields of public policy and journalism. In her spare time, she is an amateur photographer and enjoys running, poetry, and yoga. Annabelle Aymond is a sophomore Telecommunications and Media Studies major. Annabelle enjoys portrait photography and graphic design and has a particular interest in languages, including Japanese and German. She wants to pursue a professional career in photojournalism and advertising.
Alifya Faizullah is a senior Nuclear Engineering major. When she is not studying or working on Explorations, Alifya enjoys painting and astronomy. She hopes to work in the nuclear industry as a power plant operator, or provide solutions in fuel cycle and waste management. Bryce Gagliano graduated in May 2011 with a degree in Biology and a minor in Neuroscience. He plans to make a career in medicine, and is currently a research assistant at the University of Texas Health Science Center at Houston in the department of Microbiology and Molecular Genetics. When not working and studying, Bryce enjoys theater, endurance sports, oceanography, and scuba diving.
Heedeok Han is from Keller, Texas. He graduated in May 2011 with a degree in Biomedical Science. He is interested in studying medicine and is currently studying for the MCAT. He is currently spending time with his family in South Korea. When he is not studying for the MCAT, Heedeok enjoys playing basketball.
Paulina Martinez is a senior and is double majoring in Sociology and Spanish. She hopes to work for a non-profit organization improving public education in a foreign country. Her hobbies include traveling and web design. Jenny Russell is a senior International Studies major on the Politics and Diplomacy track, focusing on Russia. She is interested in public policy and hopes to join the Foreign Service. Outside of school, Jenny's interests include training hunter/jumper sporthorses, ballroom dance, martial arts, and Russian literature.
Lupita Salgado is a senior Communication major and Human Resources Development minor. After graduating in May 2012, she plans on attending graduate school to pursue a master's degree in Human Resources Development. In her spare time, Lupita enjoys mentoring, reading, going out for ice cream, and playing tennis.
Janvi Todai is double majoring in Business Honors and Finance. She hopes to be a physician and plans to attend medical school upon graduation. Beginning January 2012, she will be studying abroad through an exchange program in Copenhagen, Denmark, and will graduate the following May. In her spare time, she plays the piano, paints, plays tennis, reads, and practices yoga.
Can Home Environment Impact Infants' Motor Development?
Assessing infants' home environments could help optimize motor development. By Alyssa Blessing
Biogeography and the Blue Jean Dart Frog
A study in Central America investigates the differences between two populations of blue jean dart frogs. By Kevin Stiles
Advancing Cancer Research Through Canine Osteosarcoma
A promising new cancer treatment tested in dogs may soon be applicable to humans. By Leslie Swirsky
From a Different Perspective Photography allows us to see the world more closely and view it from a new perspective. By Eesha Farooqi
National Economic Development and Conflict Behavior: Lessons for Syria Abnormally high or low economic development in a country may make war inevitable. By Anas Al Bastami
Table of Contents
explorations FALL 2011

Can Female Swordtail Fish Smell a Fight?
In swordtail fish, the attractiveness of male pheromones to females may be affected by the amount of male-male aggression. By John Wilson
An artist draws on Mayan mythology to create a piece expressing her personal world view. By Blanca Tovar
Snow Leopards in Western Mongolia: A Study of Population Density Scat samples in Western Mongolia help determine the population density of snow leopards. By Yvette Halley
Project CORONA: The Journey to the World's First Spy Satellite
Recently declassified documents shed light on political concerns behind Cold War spy satellite technology. By David Glasheen
Net Neutrality in the 21st Century
The changing face of Internet regulation may affect future access. By Robert Scoggins
Man and Boat: A Voyage in the Art of Boatbuilding
A maritime scholar experiences part of nautical culture by building his own Banks dory. By Brett Lindell
On the cover
Climate Change: Looking for Answers in Forest Soil
Fertilizers may affect the way forests control atmospheric carbon dioxide by influencing microbial respiration in the soil. By Justin Whisenant
Preventing Tube Failure in a Nuclear Reactor
A team studies a rare but dangerous type of nuclear accident. By Christopher M. Chance, Jordan Green, Alan E. Lee, Chris Pannier & Robert J. Seager, Jr.
Preventing the Spread of Valley Fever
Researchers use a mathematical model to show how different burial practices for dogs affect the prevalence of Valley Fever. By Amy Clanton, Laura Harred, Chris Jones, & Devin Light
This is Not a Lizard
An artist uses a scratchboard to create art with both realistic and conceptual elements. By Jacob Patapoff
Are Oil, Corn, Cattle, and Salmonella All Present in Our Dinner Plates? Cattle feeding practices may affect the presence of Salmonella in meat products. By Santiago Ramirez
Applying Forensic Engineering to the Construction Industry Poor infrastructure has led to fatalities in the past, but analysis of these failures may improve future projects. By Robert Pinkston
Table of Contents, cont'd
Memorable Design Logo
An analysis of effective logo design shows its impact on business. By Lori Lampe
Meet the Authors
Can Home Environment Impact Infants' Motor Development? Proper stimulation in an infant's home environment affects the infant's ability to learn new motor skills, which in turn shape motor, cognitive, and academic development. The Affordances in the Home Environment for Motor Development—Infant Scale (AHEMD-IS) helps measure opportunities for optimal motor development, learning, and growth. By Alyssa Blessing
Without proper stimulation from the environment and opportunities to learn new motor skills, infants could face challenges and delays in their motor, cognitive, and academic development. To develop fully and live a functional, long life, every infant and child must acquire motor skills. The level of motor development is a crucial factor in child behavior.1 Research in motor development suggests that "environmental stimulation plays a critical role in optimal human development during the early stages of life."2 Recently, researchers from the Motor Development Lab at Texas A&M University developed the Affordances in the Home Environment for Motor Development—Infant Scale (AHEMD-IS) in partnership with the Neuromotor Development Lab at UNIMEP, in Brazil. The purpose of this assessment is to gather information about infants' (aged 3–18 months) home environment, focusing on the opportunities each infant has to develop a variety of motor skills. "Affordances" are defined as "opportunities that offer the individual potential for action, consequently learning and developing a skill or a part of the biological system."3 Infants experience learning and growth through affordances in their everyday activities and environment by playing with toys, crawling around furniture in the home, and interacting with nurturing adults and other children. The AHEMD-IS was designed to be a parent-friendly home assessment. Parents or caregivers answer various questions about infant characteristics (e.g., birth weight and whether the infant was born prematurely) and family characteristics (e.g., housing type, how many other children and adults live in the house, and the education level of the father and mother). The rest of the questionnaire poses questions addressing physical space (outside
and inside), daily activities, and play materials. Specialists analyze the responses to determine whether the infant is receiving adequate affordances for optimal motor development. Previous research, analysis, and comparison to earlier tools have validated the AHEMD-IS as an instrument to evaluate infants' home environment. The instrument can be used and applied in a wide variety of settings. Not only can it be used to examine how environmental conditions affect the development of a variety of motor skills in infants, but it can also help parents give their children adequate and beneficial stimulation. Furthermore, the AHEMD-IS can help improve relationships between parents and medical professionals, therapists, and teachers.4 AHEMD-IS results can help predict future motor delays and problems and promote readiness for school, especially in at-risk infants.

Methods
Since I joined the Motor Development Lab in the Texas A&M Health and Kinesiology department, my job has been to help oversee the refinement of the AHEMD-IS. This process involved analyzing feedback from experts. We sent a letter explaining the purpose of our project and an evaluation form to 38 researchers and clinicians in infant motor development. Our evaluation form included a rating scale5–6 to evaluate each question on several bases:

• Clarity (the item is well written; what is asked is clear to parents)
• Importance (the item is relevant and reflects the aspect to be assessed in the home environment)
• Discrimination (the item has the potential to discriminate between home environments that offer more or less the same opportunities for infants' motor development)
• Ease of observation (the item represents relatively common situations, easily observable by parents)
• Cultural conflict (the item does not present cultural conflict)
• Orientation to parents (the item has the potential to inform parents how to promote opportunities that stimulate motor development in the home environment)
We also encouraged experts to offer additional anecdotal suggestions. To help create the best possible final version, I organized, ranked, and scored all the feedback from the evaluation forms.

Results
Seventeen experts responded to our request for feedback, giving us a 45% response rate. To refine the AHEMD-IS, we reduced the number of questions from 48 to 41 and clarified the wording of selected questions. For example, we removed a question asking about household income from the Family Characteristics section because experts felt that the inquiry was not relevant to what we were aiming to evaluate in the scale. For all questions, we changed "we" to "I/we" to be sensitive to single-parent families. In the section regarding Play Materials, we added or removed pictures and figures to make each example as clear as possible. To the beginning of each section, we also added a short note to the parents to clarify the purpose of the Infant Scale. The general consensus of expert opinion was that the instrument "holds promise for advancing science with regards to environmental factors that should be of interest in helping children acquire motor skills" and "will address a need and help direct interventionists, families, infant caregivers, and others."

Discussion

We designed this instrument to help therapists, educators, and families create the most nurturing environment to help infants develop and acquire motor skills. As experts pointed out, the instrument holds promise for regular household use and clinical use. Many possibilities exist for using the AHEMD-IS. For example, research using the Alberta Infant Motor Scale to examine the relationship between the home, nursery school environment, and motor development of 6- to 18-month-olds demonstrated that infants whose homes had higher AHEMD-IS scores displayed significantly better motor scores; 19% had lower home scores and lower motor scores.7 More recently, the Motor Development Lab at Texas A&M found that opportunities in the home environment predict motor development, especially in the category of Play Materials.8 We believe that using the AHEMD-IS can help prepare families to provide adequate environments early in an infant's life, leading to early intervention and treatment for those infants and children who may be more likely to develop below-average motor skills. The pictures and figures of the Play Materials section show parents what kinds of toys and objects can stimulate optimal growth and motor skill development. Also, with a stimulating home environment, infants and toddlers can be more ready for interactions in school and childcare when the time comes. Finally, I want to continue participating in research that will help infants who are at risk and at a disadvantage for developing normally. I want to help give them the greatest opportunities for intervention and prevention of problems. Research such as this project and many others in the field are crucial for the progression of my chosen future profession, pediatric physical therapy.

Acknowledgments

I thank the Motor Development Laboratory at Texas A&M, the Neuromotor Development Lab at UNIMEP in Brazil, Priscila Caçola, Denise Santos, and Carl Gabbard.

References

1. Gabbard C, Caçola P, Bobbio T. Studying motor development: a biological and environmental perspective, pp. 129–139. In E. Kahraman and A. Baig (eds.), Environmentalism: Environmental Strategies and Environmental Sustainability. Hauppauge, NY: Nova Science Publishers, 2009.
2. Askari P, Haydari A, Nezhad MZ. Relationship between affordances in the home environment and motor development in children age 18–42 months. Journal of Social Sciences 2009;5(4):319–328.
3. Hirose N. An ecological approach to embodiment and cognition. Cognitive Systems Research 2002;3:289–299.
4. Gabbard C, Caçola P, Santos DCC. Avaliação e Intervenção do Ambiente Domiciliar para o Desenvolvimento Motor Infantil. [Assessment and intervention of the home environment for child motor development.] In N. Valentini and R.J. Krebs (eds.), Intervenções [Interventions]. In press.
5. Harris S, Daniels L. Content validity of the Harris Infant Neuromotor Test. Physical Therapy 1996;76(7):727–737.
6. Habib E, Magalhães LC. Development of a questionnaire to detect atypical behavior in infants. Revista Brasileira de Fisioterapia 2007;11(3):177–183.
7. Schobert L. Motor development of infants in day-care centers: a view on different context. Master's thesis. Porto Alegre: Federal University of Rio Grande do Sul, 2008.
8. Caçola P, Gabbard C, Santos DC, Batistela AC. Development of the Affordances in the Home Environment for Motor Development—Infant Scale (AHEMD-IS). Pediatrics International 2011 Apr 20. doi:10.1111/j.1442-200X.2011.03386.x. [Epub ahead of print.]
Biogeography and the Blue Jean Dart Frog
Investigating the differences between two populations of blue jean dart frogs can offer insight into the causes of variations within species. Quantifying differences becomes the first step in this biogeographical study, which could eventually assist in conserving biological diversity. By Kevin Stiles
Biogeography—the study of the distribution of the world's species—aims to conserve existing habitats, a goal becoming increasingly important as world biodiversity dwindles. Studies conducted in many habitats investigate the ecological forces constantly driving evolution and speciation. An understanding of these forces will give us insight into species diversity and help focus our conservation efforts. The blue jeans dart frog (Oophaga pumilio) is a common and recognizable Central American amphibian. The species ranges from Nicaragua to Ecuador in the south, and the frogs are most common in Panama and Costa Rica. Adults are characterized by their vibrant red-orange body color and blue legs, hence the moniker "blue jeans." Most studies conducted on the frogs in Costa Rica have examined patterns of reproduction and territorial behaviors; however, rigorous comparative studies in the region are few.

Methods
I sought to compare two populations of the blue jeans dart frogs by studying body color patterns and microhabitat differences between two different regions of Costa Rica. I observed the frogs over a 2-week period in July 2010. My procedure was to catch, measure, photograph, and release frogs. I also recorded details about the microhabitat (the specific location where the frog was observed). The goal was to quantify differences in size and coloration pattern to understand the extent to which populations of the same species can differ, even if separated by only 100 km. The tropical forests of Central America embody some of the world's greatest biodiversity. As the human population continues to expand, these tropical rainforests become increasingly important. Costa Rica alone, home to more than 1,190 species of terrestrial vertebrates, has approximately 4% of the total world species.1 Many areas of Costa Rica have been designated biological reserves, whereas others are well-established research centers. I conducted this research in two areas: the Children's Eternal Rainforest and the La Selva Biological Station. The Children's Eternal Rainforest is a reserve in central Costa Rica and covers a large area of forest with a wide range of habitats and elevations. Here, Texas A&M established the Soltis Center in 2009 as a place for student research in tropical ecology. The La Selva Biological Station, in the Heredia province, on the Caribbean coast,
contains several different forest ecosystems and is a premier location for tropical research. My research focused on observed differences between populations of these frogs at La Selva and Soltis, two geographically connected yet strikingly different forest ecosystems. These tropical rainforests offer great insight into the diversity of organisms, particularly within a species. Recording detailed variations between the populations may help us understand the extent to which regional geography can create diversity, even within the same species. Such studies may help clarify the ecological forces that drive evolution and give us targets for conservation efforts in the future.

Results
I saw and described 8 frogs at the Soltis Center location and 13 at La Selva. A striking difference between the two populations was the presence of a vibrant light-blue color on the underside of some of the frogs (Figure 1). Although the body was a bright red-orange and the legs a dark blue, large splotches of light blue covered up to 75% of the underside of some frogs and sometimes the underside of the limbs. However, this surprising coloration was not prevalent in the Soltis population (the few frogs that had it had only small, almost unnoticeable patches). The La Selva frogs, however, had striking large splotches and streaks of light blue covering their undersides, and a few of them had light-blue areas under the limbs and on the hind feet. The spotting on the frogs also varied
Figure 1: A vibrant light-blue color on the underside of some of the frogs demonstrates a striking difference between the two populations.
Figure 2: The Soltis population
Figure 3: The La Selva population
Figure 4: The juveniles did not have the vibrant coloring of the adults and had too many spots to count.
greatly. Spotting refers to small black spots on the frogs, usually on the lower back side but often diffusing up toward the head. The Soltis population generally displayed little spotting; for those that did, the spots were easy to count (Figure 2). The La Selva frogs, however, had extensive spotting that sometimes appeared to be a mottling on the frog's back (Figure 3). In fact, only one frog at this location had no spots. Some frogs that I found were juveniles, which had different characteristics (Figure 4). These juveniles did not have the vibrant coloring of the adults and had too many spots to count. I excluded juveniles from calculations of average frog length and from comparisons of overall coloration patterns.

Discussion
The blue jeans dart frog is a species that may offer insight into what causes variation within a species. The two populations, though separated by only around 100 km, displayed many different characteristics. My goal was not to determine the exact forces that caused these differences but rather to observe and record the differences between the populations. The average length of the frogs (excluding juveniles) at Soltis was 22.02 mm, whereas the average length at La Selva was 19.45 mm. Many factors may account for the difference. The forest surrounding the Soltis Center experiences much less human interaction than the forests at La Selva, and the Soltis frogs probably have larger living ranges less constricted by other populations. Food availability also may simply be greater at Soltis than at La Selva, or the diet might be more substantial. La Selva's elevation across the park ranges between 60 and 120 m, whereas Soltis is located at a much higher elevation and has a larger range in elevation throughout the reserve. The trails at Soltis were also carved into the mountainside, so the habitat was much steeper and offered fewer suitable habitats for the frogs. These findings might indicate that the blue jeans dart frog is more successful in lower-elevation forests. If the frogs at La Selva find that habitat more suitable, then we can evaluate the coloration differences with a new perspective. The blue jeans frog is a striking red, sometimes red-orange, color with a deep blue color on the limbs. The presence of light blue on the underside was much more common in the La Selva population than in the Soltis population. However, only frogs from Soltis had yellow coloring. Although random genetic mutations over time might account for these differences in color pattern, they are more likely due to differences in habitats between the two locations. Light-blue coloring could indicate a more successful population.
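The averaging step described above (mean length with juveniles excluded) can be sketched in a few lines. The raw measurements below and the 15 mm juvenile cutoff are hypothetical, since the article reports only the two population means; the sketch simply illustrates the calculation, not the study's actual data or code.

```python
# Hypothetical snout-vent lengths in mm. The article reports only the
# adult means (22.02 mm at Soltis, 19.45 mm at La Selva), so these raw
# values and the 15 mm juvenile cutoff are illustrative assumptions.
JUVENILE_CUTOFF_MM = 15.0

def adult_mean(lengths_mm):
    """Mean length after excluding juveniles, as the study did."""
    adults = [x for x in lengths_mm if x >= JUVENILE_CUTOFF_MM]
    return sum(adults) / len(adults)

soltis_mm = [21.5, 22.8, 21.9, 13.0]    # last value is a juvenile
la_selva_mm = [19.1, 20.2, 19.0, 12.4]  # last value is a juvenile

print(round(adult_mean(soltis_mm), 2))    # 22.07
print(round(adult_mean(la_selva_mm), 2))  # 19.43
```

Excluding juveniles this way matters because juveniles differ systematically in both size and coloration, so including them would blur exactly the adult population differences the study set out to quantify.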
The Children's Eternal Rainforest and La Selva are two different ecosystems. Over the generations, the frogs have probably adapted their body coloration to suit their particular environments. These differences
may not seem important, but they are striking. Populations of a single species of frog in such a small region have split and developed separate and easily discernible coloration patterns. We cannot tell whether these populations drifted apart recently or diverged long ago, but these blue jeans frogs have demonstrated how ecological forces can drive a species to diversity. The variations between two populations in connected, yet different, tropical ecosystems are remarkable, especially in these blue jeans dart frogs. Long-term research into the specific differences between the two ecosystems would yield more information on the cause of the variations observed. Further comparative studies will give us a better picture of the diversity of this planet. A greater understanding of biogeography, especially in such diverse areas as the tropical forests, will help us conserve that diversity.

Reference

1. Donnelly MA. "Amphibian Diversity and Natural History." In La Selva: Ecology and Natural History of a Neotropical Rain Forest, edited by LA McDade et al., pp. 199–209. Chicago: University of Chicago Press, 1994.
Advancing Cancer Research Through Canine Osteosarcoma Because similarities exist between canine and human osteosarcoma, new treatments may be tested in dogs. One such treatment, direct injection of a radioactive isotope into the tumor, may produce fewer side effects and better results than traditional chemotherapy. By Leslie Swirsky
Each year, approximately 900 new cases of osteosarcoma are diagnosed in the United States; about 400 of these occur in children. Osteosarcoma is the sixth-most-common type of cancer in children and is one of the few that begin in the bone. Like other cancers, osteosarcoma can metastasize, or spread, to the lungs or other bones. The current treatment for this disease in children consists of chemotherapy (using drugs to shrink and kill cancer cells), then surgery to remove cancerous cells and tumors, and finally radiation therapy to kill any remaining cancer cells and minimize the chances of the cancer returning. Though this approach treats osteosarcoma fairly effectively, the side effects of chemotherapy and radiation therapy can be severe. The treatment can decrease the number of white blood cells, making patients more susceptible to infections and sickness. The researchers at IsoTherapeutics Group (ITG), the Gabriel Institute, and the Texas A&M Institute for Preclinical Studies (TIPS) are developing a treatment for spontaneous osteosarcoma that uses small doses of a radioactive isotope injected through the bone, directly into the tumor. If successful, this treatment could be a quicker, more effective way of removing cancerous cells without chemotherapy's harmful side effects.

Osteosarcoma and Canines
Dogs develop osteosarcoma at anatomical sites analogous to those of humans, have identical histology (cell anatomy),
respond similarly to traditional treatments such as chemotherapy, and have diseases with the same tendency to spread. Many genes implicated in the progression of osteosarcoma in children are also present in the canine form of the disease.1 Osteosarcoma is also more prevalent in dogs than in children. ITG and TIPS have begun testing their treatment on dogs because the disease is similar genetically and physically to that in children. The dogs treated developed osteosarcoma naturally and were brought to TIPS from all over the United States.

PET–CT Scanning
Once a dog arrives, positron emission tomography–computed tomography (PET–CT) images are obtained to determine the tumor’s exact location. These images come from a medical imaging device that combines a PET scan and an X-ray CT scan. The images acquired from both devices can be taken sequentially in the same session and then combined. To more accurately locate tumors with imaging, researchers inject a modified form of glucose called fluorodeoxyglucose (FDG). In normal biological function, glucose is chemically changed into glucose-6-PO4, which sets up a variety of reactions that fuel different cellular processes. FDG, though not exactly the same as glucose, follows the same reaction steps. After FDG is chemically modified, it stalls in the metabolic pathway and isn’t consumed further. The FDG thus builds up in cells that consume it at high rates (such as tumor cells), giving a stronger signal at those locations.2 Imaging the distribution of
FDG can therefore estimate the overall consumption of glucose. Because most cancers are hypermetabolic (meaning they use a large quantity of glucose), locating tumors through their glucose metabolism signature is an effective approach.

Localization of Y-90
Once the tumor is located and if it is in a treatable area, a proprietary formulation of yttrium-90 is prepared. Y-90 is a radioactive isotope of the transition metal yttrium with a half-life of 64 hours and maximum beta energy of 2.27 MeV (megaelectron volts). The Y-90 formulation is injected directly into the tumor, where the Y-90 decays, giving off beta particles (energetic electrons). Y-90 has a maximum beta range of 11 mm, so 50–70 holes are drilled through the bone and into the tumor. To cover the entire surface area, researchers inject each hole twice. If enough decay occurs, the beta particles given off will yield enough energy to kill the tumor cells. Because of its short half-life, Y-90 theoretically lasts long enough to destroy many of the cancer cells but not long enough to severely damage surrounding healthy cells. However, as currently used, this procedure allows the radioactive Y-90 to migrate to all parts of the body. ITG is currently perfecting a formulation with Y-90 that will “stick” where it is injected in the tumor, killing only the harmful cancer cells. As an intern at ITG, I have been involved in making and testing the Y-90 formulation. Jim Simon, Keith Frank, and the other researchers at ITG have come up with
Explorations | Fall 2011 15
several ideas on how to make a formulation that will affect only the tumor and not travel through the body. A “recipe” is created on the basis of theories and past research, and my job is to make and test the formulation. Testing involves injecting the mixture into the muscle tissue of the left hind leg of two mice and taking images with a gamma camera (a device used to image radiation-emitting radioisotopes). The mice are then imaged every day for about 2 weeks. The activity in the mouse is measured, along with how much radioactivity the mouse excretes. These data help determine whether the formulation is staying in the muscle tissue or moving into the bloodstream and being filtered by the kidneys. If the radioactivity stays in roughly the same spot, the formulation can then be used on the next canine patient.

Results
The dogs were monitored with PET–CT after 4 months to observe the effects on tumors. So far, the treatment has suppressed the tumor. However, radioactivity is present in the excrement of the dogs, so some of the radioactive material is migrating from the target area into the bloodstream. We seek to improve the Y-90 formulation to prevent this migration of radioactivity, but as the resulting images from the dogs show, the current formula kills
the tumor cells. Aside from the side effects of surgery, the dogs don’t experience harsh treatments, as they would with chemotherapy. This ITG research could lead to similar treatments in children where the activity destroys all the tumor cells without migrating and harming any other part of the body. Not only would this be better for the children’s health and recovery, but it would also lower the likelihood of recurrence. Because the formulation is injected directly into the tumor over its surface area, it is more likely to destroy all cancer cells, reducing the possibility of missing a mutant cell that could become cancerous. As with all treatment methods, however, some cancer cells may be missed, or the cancer could have been too far along to stop it from spreading. Also, if the cancer is in an area that is hard to reach with a needle, this method may be hard to implement. However, if the osteosarcoma occurs in a treatable area and is detected soon enough, this treatment could be a highly effective way to treat osteosarcoma in children. With more research and time, this method could be expanded to other types of cancer, creating a more efficient and less stressful way to treat patients.
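The 64-hour half-life cited above determines how quickly the injected dose fades. As a rough illustration of that arithmetic (standard radioactive-decay math, not code from the study; the time points are arbitrary):

```python
Y90_HALF_LIFE_HOURS = 64.0  # half-life of Y-90, as cited in the text

def fraction_remaining(hours, half_life=Y90_HALF_LIFE_HOURS):
    """Fraction of the initial Y-90 activity left after `hours` of decay."""
    return 0.5 ** (hours / half_life)

print(fraction_remaining(64.0))              # one half-life: 0.5 remains
print(round(fraction_remaining(168.0), 3))   # after a week, ~16% remains
```

Under this decay law, most of the dose is delivered within the first few days, which is the basis for the claim that Y-90 lasts long enough to destroy tumor cells but not long enough to severely damage surrounding healthy tissue.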
References
1. Paoloni M, Davis S, Lana S, et al. Canine tumor cross-species genomics uncovers targets linked to osteosarcoma progression. BMC Genomics 2009;10:625.
2. Lenox M. Advanced imaging for surgeons. In: Small Animal Surgery. 4th ed. Philadelphia: Elsevier, 2011. Chapter 15.
Different Perspective
Images are a universally relatable medium, making photography an ideal way to express ideas. A photograph of a drop of water allows a photographer to capture her thoughts on perspective. By Eesha Farooqi
From afar, there is only a drop of water on the leaf of a plant. Approaching closer, to the level of the leaf, one can see thousands of glittering particles floating around within, as if they had a world of their own inside the droplet. The image of nearby windows is reflected on the glasslike surface, while a ray of light pierces through, casting a blue hue onto the leaf. From afar, there is only a drop of water on the leaf of a plant. From a different perspective, there is a whole new world to be discovered. Although this snapshot may seem to be just a drop of water on a leaf, the story it tells goes beyond first glance. The art of photography uses the magic of perspective to change what is and what may be, constantly analyzing the world through a different point of view. For me, photography embodies the sort of perspective we as people should use in life itself, whether dealing with social ideas,
communication, or our own attitudes in general. My passion for photography grows out of my passion for experiencing different cultures and ideas, rather than from the influence of any one person. It’s my experience with people that got me started in photography, and since then, it has taught me to look at the world and my surroundings in a different way, literally, and to see the beauty in even the most trivial of things. I have learned to open my eyes and expand my views, in photography and in society. If something doesn’t seem right one way, a new perspective may make it seem right another way. The picture I captured of the water droplet on a leaf symbolizes this idea of perspective. From a normal point of view, all one sees is a drop. As one gets closer and looks at it from the level of the leaf, one begins to see things that may never have been noticed before, such as the little specks floating around in the drop of water. Photography offers the
audience a chance to look at objects, people, and scenes differently than one normally would, and it is this magic in capturing pictures that draws me towards it. Pictures are those things that any age group, any culture, any individual can somehow relate to, and because of that, I chose to use this topic and form of expression. It allows all people to be creative and illustrate their outlook, without worrying about being right or wrong. I also chose this topic because of how it has shaped my view on life. I have been able to look at the world differently, understanding other peoples’ ideas and cultures and also understanding my own ideas and cultures. I have learned to find meaning in seemingly meaningless things and to find and create stories for any snapshot. Photography is my passion and has taught me to see the world in a whole new light and from a different perspective.
National Economic Development and Conflict Behavior: Lessons for Syria
The information revolution has changed the nature of the global economy. The relationship between economic development and conflict in the Middle East and Syria can be analyzed using data from various countries. By Anas Al Bastami
Economic development has always been an issue that attracts not only scholars and researchers but also people who are curious about what will happen to their countries. More importantly, economic distribution has affected many people’s lives, especially those living in countries under autocratic regimes, where negotiations with the government and potential for improvement are limited. Such countries often implement a status quo distribution of resources,1 and the elite hold most of the wealth. One country of interest to many political science scholars is Syria. Syria has an important geopolitical position, located in Western Asia between three vital countries that are also the subject of conflict behavior studies, namely, Lebanon, Iraq, and Palestine. Syria’s economy has changed since it achieved independence in 1946. It had a well-developed agricultural and industrial base in the beginning, but when the Syrian Baath party took control, Syria’s economic structure underwent substantial changes. The government adopted a socialist regime, and industries became nationalized. In the 1970s, Syria’s economic base shifted from an agrarian one to one dominated by the service, industrial, and commercial sectors. In the 1980s, a group of crises caused the economy to change again, and few reform efforts were made. Moreover, Syria spent more on massive defense and security than on reforms, hindering economic improvements.2 However, Syria’s economy does not depend entirely on oil but is diversified, unlike those of some other countries in the Gulf region.
Many political science theories are devoted to conflict studies and behavior, because conflict is a major factor shaping a country’s international relations and foreign policies. The war in Iraq and Israel’s war with Hezbollah, for instance, were both subjects of research on conflict behavior.

Existing Literature
Many studies have attempted to clarify and explain domestic and international relations in various countries. More specifically, studies on Middle Eastern conflict behavior, such as that by Wilkenfeld, Lussier, and Tahtinen, have been prominent since the 1970s.3 Usually, people try to explain the interaction between two countries by using studies from conflict behavior. However, people have recently become interested not only in explaining these interactions but also in connecting the factors that affect the politics of a given country to its conflict behavior.4,5 One such factor is a country’s economy. In fact, a country’s economy may affect its foreign policy, even to the extent of preventing or causing war. According to previous studies on ethnic conflicts, such as one by Collier,5 a country’s unequal economic distribution over various ethnic groups can lead to ethnic conflict and potentially to civil war. Indeed, ethnic groups are major players in many political conflicts, especially in countries such as Lebanon, and many scholars have investigated how ethnic groups affect political dissent. Moreover, according to Lichbach, economic inequality was the focal point of studies of the Iranian Revolution, the Rhodesian Revolution, and La Violencia in Colombia.1 Lichbach also
contends that the distributions of income and wealth are significant economic explanations of political dissent and are major factors in the Economic Inequality–Political Conflict puzzle.1 Researchers have analyzed some crucial countries in the Middle East, especially Iraq, Lebanon, and Syria. Cordesman and Arleigh6 clearly indicate the importance of economics to Iraq’s stability and its political accommodation after the war. Economics is, according to them, also necessary for creating a successful partnership between Iraq and the United States. Moreover, private-sector businesses were under severe constraints, and governments applied many authoritarian policies to pricing and allocation. These policies distorted all economic motivation and outcomes; the Iraqi government clearly needed privatization. Many scholars argue that these administrative and economic inadequacies are characteristic of the structural and historical features of the Iraqi situation.7 And Irani asserts that in Lebanon, for instance, wealthy businessmen, bankers, and engineers exert great influence on the economy—and therefore on Lebanon’s conflict behavior.8 An example is the Hariri family, who are mostly businessmen and have direct control over Lebanese politics. In Syria, economic reforms were (and still are) necessary for political conditions to improve. After Syria’s former president Hafez Al Asad died, his son, Bashar Al Asad—with the approval of various alliances and the European Union—set economic reform as his primary goal.9 His choice indicates the economy’s significance in influencing Syria’s conflict behavior. Therefore, my study objective is to observe economic and
conflict data for some prominent countries in the Middle East and to obtain a qualitative relationship between economic development and conflict.

Theory and Case Selection
This article follows the research strategy of Eisenhardt,10 who explained how to synthesize a theory from various case studies. I will consider Iraq’s relationship with Iran, Kuwait, and the United States, using these relationships to illustrate my hypothesis. In light of the case studies to be analyzed, I posit the following: Abnormal economic development and underdevelopment make war inevitable. Previous research has already shown that when the economy is sharply underdeveloped, the chance of war increases. Before attempting to explain the various aspects of my hypothesis, I will define some of the essential concepts that political science courses often use in describing international relations, to clarify how my hypothesis works.
Table 1: Country types and animal analogies

                   Status quo    Revisionist
Stronger powers    Lions         Wolves
Supporters         Lambs         Jackals
Revisionist countries, unlike countries that prefer the status quo, are those that are never satisfied with the current world order; they always demand change. Lions, as Table 1 indicates, are those countries that resist change and have stronger military powers. We can think of lambs as supporters of lions, in that both are satisfied with the current world order. Similarly, jackals support the wolves, and both demand change. My model predicts that economic development in a country that prefers the status quo will result in war with a revisionist country. Moreover, economic development in a revisionist country will lead to attack by a status quo country. I will now analyze Iraq in the context of two crucial wars, namely, the Iraq–Iran war of 1980–1988 and the Gulf War of 1990–1991. An understanding of Iraq’s economic booms in the late twentieth century is necessary. Iraq’s first economic boost occurred in the late 1970s. According to my hypothesis, Iraq could not avoid war during that period. Because Iraq was not satisfied with the changes going on in Iran, and for other reasons beyond the scope of this article, Iraq is a lamb, and Iran is a jackal. Iraq, the lamb, attacked Iran, the jackal, in September 1980, and the war lasted for 8 years. The United States, a powerful country that prefers the status quo, is a lion. Iraq indeed had the support of the United States during the war with Iran. This outcome is consistent with the predictions from my model. The boost in Iraq’s economy indeed led to Iraq’s attacking Iran a few years later. The other Iraqi economic boom occurred around 1989. According to my hypothesis, we should expect war to follow. In 1990, Iraqi forces entered Kuwaiti territory, and the Gulf War began. The United States immediately sent troops to hinder the Iraqi invasion. By entering Kuwait, Iraq showed its dissatisfaction with the situation in the region, and therefore we can consider Iraq a jackal. The United States is, as before, a powerful country that demands the status quo, and therefore it is a lion. The jackal attacked Kuwait, and the lion prevented
Figure 1: Iraq’s GDP over the past 50 years (source: World Bank, World Development Indicators)
the invasion. Again, this outcome agrees with my model. The second economic development in Iraq indeed resulted in Iraq’s attacking Kuwait. This observed pattern of war after economic booms is interesting and was not, to my knowledge, present in the literature. One might argue that I applied my hypothesis only to countries in the Middle East; however, one can easily apply this model to any country. Consider pre–World War I Germany. The
Figure 2: Syria’s GDP over the past 50 years (source: World Bank, World Development Indicators)
German economy prospered in the Wilhelmine era, which lasted from 1890 to 1914.11 In 1914, Germany entered World War I. I am not saying here that Germany went to war because of the development in its economy; instead, I merely relate the occurrence of war with the economic development. The occurrence of World War I is, with respect to Germany, consistent with the hypothesis that I posed. One might also argue that when U.S. forces entered Iraq in 2003, no economic development was taking place in Iraq. In fact, Iraq’s gross domestic product (GDP) was decreasing at that time. Does this contradict my proposed model? Looking carefully at Iraq’s GDP change 3 years before the U.S. invasion, one observes that a slight boom occurred. However, war did not occur immediately. Just after the boom, the United States imposed sanctions—that is, political conflict had started. This political conflict lasted for 3 years and culminated in the U.S.–Iraq war. Again, the development of
Iraq’s economy made war inevitable, a finding consistent with my hypothesis.

Data Sets and Variables
This section presents some of the graphs pertinent to what I explained earlier. These graphs clarify how my model works. I gauged a country’s economic development with its GDP. Figure 1 shows how Iraq’s GDP varied over time. Here, time is the independent variable and GDP is the dependent variable. Our main concerns are the late 1970s, 1989, and 2000: the peaks of the GDP graph. From Figure 1, one can immediately observe that the three wars occurred just after the peaks in Iraq’s GDP. This finding emphasizes the strong relationship between economy and war. The next section details my investigation of the various policy options for Syria in the context of my hypothesis. Figure 2 shows how Syria’s GDP has changed over the past 50 years. The previous illustrations lead us to conclude that the probability of war is proportional to the steepness of the GDP change:

P(war) = k · d(GDP)/dt,

where k is a constant of proportionality.

Discussion and Results
What could account for this nexus between economic development and conflict? Why would a boom in a country’s economy result in war? Several possible reasons exist. Under realist theory, for example, countries are selfish and tend to fight by nature. First, when a country develops economically, other countries might want some of that country’s resources. Many scholars, for instance, have attributed the U.S. invasion of Iraq to greed for Iraq’s oil and natural resources rather than to the claimed objectives of getting rid of weapons of mass destruction or saving the Iraqi people from the authoritarian government. Second, the more powerful countries (lions and wolves) sometimes perceive any development in other countries as a threat. Such was the case in the Cold War. Third, a country undergoing economic development might be tempted to invade other countries to expand its current territories and enhance development. Again, greed, one of realism’s tenets, drives such actions. It would be beneficial if my model could prevent conflict between countries. My analysis shows observable trends in economic development and war. One can even use such data to anticipate war from an economic boost, and thus one could look for alternative policy options to prevent war without adversely affecting much of the country’s economy. Such an exercise could be a lesson not only for countries such as Iraq and Germany, which have already experienced the effects of economic development, but also
for countries with still-developing economies, such as Syria. As mentioned, when Bashar Al Asad became president, economic reform was one of his main objectives.9 Some scholars, such as Gifford,12 argue that Al Asad failed to accomplish this. I will analyze such criticisms in light of my hypothesis and will explore some policy options for Syria. The advent of the information revolution has increased the opportunities and incentives for Syrians. Although President Bashar Al Asad perhaps could not accomplish everything, his policies caused Syria’s GDP to increase by more than US$30 billion in less than 10 years: a marked improvement (Figure 2). According to my model, this rapid increase in GDP and the possibility of an economic boom will lead to war. In fact, we already see some political conflicts arising between Syria and the United States, and the United States has imposed sanctions. A policy option that Syria might consider to prevent war is to reduce its rate of economic growth. Doing so could prevent the aforementioned quick boom, and hence no peak in the GDP like the one observed in Iraq would occur. Syria might aim for a gradual increase in its domestic economy rather than making large investments in risky projects, even if such investments would affect the economy positively. In practice, this positive effect would be only short-lived, and in the long run, making only small investments that guarantee a slight improvement to the economy would be better.

Conclusion
According to my hypothesis, abnormal economic growth or underdevelopment makes war inevitable. This analysis investigated Iraq’s vital relationships with three other countries: Iran, Kuwait, and the United States. In all three cases, whenever the country’s economic development peaked, war followed. Then, because this pattern occurred consistently, I generalized this result to all countries. Next, I explained how countries can use this idea to predict outcomes and prevent wars. Applied to Syria’s current situation, my model predicts that Syria might be heading into war. The main policy option to consider is reduction of Syria’s economic growth, which could stabilize the economy and avoid war.
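The growth-rate heuristic from the Data Sets and Variables section — that the probability of war scales with the steepness of the GDP curve — can be sketched numerically. This is a minimal illustration; the GDP series and the boom threshold below are made-up placeholders, not World Bank data:

```python
def growth_rates(gdp_by_year):
    """Year-over-year GDP change (the d(GDP)/dt of the text),
    expressed as a fraction of the previous year's GDP."""
    years = sorted(gdp_by_year)
    return {
        y2: (gdp_by_year[y2] - gdp_by_year[y1]) / gdp_by_year[y1]
        for y1, y2 in zip(years, years[1:])
    }

def boom_years(gdp_by_year, threshold=0.15):
    """Years whose growth exceeds the threshold -- the 'peaks' that,
    under the article's hypothesis, tend to precede conflict."""
    return [y for y, g in growth_rates(gdp_by_year).items() if g > threshold]

# Hypothetical series: a sharp boom in 1979 followed by decline.
gdp = {1977: 18e9, 1978: 20e9, 1979: 38e9, 1980: 30e9}
print(boom_years(gdp))  # [1979]
```

A policymaker applying the article’s hypothesis would watch for years flagged by such a threshold as periods of elevated conflict risk.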
References
1. Lichbach M. An evaluation of “Does Economic Inequality Breed Political Conflict?” studies. World Politics 1989;41(4):431–470.
2. Country profile: Syria. April 2005. Available from http://memory.loc.gov/frd/cs/profiles/Syria.pdf.
3. Wilkenfeld J, Lussier V, Tahtinen D. Conflict interactions in the Middle East, 1949–1967. Journal of Conflict Resolution 1972;16(2):135–154. doi:10.1177/002200277201600202.
4. Abeyratne S. Economic development and political conflict: Comparative study of Sri Lanka and Malaysia. South Asia Economic Journal 2008;9(2):393–417. doi:10.1177/139156140800900207.
5. Collier P. Implications of ethnic diversity. Economic Policy 2001;16(32):127–166. doi:10.1111/1468-0327.00072.
6. Cordesman AH, Arleigh A. Economic challenges in post-conflict Iraq. March 17, 2010. Available from http://csis.org/files/publication/100317_IraqEconomicFactors.pdf.
7. Mahdi K. Neoliberalism, conflict and an oil economy: the case of Iraq. Arab Studies Quarterly 2007;29(1):1–20.
8. Irani GE. Islamic mediation techniques for Middle East conflicts. Middle East Review of International Affairs 1999;3(2):1–17.
9. Schmidt S. The missed opportunity for economic reform in Syria. Mediterranean Politics 2006;11(1):91–97.
10. Eisenhardt KM. Building theories from case study research. Academy of Management Review 1989;14(4):532–550.
11. Germany—Economy—History. Available from http://countrystudies.us/germany/135.htm.
12. Gifford LA. Syria: The change that never came. Current History 2009;108(722):417–423.
Can Female Swordtail Fish Smell a Fight?
Previous research has established that female swordtail fish can identify same-species males by smell as well as sight. New data suggest that females may also be able to identify certain male behaviors by smell as well – and these behaviors may affect the female’s response to courtship. By John Wilson
Across species, many males emit chemicals to help attract the opposite sex and to signal members of the same sex. Male mice secrete major urinary proteins, which help female mice identify individual males, and some major urinary proteins provoke male–male aggression.1 Female swordtail fish identify males of the same species not only by their looks but also by their smell.2 Females use these chemical cues,
Figure 1: A male Xiphophorus malinche fish
also called pheromones, to help assess the nutritional state, age, and other qualities of a male, making pheromones important during female mate choice as well. A previous study showed that male swordtails release the courtship pheromone selectively, depending on the presence of females.3 Before this study, male cue water (water in which a group of males emit their pheromones) was believed to be always attractive to females in the same population, with one exception.4 In this study, we tested whether female swordtails
could smell male–male aggression without actually seeing the aggression. Another aim of the study was to examine how the degrees of male–male aggression and courtship correlate with the attractiveness of the cue.

Experimental Methods
In the cue-making process, we placed two tanks side by side, each containing 8 L of salted water. One tank contained four male swordtail fish (Xiphophorus malinche; Figure 1), and the other tank contained four female fish. We left the fish alone in a room and recorded their movements with a video camera for 3 hours. In the behavior trial, we placed each of nine different female swordtails individually in a tank. On one side of the tank, we added plain control water dropwise at a rate of 5 mL/min, and on the other side, we infused conditioned male water in the same manner. We then measured, in seconds, how long each female fish spent with the cue versus in plain water. We averaged the association times across all the females used for that trial. We calculated the preference index by subtracting the time spent with the control water from the time spent with the cue water. We counted the number of aggression and courtship behaviors during the cue-making session. We calculated the cue index by subtracting the number of aggression behaviors from the number of courtship behaviors. We correlated the cue index with the preference index by using a general linear regression model implemented in the R programming language.
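The index calculations and regression described above can be sketched as follows. The authors used R; this is an equivalent minimal sketch in Python, and the trial numbers below are made-up illustrations, not data from the study:

```python
def preference_index(cue_time_s, control_time_s):
    """Time spent with cue water minus time with control water, in seconds."""
    return cue_time_s - control_time_s

def cue_index(courtship_count, aggression_count):
    """Courtship behaviors minus aggression behaviors during cue-making."""
    return courtship_count - aggression_count

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept, the single-predictor
    case of the general linear regression used in the study."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical trials: (courtship, aggression, cue time s, control time s)
trials = [(12, 2, 300, 150), (5, 9, 120, 200), (8, 8, 180, 170)]
xs = [cue_index(c, a) for c, a, _, _ in trials]
ys = [preference_index(cue, ctrl) for _, _, cue, ctrl in trials]
slope, intercept = fit_line(xs, ys)
print(slope > 0)  # a positive slope mirrors the positive correlation reported
```

The same slope is what the reported t-test evaluates: the question is whether it differs significantly from zero.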
Figure 2: (A) The positive correlation between the cue index (s) and the preference index (s); linear fit y = 1.1028x + 104.92, R² = 0.8471. (B) The relationship between the courtship index and the preference index; this relationship is not strong, showing that courtship alone does not determine the preference index. (C) The negative correlation between the aggression index and the preference index; this relationship is also weak, showing that the preference index does not depend solely on aggression.

We also tried to correlate the courtship index and the aggression index individually with the preference index. We used a two-tailed Student’s t-test to determine whether the coefficient in the linear regression model was significantly different from zero.

Results
The cue index was positively correlated with the preference index (p = 0.00204; Figure 2A). The p value is less than the cutoff of 0.05, indicating that the correlation is statistically significant. Our results also show that courtship or aggression alone does not have a significant correlation with female preference (Figure 2B and 2C). However, when combined, these two traits significantly correlate with female preference. Therefore, the attractiveness or repulsiveness of the pheromone depends on both aggression and courtship behaviors.

Discussion
My research focused on whether male–male aggression affected the quality of the male pheromone. I found that as fighting between males increased, the pheromone became less attractive to the females. These results are significant because they show that male–male aggression makes the chemical cue less attractive to females. They also show that females can distinguish the two types of behavior (courtship and aggression) without actually seeing the behaviors taking place. Our results indicate that cues signaling aggression are independent from cues signaling courtship, and they are released in different social contexts. Only when combining the aggression index with the courtship index does one find a significant correlation with the preference index. A weak negative correlation exists between the aggression index and the preference index. We also found a weak positive correlation between the courtship index and the preference index. This means that courtship and aggression involve two separate chemical cues, rather than one homogeneous cue. Traditional behavioral trials use different chemical cues, assuming that both cues have attractive qualities.4 Therefore, such trials interpret the association time as an indicator that the female prefers one side more than the other. However, if the cue on one side is repulsive and the cue on the other side is not active, then the same observation can be made. This finding could potentially be misleading because we are interested mostly in chemical cues with high species specificity, and it is more likely that closely related species share these aggression cues. In behavioral biology, we are concerned mainly with species-specific chemical cues because they are more likely to be involved in reproductive isolation. Future work will focus on the water to determine what chemical components the fish release to make the pheromone attractive or repulsive. Once the chemical components are identified, workers will then be able to compare the chemical components from the swordtail fish with those of other organisms that use chemical cues, to see whether the pheromones are made up of the same components.

Acknowledgments
I thank my advisors, Gil Rosenthal and Rongfeng Cui, for their support and mentorship throughout the experiment. National Science Foundation grant IOS0923825 supported this study.

References
1. Chamero P, Marton TF, Logan DW, Flanagan K, Cruz JR, Saghatelian A, Cravatt BF, Stowers L. Identification of protein pheromones that promote aggressive behaviour. Nature 2007;450:899–902.
2. McLennan DA, Ryan MJ. Interspecific recognition and discrimination based upon olfactory cues in northern swordtails. Evolution 1999;53:880–888.
3. Rosenthal GG, Fitzsimmons JN, Woods KU, Gerlach G, Fisher HS. Tactical release of a sexually-selected pheromone in a swordtail fish. PLoS One 2011;6:e16994.
4. Rosenthal GG, Ryan MJ. Conflicting preferences within females: sexual selection versus species recognition. Biology Letters 2011;7:525–527.
An exhibition of Mayan history inspired an artist to create a modern expression of Mayan culture as well as an illustration of personal beliefs. Details from Mayan culture, ranging from color choice to the prominent role of a jaguar in the painting, merge in the piece. By Blanca Tovar
At Fort Worth’s Kimbell Art Museum, I came across a Mayan exhibition called “Fiery Pool: The Maya and the Mythic Sea.” Born and raised in Mexico, I have a strong interest in the pre-Hispanic cultures, including the Mayans. I was delighted to surround myself with not only art but also history. This exhibition inspired me to create this piece, which I have named Chilam Balam. Painting has been a form of expression through which I can communicate my ideas or emotions to the world. I chose painting as my medium for Chilam
Balam: through the conversation that my brushstroke creates, the blending of colors, and my energy directing it, I can place these expressions onto canvas for the world to see. Before viewing the exhibition, I researched Mayan mythology. Their personification of the forces of nature, their deities, and their beliefs about life and death really drew my interest. Seeing the exhibition inspired me to use all these ideals and beliefs—to allow them to influence the vision and style of my piece. The Mayans viewed the jaguar, for instance, as a powerful deity—a representation of power, strength, beauty, and nature. The
jaguar was the most feared and worshiped of all animals among the pre-Hispanic cultures: a bold hunter, a natural predator, an excellent climber and swimmer. This elusive animal travels the jungle freely and undetected, from the tops of the trees to the lakes and swamps. Because of this feline's powerful characteristics, the Mayans associated the jaguar with shamans, divine individuals with supernatural healing powers. The transformation of a shaman into a deity such as the jaguar is common in Mayan mythology; through this supernatural transformation, the shaman acquires the jaguar's abilities. This metamorphosis inspired the name of my piece, Chilam Balam, which means "jaguar priest" in the Yucatec Maya language. I incorporated into my painting the deity Itzamna, depicted at the bottom-right corner. Itzamna, the god of the sky and of day and night, was often represented as an elderly man and was believed to be the first priest as well as the god and creator of the Mayan civilization. The Mayans believed in a sacred fire that revolved around the equilibrium between Mother Earth, humans, and animals; this belief influenced my choice of a fiery red as the background color. The color red itself represented the sun, blood, strength, and fire, all important to Mayan culture. And finally, water was seen as the source of life, as well as an entrance to the underworld and a medium from which the world emerged, gods arose, and ancestors communicated. My piece, Chilam Balam, absorbs the Mayan spiritual realm, merging it with my personal beliefs. This piece speaks of the quest to find one's inner strength, of being a shaman and transforming oneself into that fierce, nocturnal animal who can see in darkness and walk the jungles with a strong presence. Chilam Balam is about coming in contact with that inner jaguar and allowing it to come through.
Snow Leopards in Western Mongolia: A Study of Population Density Conservation genetics, used to help preserve endangered and at-risk species such as the snow leopard, can estimate the number of animals in a particular area. Knowing how many snow leopards live in specific regions of Western Mongolia helps conservationists decide whether more stringent protection methods are needed. By Yvette Halley
Snow leopards, Panthera uncia, are widely distributed throughout Asia, with Western Mongolia being an important part of their range. Since 1972, the World Conservation Union (IUCN) has categorized snow leopards as an endangered species.1 However, quantitative data on their distribution and abundance in Western Mongolia are lacking, making it hard to determine whether the snow leopard population is currently stable, increasing, or declining. Knowing the distribution of snow leopards within Western Mongolia is important for several reasons, one of which is conservation: if a way to determine relative population numbers existed, it would be more apparent whether certain conservation methods needed to be employed to preserve this species. We used noninvasive genetic sampling (NGS) to determine the density of the snow leopard population within specific regions of Western Mongolia. NGS is a useful tool because genetic samples are collected without disturbing or affecting the target individual or species. NGS is also a good strategy for this research because samples can be collected far more quickly than by trapping an animal and then extracting tissue or blood. Fast, easy sample collection is important because of Western
Mongolia's environment and terrain. The ideal terrain for snow leopards is arid and barren: few roads exist; the weather can be severely cold; and the terrain is steep and rugged in some areas, complicating vehicle navigation.1 Obtaining permits to take samples can be difficult, especially if the permit would allow the collector to come into direct contact with an animal (e.g., darting, tagging, blood samples, tissue samples). NGS is a more convenient way to obtain samples because the technique requires little to no contact with the species being studied, and permits are more likely to be granted if authorities in the region feel that the study does not pose a risk to the animal (even though direct contact is not harmful). The sample of choice in this study is scat (fecal material). Scat can be collected and stored conveniently and quickly and may later be shipped to a lab for analysis. This approach is ideal because it causes little or no disturbance of the felines and the work can be done with a relatively simple and prompt method, which is desirable for a field researcher.

Methods
This project analyzed a set of samples, all collected in Mongolia by B. Munkhtsog (Irbis Mongolia and the Mongolian Academy of Sciences). The samples (n =
138) were gathered from three different areas: (1) Altan Khokhii mountain range, (2) Turgen mountain range, and (3) Tsagaan Shuvuut mountain range. The collection method was NGS, carried out by collecting scat, which was stored in tubes containing silica gel; the silica gel absorbs any potential moisture buildup, because moisture in the tube would degrade the scat and render the sample useless. To analyze the samples, we extracted DNA from the scat, which may contain cells shed from the outer epithelial tissue lining the colon. DNA extraction usually takes 3 days per sample, and samples were analyzed in sets of 24. We used a Qiagen QIAamp DNA Stool Mini Kit to extract DNA from the fecal matter. DNA from stool is of lower quality than DNA from other sources, such as blood, or DNA collected through more conventional means. Contamination can be a source of error, so several precautions are vital: all equipment must be sterilized with 20% bleach; all extractions must take place away from areas where post-PCR products may be present; aerosol-barrier tips must be used; and positive controls (samples proven to be snow leopard) and negative controls (samples that contain no DNA) must be run with every extraction and monitored for possible signs of contamination. The positives used are scat samples that were confirmed to have come from a snow leopard, and the negative
samples are ones that are not from a snow leopard. Also, one must vortex and spin down all samples before opening any sample vial; doing so ensures that all samples are mixed well and that any condensation or sample present on the lid is removed (this measure decreases splash risk and possible contamination). The Qiagen QIAamp DNA Stool Mini Kit uses a silica membrane–based purification system that isolates up to 30 μg of genomic, bacterial, viral, and parasite DNA from fresh or frozen human stool or other sample types with high concentrations of PCR inhibitors. The combined action of InhibitEX, a specialized adsorption resin, and an optimized buffer leads to removal of PCR inhibitors.2 After DNA extraction, we determined the concentration of DNA per sample by using a NanoDrop 1000 spectrophotometer (Thermo Scientific, Pittsburgh, PA). We later ran each sample on a gel to determine DNA quality and quantity.

Figure 1: PCR of scat-extracted DNA. Green highlights indicate possible snow leopard scat samples (Tsagaan Shuvuut). Annotations on the gel mark possible contamination in wells B1–B3 and consistent data in wells E4–E5.

The last step was to identify species and sex. First we ran a gel containing carnivore primers to determine which samples are from carnivores. Mongolia is home to many different carnivores, including wolf, fox, and snow leopard. We then identified species by using the cytochrome b gene. Doing both steps is important because each sample can then be cross-examined: if a snow leopard sample does not show up as a carnivore sample, that would indicate contamination or possible sample degradation. Species identification uses the cytochrome b gene to establish which samples are in fact snow leopard. The cytochrome b gene is the most widely used gene for phylogenetic work and is conserved enough for population studies.3 It codes for cytochrome b, the only cytochrome that mitochondrial DNA encodes. We used a marker on the Y chromosome, AMELY, to determine the numbers of males and females in each sampling area, and microsatellites—repetitive sequences of DNA that function as genetic markers—to help identify distinct individuals. Our experiment used PCR (polymerase chain reaction), a technique that makes many copies of DNA. We ran a gel to detect the cytochrome b gene, which determined whether a sample was from a snow leopard. We ran the samples three times
CONTINUE ON PAGE 37 >
Honors and Undergraduate Research
Honors and Undergraduate Research provides high-impact educational experiences and challenges motivated students in all academic disciplines to graduate from an enriched, demanding curriculum. The programs administered by the office bring together outstanding students and faculty to build a community of knowledge-producers, life-long learners, nationally-recognized scholars, and world citizens. Through Honors and Undergraduate Research, motivated students have access to honors courses, co-curricular enrichment activities, and research programs that can be customized to enhance each student’s personal, professional, and intellectual development. Honors and Undergraduate Research 114 Henderson Hall
HONORS AND UNDERGRADUATE RESEARCH hur.tamu.edu
Honors and Undergraduate Research challenges all motivated and high-achieving Texas A&M students to explore their world, expand their horizons, and excel academically. While some services of the office are exclusive to Honors Students, advisors are available to talk with any student who is interested in sampling the academic challenge of an Honors course, committing to an undergraduate research project, applying to the Honors Fellows program, or engaging the process of self-discovery entailed in preparation for national fellowships such as the Rhodes, Goldwater, or Truman Scholarships. Honors and Undergraduate Research oversees the following programs and services:
Honors and Undergraduate Research 4233 TAMU College Station, TX 77843-4233
• Honors Student advising
• University Scholars Program
• University Studies – Honors degree
• Honors Housing Community
• National Fellowships advising
• Undergraduate Research Scholars Program
• Research Experience for Undergraduates Assistance
• Grant and Proposal Assistance
• Explorations student research journal
Honors and Undergraduate Research joins the university community in making Texas A&M a welcoming environment for all individuals. We are committed to helping our students understand the cultures that set us apart and appreciate the values that bring us together.
Tel. 979.845.1957 Fax 979.845.0300 http://hur.tamu.edu
CORONA: the Journey to the World's First Spy Satellite
America's first spy satellite program, project Corona, became a highly successful intelligence-gathering operation and established a new era of satellite technology. The technology that the Corona program pioneered affects everything in the modern era, from how people watch television to national security. Recently declassified documents from the Eisenhower administration reveal the incredible story of how the CIA and the Eisenhower administration revolutionized intelligence gathering after the embarrassing debacle of the 1960 U-2 incident, in which a U.S. spy plane was shot down over Soviet territory. Volumes have already been written on the technological innovations of Corona, but analyzing these new documents illustrates how the U.S. government developed and maintained the secrecy around its most technologically advanced and sought-after projects. Corona inherited a unique model of interagency and interservice cooperation from the U-2 aircraft's development and aerial reconnaissance programs. After the U-2 incident, project Corona was accelerated and extended to meet the high demand for image intelligence from the Soviet Union. Although it was originally envisioned as a temporary program until other systems could be developed, Corona was so successful and reliable that it continued to operate for 12 years after it began. Corona began America's satellite reconnaissance network and established a model for development and operation that future programs would rely on.
Development of Project Corona
Project Corona was America’s first successful spy satellite program. It helped U.S. intelligence agencies understand the USSR, allowed for better decision making in times of crisis, and established a new era of satellite technology that affects everything in the modern era. By David Glasheen
When work on project Corona began in 1958, the idea of space-based photo reconnaissance was not new. As early as 1946, the RAND Corporation had done conceptual work and assessed the feasibility of satellite reconnaissance for the U.S. Air Force.1 The Air Force and Navy were both trying to develop their own satellite programs, each motivated largely by interservice rivalry.1 Overall, the military programs were running late, exceeding their budgets, and not meeting performance requirements. On April 15, 1958, the original outline of the Corona project was submitted for approval. It called for a total of 12 flights during 1959 at a cost of slightly more than $30 million,2 $5 million less than was requested to produce 30 U-2 aircraft in 1954.3 The Advanced Research Projects Agency (ARPA), the Department of Defense research division, was to provide $24 million for the second-stage vehicles, and the CIA would provide $7 million for the payloads. Corona relied on a division of labor and resources: the Department of Defense carried the large cost of procuring and
launching the satellites, whereas the CIA developed the classified payload and managed the secrecy around the project. The CIA could not spend such a large sum of money without drawing attention3; however, hiding a few million dollars' worth of rockets in the already established defense procurement and production infrastructure was easy. Eisenhower had overseen the development of technologically advanced projects run by both the military services and the CIA. After comparing the Air Force's progress on its satellite program with the success the CIA was enjoying with the U-2 program, Eisenhower gave the Corona project to the civilian CIA. With the U-2 program, Eisenhower liked how the CIA had quickly moved from development to operation and how effectively all aspects of operating the secret reconnaissance program were managed. By design, Corona was to use the same model of cooperation and development to replicate the operational success of the U-2 program. Eisenhower would not be disappointed with this choice; within a year and a half of approval, Corona was producing intelligence.1
Corona Cutback

By August 1958, it was clear that the initial cost estimate was going to be incomplete. First, no budget existed for the first-stage Thor boosters that would put the satellite into space; the project planners had assumed the Air Force would provide them without cost.2 With the history of cooperation between the CIA and the Department of Defense—and especially the close relationship between the CIA and Air Force when working on the U-2 project—this assumption was reasonable. However, the initial estimate also did not include money either for the four engineering flights that would be crucial to diagnosing technical problems with the Corona systems or for the three biomedical flights that provided cover for the Corona project under the guise of the Discovery program.2 (Some secondary sources refer to the program as "Discoverer," whereas the CIA's internal documents refer to it as "Discovery." Every indication is that both terms refer to the same program. For clarity, I will use "Discovery.") The new estimate came out to $49 million for the Corona system itself, not including the engineering or biomedical launches. Despite the personal attention Corona received from the Eisenhower administration during the planning stages, Corona was envisioned as a temporary measure to cover the gaps while more sophisticated programs developed by the Air Force were in the works. During the budget battles between the military and the CIA, Corona received no direct support from the Eisenhower administration and faced a severe reduction in Department of Defense support. The national security complex's R&D efforts continued to focus on technology to develop and improve intercontinental ballistic missiles (ICBMs), and the same technology required to launch an ICBM was necessary to put satellites into orbit. By late autumn of 1958, Corona's production was scheduled to proceed as planned, with the understanding that the more sophisticated Sentry program developed by the Air Force would be able to meet intelligence gathering needs by 1960.2 We can fairly say that no one in the CIA, Department of Defense, or Eisenhower administration expected Corona to flourish and become a foundational intelligence gathering tool. Without the radical change in priorities caused by the U-2 incident, Corona would have faded away like the temporary measure it was designed to be. Before the U-2 incident, cost was the main concern with regard to satellite reconnaissance programs. Gathering intelligence was still of the utmost priority at this time, but neither the Department of Defense nor the CIA had unlimited funds to pursue new satellite technology for this purpose. With expensive defense projects, support came in the form of money, and Corona was not getting much.

Four Days: The shaded areas show the area a single satellite could cover in four days.

Temporary Extension of Corona

Despite earlier setbacks, by March 1959, Corona's prospects were beginning to look up. A memo from Richard Bissell, the CIA's deputy director for clandestine operations, to General Goodpaster proposed extending the Corona program for a variety of reasons. First, the intelligence officials anticipated a greater need for image intelligence from the Soviet Union in both 1959 and 1960. Because of cloud cover and other weather features, the number of reconnaissance flights would need to be increased to get even moderate coverage of high-priority targets, such as ICBM sites that were under construction.2 Unfortunately, the more sophisticated Sentry program would not be operational or able to meet these needs by 1960, the expected end of Corona's operations.2 To meet the growing need for aerial intelligence, Bissell proposed restoring Project Corona's previously reduced four flights in 1959 while adding eight additional flights for 1960, for a total of 20 Corona missions: twelve conducted in 1959 and eight scheduled for 1960. This reduction in the number of flights from 1959 to 1960 was based on anticipated improvements in the reliability of Corona's systems, whose failures (such as rocket or camera failure) had negatively affected some missions, but not on an expectation of decreased demand for intelligence.2

Corona Launch

To enable the increase of Corona flights, its cover program, Discovery, would need to be augmented to facilitate the development of Corona's technical efficiency and secretive cover story. The Discovery program would be expanded to include four total engineering flights to diagnose system problems and one biomedical flight to maintain the cover of scientific development.2 Maintaining security around Corona and preventing discovery of the true nature of the missions was top priority for the Eisenhower administration and the CIA. Bissell conceded that no one could stop the technical press or Communist governments from speculating that the true purpose of the Discovery program was intelligence gathering; however, the CIA worked extensively with its partners in the Corona project to develop cover mission descriptions for each Discovery flight. Specific mission objectives were tailored to each flight, and certain amounts of flight information were made public or were given a low classification to give the impression that the missions were not concealing anything. Without specific measures to counter speculation, each missile launch would trigger more speculation, slowly eroding the legitimacy of Discovery's cover story. The CIA demonstrated a remarkable ability to conceal the nature of the Discovery program despite a great deal of public speculation. A great advantage of Corona over other projects was that it could be financed within the existing Air Force and Department of Defense budgets.4 The increase in flights would cost a great deal of money. All of the additional $67.1 million needed to fund Corona would
be transferred to ARPA, which could do the procurement through the Department of Defense’s Ballistic Missile Division. From there, that division would also be responsible for the operational phase of the Corona mission, which included launching, tracking, and recovering the satellites. Following this pattern, the CIA would be able to conceal its involvement with the project by not spending unusually large amounts of money. Also, Corona could use the preexisting missile launching and tracking facilities that the Department of Defense had developed for their ICBM and spaceflight programs. This measure would substantially reduce the cost of developing new facilities or new methods to track the Corona satellites. Working through the Department of Defense’s Ballistic Missile Division also helped
to maintain the scientific cover story for the Discovery program. The cover story for Discovery could claim that the launch facilities were too limited and forced the Discovery flights to be launched from Vandenberg Air Force Base.1 Vandenberg's geography in turn helped to corroborate the necessity of launching the satellites into a low Earth polar orbit, which would send the satellites over the Soviet Union, rather than other orbits that avoided Soviet territory altogether.1

New Support for Corona
The national security requirements for intelligence on the Soviet Union were just as great after the U-2 incident, but now the United States had one fewer intelligence gathering tool. Although the development
of a successor for the U-2 aircraft continued as planned, the SR-71 Blackbird never flew over the Soviet Union for fear of discovery. By June 1960, the SR-71 was a low priority, and Eisenhower expressed interest in canceling the project altogether.5 If the next generation of high-performance aircraft would not be an acceptable replacement for the U-2, clearly no aircraft could fill that role. Instead, faced with limited means by which to gather intelligence and a troubled conscience after the U-2 incident, Eisenhower supported and extended the space-based Corona program to replace the U-2 aircraft and avoid many of the risks associated with aerial intelligence gathering. Eisenhower’s personal reaction to the events on May 1, 1960, provided the impetus for the move to space-based image intelligence systems. Motivated by Eisenhower’s demands for an alter-
native to aerial overflights, the intelligence community worked to find a feasible and legal alternative to gather the much-needed image intelligence that the U-2 missions previously yielded. That alternative was project Corona. Like any new technology, Corona faced many technological challenges and setbacks, but unlike previous projects, Corona now had the funds and the support to solve the problems quickly and begin producing intelligence. Eisenhower’s continued support for Corona through the technical setbacks reveals his desperation to begin gathering imagery intelligence again without aerial overflights. Corona overcame many difficult “firsts” in the history of space flight, such as the first midair recovery of an object from space and the first photographs taken in space.6 These hurdles were not overcome easily; the first 13
launches failed outright. Not until August 1960 did useful photographs emerge and successful intelligence gathering begin in earnest. When Corona began to work, however, it was a windfall for the intelligence community. Corona quickly mapped the entire USSR, including all the significant military targets. The political ramifications of Corona's successful performance were enormous. Looking back in 1967, President Johnson remarked, "We've spent thirty-five or forty billion dollars on the space program. And if nothing else had come of it except the knowledge we've gained from space photography, it would be worth ten times what the whole program cost. Because tonight we know how many missiles the enemy has and, it turned out, our guesses were way off. We were doing things we didn't need to do. We were building
things we didn’t need to build. We were harboring fears we didn’t need to harbor.”1(p.7–8) With the information that Corona produced, intelligence estimates were no longer educated guesses; they were the product of counting the number of ICBMs or fighters on the ground. In total, Corona saved the United States billions of dollars by showing that we did not need more expensive weapons to maintain our nation’s own security. The intelligence from Corona missions also helped U.S. intelligence agencies understand the previously unknowable USSR. Satellite systems based on Corona’s breakthroughs went on to provide early warning about Soviet ICBM launches, enabling better decision making in a crisis. If the intelligence Corona provided helped avoid one decision to go to war out of fear that the Soviet Union would strike first, Corona was worth its cost many times over.
References
1. Charlston JA. What we officially know: fifteen years of satellite declassification. Quest 2010;17(3):7–19.
2. Bissell RM. Memorandum for General Goodpaster re project CORONA. Central Intelligence Agency, Office of the Deputy Director for Plans, March 11, 1959. From the Eisenhower Presidential Library and Museum Digital Documents and Photographs Project. Available from http://www.eisenhower.archives.gov/Research/Digital_Documents/Aerial_Intelligence/1959_03_11.pdf.
3. Goodpaster AJ. Memorandum authorizing special project (design and production of the U-2 airplane), November 24, 1954 [DDE's Papers as President, Ann Whitman Diary Series, Box 3, ACW Diary November 1954 (1)]. Available from http://www.eisenhower.archives.gov/Research/Digital_Documents/Aerial_Intelligence/1954_11_24.pdf.
4. Bissell RM. Memorandum for General Goodpaster re project CORONA. Central Intelligence Agency, Office of the Deputy Director for Plans, March 11, 1959. From the Eisenhower Presidential Library and Museum Digital Documents and Photographs Project. Available from http://www.eisenhower.archives.gov/Research/Digital_Documents/Aerial_Intelligence/1959_03_11.pdf.
5. Goodpaster AJ. Memorandum for the Record re successor aircraft for U-2, Office of the Secretary of Staff, June 2, 1960. From the Eisenhower Presidential Library and Museum Digital Documents and Photographs Project. Available from http://eisenhower.archives.gov/Research/Digital_Documents/U2Incident/6-2-60_MFR.pdf.
6. National Reconnaissance Office. "Corona fact sheet." Accessed December 12, 2010. Available from http://www.nro.gov/corona/facts.html.
The Pentagon: This photograph was taken in 1967 to calibrate the scale of the photographs, using the Pentagon's known dimensions.
"Snow Leopards in Western Mongolia: A Study of Population Density" con't from page 29
across a 96-well plate. When examining the gel, you will look for a highlighted band. Any highlighted bands indicate that the sample was from a snow leopard. When this occurs, you must note the number and location of the bands, which could indicate sample contamination. If contamination occurs, the samples will have to be either reextracted or run on a new gel.

Table 1: Data of Western Mongolia scat samples. Columns: Scrapes (Snow Leopard); Number of Individuals; Number of Males; Number of Females; Scat from Snow Leopard; Percent Snow Leopard (Scat); Percent Scrapes (Snow Leopard).

Results
We ran the extracted DNA on an agarose gel to determine its quality; bands had to be present, and a strong band indicates a higher-quality sample. We then used a PCR amplification assay to determine whether the scat came from a carnivore. Figure 1 shows the result of a gel run with carnivore primers: samples AX001–AX012, TU001–TU004, TU006–TU010, and SO2 possibly belong to carnivores. A third PCR on the extracted DNA determined which samples come from snow leopards. Because we ran the PCR three times, we needed three highlighted bands per sample; only one or two bands could indicate contamination, and such a sample must be run again to confirm whether it is snow leopard. For example, not all three replicates of sample TS211 (located in plate wells B1–B3) were highlighted. This could mean that part of the sample degraded while in its silica gel tube, possibly because of a bad seal or exposure to moisture. Samples with inconsistent results were run again to confirm them. Samples like TS010B (wells E4–E6) are highlighted in triplicate and have consistent results; this is the typical PCR gel result indicating a definite snow leopard sample. Finally, we analyzed sex and individual identifications. All 138 samples came from only 16 individuals: 6 present in Tsagaan Shuvuut and 10 in the Turgen mountain range. By sex, five males and one female were in the Tsagaan Shuvuut mountain range, and six males and four females were in the Turgen mountain range.

Discussion
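The individual and sex tallies reported in the results can be illustrated with a short script. The sketch below is only conceptual, not the study's actual pipeline: the site codes, loci, and allele sizes are hypothetical, invented for illustration. It shows how samples sharing a microsatellite genotype collapse into one individual, with the AMELY result supplying each animal's sex:

```python
# Illustrative sketch: samples with identical microsatellite genotypes
# are treated as one animal; the AMELY marker gives each animal's sex.
# All sample sites, loci, and allele sizes below are hypothetical.
from collections import defaultdict

# Each entry: (site code, genotype, amely_band). A genotype is a tuple
# of allele pairs, one pair per microsatellite locus; amely_band=True
# means a Y-chromosome band amplified, i.e., the sample is from a male.
samples = [
    ("TS", ((120, 124), (88, 90)), True),
    ("TS", ((120, 124), (88, 90)), True),   # repeat genotype: same animal
    ("TU", ((118, 126), (90, 92)), False),
    ("TU", ((122, 122), (86, 90)), True),
]

individuals = {}              # genotype -> (site, is_male)
per_site = defaultdict(set)   # site -> set of genotypes seen there
for site, genotype, is_male in samples:
    individuals[genotype] = (site, is_male)
    per_site[site].add(genotype)

males = sum(1 for _, is_male in individuals.values() if is_male)
females = len(individuals) - males

print(f"{len(individuals)} individuals: {males} males, {females} females")
print({site: len(genos) for site, genos in per_site.items()})
```

In practice, genotyping error makes exact-match grouping too naive; noninvasive genetic studies typically allow for allelic dropout when matching genotypes, so this sketch captures only the basic counting logic.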
Since 1972, the IUCN has categorized the snow leopard, Panthera uncia, as an endangered species; however, quantitative data on its population distribution and abundance in Western Mongolia are lacking. This study aimed to determine the population densities in three sampling areas of Western Mongolia: (1) the Altan Khokhii mountain range, (2) the Turgen mountain range, and (3) the Tsagaan Shuvuut mountain range. We analyzed 138 samples collected from all three sites. Of these, we confirmed only 56 to be from a snow leopard. From this we determined that 16 individuals were spread over two sampling areas, the Turgen mountain range and the Tsagaan Shuvuut mountain range. According to the samples collected in this study, no snow leopards were present in the Altan Khokhii mountain range. Table 1 shows the data collected for this study. At sites where scat was collected, many samples had "scrapes" around them, suggesting that a scrape around a sample could indicate snow leopard scat. Such knowledge makes collecting samples easier and more efficient. We determined a male-to-female ratio of 11 to 5. This information can be used to identify which areas have the fewest snow leopards and need to be given higher conservation priority. Setting conservation priority involves looking at each site, comparing the numbers of males and females, and analyzing the population distribution. This information will be used to formulate conservation techniques.

References
Ishra C, Allen P, McCarthy T, Madhusudan MD, Bayarjargal A, Prins HH. The role of incentive programs in conserving the snow leopard. Conservation Biology 2003;17(6):1512–1520. Qiagen Sample and Assay Technologies. 2010 QIAamp® DNA Stool Handbook. 2nd ed. Kvist L. Phylogeny and Phylogeography of European Parids. Department of Biology, University of Oulu, Finland, 2000.
Explorations | Fall 2011 37
The relationship between Internet access and the First Amendment, a thorny issue, has become more pressing with time. To understand this complicated problem, we must understand factors like legal precedents, economic influences, and the government’s role in regulation. By Robert Scoggins
Net Neutrality in the 21st Century
For almost two decades, the Internet has been revolutionizing how we work, learn, and live. But the decision on a single issue surrounding its regulation will determine to what extent—or even whether—one can access the Internet’s features in the future. Should the U.S. government enact legislation determining that Internet service providers (ISPs) can dictate different price levels for tiered levels of connection, all Americans face the prospect of limited connection speed and restricted accessibility to certain content depending on their ability to pay for the more expensive services. From the enactment of the First Amendment in 1791, through the first law on an information exchange medium in 1860, and continuing to the Federal Communication Commission’s rulings in the past few years, the government has continued to investigate how its citizens should connect to each other across distances. Nearing the end of my coursework as a communication major focusing on public policy, I have been seeking the next major issue to guide my interests and postgraduate work in the coming years. Because the Internet is arguably the most influential technology in society today, any change in its accessibility will affect the billions of people who will use it in the future. Consequently, “net neutrality” is becoming one of the most prominent issues for any policy specialist or computer owner. The debate stems from the question of whether an ISP has the right to grant more connection speed capacity, or bandwidth, to people who can pay more for the extended access, and to restrict the services offered online to those who can afford a higher price. To understand the basis of any official ruling or law on Internet freedom, we must first examine the origins and progression of the U.S. government’s stance on communication.
Overview of Communication Regulation
Net neutrality, insofar as it pertains to information exchange, hinges on the interpretation of the First Amendment in regard to the Internet. The crux of the text is, “Congress shall make no law . . . abridging the freedom of speech, or of the press.” This premise served as a federal guarantee of connection through verbal and printed exchange, because these were the only existing communication mechanisms upon passage of the Bill of Rights.1 The concept of speech expanded in 1860 when President James Buchanan commissioned the Pacific Telegraph Company of Nebraska and the California State Telegraph Company to construct an electric telegraph system between the East and West Coasts. As a result, concerns emerged over the concentration of control in such an important technological advance. Congress later passed the Pacific Telegraph Act of 1860 and determined that this new medium was necessary to advance telecommunication, was essential to transmit information over distances, and should be equally open to sale and reception to all constituencies. The bill stipulated that “messages received from any individual, company, or corporation, or from any telegraph lines connecting with this line at either of its termini, shall be impartially transmitted in the order of their reception, excepting that the dispatches of the government shall have priority.”2 The telegraphs were ruled to be “common carriers,” similar to public utilities, because their availability of use was both necessary and beneficial. The Communications Act of 1934 created the FCC, which would oversee the expansion of telecommunication and ensure that nothing would threaten the provision of exchange. With the invention of the Internet in the late 1980s and its rise in popularity as an academic and research mechanism, corporations that were providing dial-up service, or
connection through existing phone lines, to their own customers, strictly educational entities, wanted to expand accessibility to the general public.3 These corporations lobbied the government for the right to extend dial-up access, which was then given in 1992 under the notion that “additional uses will tend to increase the overall capabilities of the networks to support such research and education activities.”4 As a result of this addition to Title 42, Chapter 16 § 1862(g) of the U.S. Code, the ubiquity of use would extend to the original research institutions and to everyone who paid a connection fee. Although questions about the sustainability of largely unregulated access to the Web would begin to rise in the late 1990s, it was the March 14, 2002, ruling by the FCC that further propagated the issue by defining how the government would address cable modem access, which relies on television cables, in regard to the Internet. The FCC issued this ruling:

Cable modem service is properly classified as an interstate information service and is therefore subject to FCC jurisdiction. The FCC determined that cable modem service is not a “cable service” . . . [and] that cable modem service does not contain a separate “telecommunications service” offering and therefore is not subject to common carrier regulation.5
This ruling’s intention was not to subject cable users to price discrimination by their ISPs but rather to put those users in equal standing with those who would use the newest and fastest data transfer service, broadband. Although cable providers’ Internet services initially enjoyed “common carrier” designation, and therefore were subject to regulation for equality in distribution and pricing, that designation had come by way of the phone services those providers were also supplying. However, to level the playing field for broadband users, the FCC removed this distinction so that neither service would receive special federal designation.

Impartiality and the Internet
No one had yet fully addressed the issue of deregulation. Tim Wu’s “Network Neutrality, Broadband Discrimination,” published in 2003, would become a defining work regarding how the government should or should not regulate the Web.6 Inasmuch as the government could regulate neither cable nor broadband as telecommunication services, users of both media were subject to the stipulations of their providers. Wu’s publication suggested that broadband companies could now create a tiered level of access for their users and called the FCC’s ruling into question by challenging the system’s adherence to free market principles. He coined the term net neutrality and centered the issue on maintaining access to the Internet for everyone, regardless of ability to pay for faster speeds. The forms of legislation that Wu’s article called for would deny any artificial partiality to developers or ISPs, thereby “preserving a Darwinian competition among every conceivable use of the Internet so that only the best survive.” Wu likened the issue to that of job discrimination based on various prejudices. His article argued that the Internet, like a business, can operate efficiently only if all the best available operators and users are employed. Thus, granting everyone equal access to the same information and at the same speed—to keep the market both egalitarian in its opportunities and productive in its developments—was in the best interests of both ISPs and the general public.

Figure 1: Net neutrality legislation through early 2011 (sourced from govtrack.us). Legislators have proposed six pieces of legislation that were directed mainly at preventing discriminatory pricing or restricted access. However, despite their bipartisan sponsorship (five Democrat and four Republican authors), none of these bills has survived to see a final vote through both the Senate and the House.

109th Congress
- Internet Freedom and Nondiscrimination Act of 2006 (S. 2360, March 2, 2006). Prohibits blocking or modification of data in transit, with some limitations. Killed at session’s expiration.
- Communications Opportunity, Promotion and Enhancement Act of 2006 (H.R. 5252, March 30, 2006). Addresses net neutrality, but those facets were removed by amendment in the final copy. Killed at session’s expiration.
- Network Neutrality Act of 2006 (H.R. 5273, April 3, 2006). Amends H.R. 5252 to make existing neutrality provisions stricter. Defeated in committee.
- Communications, Consumer’s Choice, and Broadband Deployment Act of 2006 (May 1, 2006). Allows the FCC to study abusive business practices recommended by the “Save the Internet” coalition. Defeated in committee.
- Internet Freedom and Nondiscrimination Act of 2006 (H.R. 5417, May 18, 2006). Prevents broadband providers from discriminating on content access. Killed at session’s expiration.

110th Congress
- Internet Freedom Preservation Act (S. 215, January 9, 2007). Gives the FCC the authority to regulate net neutrality. Killed at session’s expiration (renewed in the House as H.R. 3458, where it expired).

112th Congress
- Internet Freedom Act (H.R. 96). States that the FCC does not have the right to regulate net neutrality. Currently in the House Committee on Energy and Commerce (as of 3/11/11).

Both Sides of Neutrality

Significant opposition to this concept has emerged from all sides of the political arena. Although major ISPs such as AT&T, Verizon, and Comcast have been the staunchest in their advocacy, organizations from the National Black Chamber of Commerce to the Tea Party movement have also rallied against net neutrality. The most commonly circulated quote from the detracting side comes from a ruling in Comcast Cablevision v. Broward County, in which a local court denied
a county ordinance that would force a cable company to give its competitors equal access to its communication equipment. In the ruling statement, the judge noted that net neutrality is like “forcing a printer to publish books, newspapers, periodicals, pamphlets, and leaflets on the government’s terms, and when it comes to government seizing command and control over freedom of the press, the First Amendment is anything but neutral.”7 Net neutrality proponents consist of both corporations, such as Google and Microsoft, and smaller political groups, such as Moveon.org and the Christian Coalition.8 Tim Berners-Lee, often lauded as the “father of the World Wide Web,” has even stated in his blog, “Yes, regulation to keep the Internet open is regulation. . . . But some basic values have to be preserved. . . . Democracy depends on freedom of speech. Freedom of connection, with any application, to any party, is the fundamental social basis of the Internet, and, now, the society based on it.”9

The Government’s Stance
As the conflict surrounding the issue escalated to a national level, a flurry of government activity pertaining to net neutrality arose, with both Congress and the FCC seeking to issue decisions. In 2005, FCC Chairman Michael Powell addressed net neutrality by positing that Internet users are entitled to four freedoms: content, applications, devices, and services.10 A change in leadership took place later that year. The new chairman, Kevin Martin, revised that position:
1. Consumers are entitled to access the lawful Internet content of their choice.
2. Consumers are entitled to run applications and services of their choice, subject to the needs of law enforcement.
3. Consumers are entitled to connect their choice of legal devices that do not harm the network.
4. Consumers are entitled to competition among network providers, application and service providers, and content providers.11
Although this revision does not mitigate the FCC’s desire to ensure neutrality, it does clarify how the commission views unlawful Internet practices; the reduced ambiguity lets the commission assert more sternly how consumers should be allowed to conduct themselves. In 2007, Comcast blocked certain users from sending large files because of the amount of bandwidth that these transfers would have consumed. The FCC stepped in and, in accordance with its capacity to ensure equal exchange, ruled that Comcast broke the law. However, Comcast sued, and the case came before the U.S. Court of Appeals for the District of Columbia Circuit. On April 6, 2010, the court ruled that the FCC did not have the power to act as a network manager. In a controversial 3–2 vote on December 21, 2010, the FCC released a new set of rules for net neutrality. One notable tenet in the 194-page document explicitly mentioned the concerns surrounding a government restriction on broadband companies’ ability to give different levels of service according to how much the consumer pays. Somewhat surprisingly, the commissioners wrote, “We are, of course, always concerned about anticonsumer or anticompetitive practices, and we remain so here.”12 However, the report forbade the restriction of legal content by those companies, allowing consumers to continue accessing sites such as the online movie-ordering site Netflix. Although no court decisions have contested
this yet, the report met a Republican outcry, most notably in the form of a bill in the House of Representatives, because the three Democratic members of the FCC had cast the affirmative votes. Congress has seen its share of opinions on the matter over the years, as Figure 1 demonstrates. Legislators have proposed six pieces of legislation that were directed mainly at preventing discriminatory pricing or restricted access. However, despite their bipartisan sponsorship (five Democrat and four Republican authors), none of these bills has survived to see a final vote through both the Senate and the House. The most current bill is also the only one to adopt a negative stance on the issue, coming 5 weeks after the commission’s late 2010 decision. The Internet Freedom Act, or H.R. 96, asserts that the FCC does not have the right to take a stance on the issue, because broadband access is not a telecommunicative medium. Introduced a month after the 112th Congress assumed control, it awaits a vote in the House Subcommittee on Communications and Technology as of May 3, 2011, and has the potential to be the first legislative stance on net neutrality.

Analysis
For now, the issue of net neutrality remains only half resolved. Broadband companies are explicitly granted the right to construct a tiered service in accordance with the FCC’s December ruling, but that same document also prohibits them from limiting the connection capabilities of their users to the extent that certain sites become inaccessible. Even so, a new piece of legislation that would repeal the FCC’s power to decide such matters awaits a vote in the House of Representatives. Such a bill is still subject to interpretation by the Supreme Court, which could rule that Congress has no authority to take a stance on the restriction by virtue of the First Amendment. The various analyses of what constitutes “speech” and to what extent it is “free” have been the centermost causes for dispute surrounding the Internet today. Although net neutrality was first raised 8 years ago, only one official government ruling has addressed the matter. All Americans now enjoy the same content and access at the same speed, but the contentious nature of the matter and unresolved interpretations suggest that neither Congress nor the net will remain neutral for much longer.

References
1. Cornell University. First Amendment, Bill of Rights. Available from http://topics.law.cornell.edu/constitution/billofrights. Accessed 2 December 2010.
2. Central Pacific Railroad. Pacific Telegraph Act of 1860. Available from http://cprr.org/Museum/Pacific_Telegraph_Act_1860.html. Accessed 15 November 2010.
3. Internet Society. A brief history of the Internet. Available from http://www.isoc.org/internet/history/brief.shtml#Commercialization. Accessed 10 December 2010.
4. Cornell University. Functions. U.S. Code § 1862. Available from http://www.law.cornell.edu/uscode/42/1862(g).html. Accessed 9 December 2010.
5. Federal Communications Commission. FCC classifies cable modem service as “Information Service.” Available from http://www.fcc.gov/Bureaus/Cable/News_Releases/2002/nrcb0201.html. Accessed 12 December 2010.
6. Wu T. Network neutrality, broadband discrimination. Journal of Telecommunications and High Technology Law 2003;2:141.
7. Americans for Tax Reform. Does “net neutrality” violate the First Amendment? Available from http://www.atr.org/net-neutrality-violate-first-amendmenta4189#. Accessed 10 December 2010.
8. ZDNet. Push for net neutrality mandate grows. Available from http://web.archive.org/web/20060615002336/news.zdnet.com/2100-9595_22-6051062.html. Accessed 12 December 2010.
9. Decentralized Information Group (DIG) Breadcrumbs. Net neutrality: this is serious. Available from http://dig.csail.mit.edu/breadcrumbs/node/144. Accessed 11 December 2010.
10. Federal Communications Commission. Four freedoms. Available from http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-243556A1.pdf. Accessed 10 December 2010.
11. Federal Communications Commission. Four consumer freedoms. Available from http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-260435A1.pdf. Accessed 10 December 2010.
12. Federal Communications Commission. In the matter of preserving open Internet broadband industry practices. Available from http://www.fcc.gov/Daily_Releases/Daily_Business/2010/db1223/FCC-10-201A1.pdf. Accessed 23 December 2010.
Man and Boat: A Voyage in the Art of Boatbuilding An interdisciplinary, hands-on approach to building the Banks dory yields a deeper, humanized understanding of maritime history. A Maritime Studies student learns about the relationship between man and boat as he builds his own Banks dory. By Brett Lindell
At Texas A&M Galveston, we are a school connected to the sea. I am a student of maritime studies. An interdisciplinary program, maritime studies looks at humanity and its relationship to the sea in terms of a vast array of fields, including history, law, anthropology, biology, and literature. Our goal is to study, discover, and preserve our connection to the bodies of water on which we live. My focus began as a crewmember aboard the 1877 barque Elissa. There I learned the ways and art of the sailor. I climbed hundreds of feet into the rigging, hauled lines, and scrubbed the decks. This experience inspired a passion in me to understand all things maritime. A fellow crewmember built a dory, and this set me on the path to build my own. But why a boat? Why this facet of maritime culture? To the maritime community, a boat is not just a boat. It is a deliverer of safe passage, a bringer of food and supplies, and a courier of messages good and bad. Maritime communities rely on boats to conduct daily life. To a certain maritime community, the Banks dory is all that and more. The Banks dory was the livelihood of fishermen in the Grand Banks off Nova Scotia; they depended on these boats to protect them in extreme conditions. Its history and reputation were why I chose this particular one to build. Among my influences were the people who surrounded me: my department professors, friends, family, and fellow crewmembers. Another influence was an old deckhand I spoke with who had rowed thousands of miles in dories. His tales inspired much curiosity and excitement.
As the storm came over him and his crewmate, he had only one option: row. His boat, a Banks dory, named for the Grand Banks in which he fished, groaned as he hauled on the oars. He was a doryman out of Gloucester, Massachusetts, fishing for halibut thousands of miles off the coast of Nova Scotia. As the storm and winds grew stronger, he rowed harder but with less effect. He continued for more than a day but came to accept that the schooner that had dropped him off had presumed his demise and returned reluctantly to port. He and his crewmate faced a hopeless situation but nonetheless decided to row toward Canada, their only chance for survival. On the second day his crewmate gave up, curled into a ball at the bottom of the dory, and died. He was alone. He became frostbitten. As he rowed for his life, he discovered that his palms were devoid of flesh and the bones of his hands were making
a clinking noise as they rubbed against the oar handles. He purposely froze his hands to the oars because he knew that if he let go one more time, he might never be able to grasp them again. After 5 days, he and his dory made landfall. His journey was not over, but he eventually made his way home, a hero. He was a man of steel in a boat of wood. These two indestructible things, the Gloucester dorymen and the Banks dory, would come to typify the ruggedness required to survive this fishing lifestyle. The man in the preceding story was Howard Blackburn. He survived against all odds in a hopeless situation. Many stories about these rugged men and their tales of survival exist, all centering around the mythic reputation of the Banks dory. The Banks dory is a wooden boat that ranges from 12 to 20 feet long. It has a flat bottom, hardwood frames, and strong, flaring sides. It is recognizable by its tombstone-shaped transom. Dories could be used on rapids-filled rivers or on the open ocean. It
is said that if you find yourself caught in bad weather in a Banks dory, lie down in the bottom, because the boat knows what to do better than you. The thwarts (seats) could be removed and the dories stacked like Dixie cups. When the schooner would arrive in the fishing area, the dories would be unstacked, fitted with gear, and manned by two men. They would row or sail out of sight of the schooner, sometimes with a tether, usually without. The dorymen would fish for halibut or cod and then row or sail their way back to the ship. This was hard work, but the Banks dory grew a reputation unmatched by working-class boats of both past and present. I first learned of dories from a gentleman named Chet. This man wore linen pants held up by leather suspenders, a button-up shirt, and a captain’s hat. He rolled his own cigarettes and had a gray beard that appeared to cover half his body. He seemed as though he had been plucked from a time a hundred years past. He told me stories about his home in
Massachusetts and the one time he rowed the entire East Coast. “You rowed a boat the entire East Coast?” I asked in disbelief. “Not a boat, a dory, and yes. I also rowed the Gulf Coast.” I stood there in shock: was this guy for real? I quickly plied him with questions. He answered every one. I credit him with my first true knowledge of dories. The next time I saw him, he gave me The Dory Book, by John Gardner. I read it and scanned the pictures carefully; eventually I gave it back to him. “Whenever you wanna build a dory, let me know,” he said. Me build a dory? I could never do that. “Okay,” I replied, “I’ll let you know.” “Here, read this book; this guy is my hero,” he said as he handed me a hardcover book carefully wrapped in plastic. It was Lone Voyager: The Extraordinary Adventures of Howard Blackburn, Hero Fisherman of Gloucester, by Joseph E. Garland. This was the story of Howard Blackburn, and it had a profound impact on me. I returned this book to Chet, but weeks later he moved somewhere back east. I had so many more questions, but I was now on my own. I decided (on a leap of faith) to build a Banks dory. This point is ultimately where my journey began. But how would I build a boat? I asked myself this question time and again. I talked with friends, professors, parents,
and my girlfriend. I read many books and tirelessly searched the Internet. I became so engrossed that I spent most of my day dreaming and building the dory in my head. I had reached a point of critical mass, and even though I didn’t have answers to every question, I knew I just had to start. I spent the first week lofting (drawing) the shape of the bottom onto 12-inch-wide boards of white pine. I installed oak cleats to keep them in place and then cut the bottom shape out. This may sound easy—that’s exactly what I thought—but I learned along the way that things appear much easier on paper. I chose to use a handsaw—not for purist reasons but economic ones. It took hours, and I was mildly happy with the results. Through this process I routinely consulted people. One of my more interesting conversations was with Geno Mondello. He builds dories in the Banks dory’s ancestral home of Gloucester for a group called International Dories. They are a nonprofit organization devoted to preserving and promoting maritime history related to Gloucester, specifically dories and dorymen. I wrote them an e-mail and was referred to Mr. Mondello with a warning: “Tell him I said to call you, or else he will probably hang up on you.” Great. I dialed the number. “Hello!?” a gruff voice answered the phone. “Hi, my name is Brett Lindell. James said to call you. I have some questions about
Banks dories,” I responded hesitantly but confidently.

“What are they?” Gee, so much for subtlety.

“Yes sir, I was wondering if you use a sealant between the laps of the strakes?”

“Yep.”

“And do you use clenched nails for the laps as well?”

“No. We switched to roves; we can’t find the nails anymore.” I panicked and forgot the rest.

“And would you care if I called you in the next few months? You see, I’m building a Banks dory, and I—”

“Sure.”

“Oh, OK. Well thank you, Mr. Mondello, I really appreciate it—”

“Yep.” Click.

And as quickly as it began, it was over, phew. Thank you, Mr. Mondello. No matter how many books you read or Internet forums you browse, nothing can substitute for a conversation, no matter how short, with someone who has great experience. Next, I added the oak frames, but first I had to loft them. Lofting is the process of taking Cartesian coordinates from the table of offsets and enlarging them to full size. This gives you a full-sized pattern meant to decrease the margin of measurement error when creating a full-sized boat from a miniature-sized plan. I then found the angles, cut and glued the frames together, and attached
them to the bottom. Throughout the building, I found that venting my frustration in an empty and desolate garage gave me little relief—and venting to my girlfriend proved hazardous to my health. So, I decided to chronicle this adventure on a blog. This would give me a great outlet to tell people about my project and, more important, allow me to network with other boatbuilders. My frames were solid, but my boat more resembled a basket at this point. The hardest part of this stage was convincing people who came by to inspect (gawk) that this was indeed a boat. There were many skeptics. I soon added a stem and a transom. These pieces were hard to come by and very costly, so to help ease my budget, I laminated several planks of poplar together and carved them out. They too were attached to the bottom. I then faired the entire boat. Fairing is the process by which the parts of the boat are prepared to receive the planking. The surfaces are made flush and continuous by using planes, spoke shaves, files, and sandpaper—all traditional tools. I moved on to the planking. The planks were 12 inches wide, with the plank for the sheer strake being 8 inches wide. The planks for the garboard strakes were long enough, 14 feet, but for the remaining strakes, I would have to scarph two boards together. Scarphing is the process of joining two boards with lengthwise cuts so that no weak spots or changes in thickness are created. There are many ways to do it, but I created a router jig. It worked extremely well, and I used strong glue and clamps to attach the boards. I repeated this
process for many planks throughout construction. Planking is a tedious task that involves precision, brute strength, and endless patience. The plank is clamped to one of the frames in the middle and then slowly bent onto the next frame. It is then lined up with marks and clamped. As the plank nears the stem and stern, bending it becomes extremely hard because of the angle. A combination of clamps, people, and sweat is the only tried-and-true method of success. Once the planks were nailed on, my dory began to resemble a boat. This turned the critics into believers, and I began to see the light. The boat was close to complete, but finishing was tedious and still took many hours. I needed to make gunwales, rub strakes, stringers, thwarts, and a breast hook; shape the transom and stem, fair the bottom, seal the laps, and drill holes for the painter and stern becket; and finally boat soup it. Each step could have volumes devoted to it. But “boat souping” deserves some mention. Boat souping is a traditional way of painting or treating wood. The process dates back to the Vikings. Boat soup is a combination of boiled linseed oil, turpentine, pine tar, and Japan drier. When mixed together it resembles crude oil and smells somewhat good and somewhat bad. Traditionally, boats would be painted in this (slathered rather) to protect against the elements. Workboats like the Banks dory would also receive a coat of paint on the outside to improve durability. The common scheme would be yellow on the hull, so that a dory could be distinguished from the
surface of the water, and dark green on the gunwales so it could be found in the fog. For now, I stuck with just boat soup. It was a smelly, messy affair. I let her dry and plotted her escape into the water. For many hours I pondered the proper way to launch her. Do I dare risk breaking a bottle of champagne on her bow? No way. I couldn’t do as the Greeks or the Egyptians had and offer human sacrifice—clearly unacceptable by today’s standards. I researched historical launches and christenings and decided to do a little of everything: a toast to Neptune, the four winds, a dash of libations, and most important, the presence of friends. Her launch was set for February 23, 2011. I named my dory the Tom Toby in honor of a Republic of Texas privateer. I launched her to the cheers and applause of a large gathering of friends and supporters. Nothing compared to the exhilaration I felt as she plowed through the water. On reflection, I thought about the relationship between man and boat throughout time. Sailors traditionally gave the term “she” to their boats because they relied on them for safe passage. As Howard Blackburn was rowing for his life, he relied on his dory to deliver him to safety, and that it did. For thousands of years seafarers relied on boats for their livelihood and safety. I too have relied on my dory, but for other reasons, and one day it may be for my safe passage. Although many differences exist between man of the past and man of today, one thing that will never change is the relationship of man and his boat.
Climate Change: Looking for Answers in Forest Soil In forests, the majority of carbon is stored not in the above-ground biomass, but in the soil. As the soil respires, carbon is slowly released. Factors like microbial respiration affect the rate at which carbon is released, and could potentially be manipulated to control carbon storage while maintaining forest health. By Justin Whisenant
The increase in atmospheric carbon dioxide (CO2) measured over the last century is generally accepted as the cause of rising global temperatures and will become an increasingly important environmental issue for the 21st century.1 Given the trend of past fossil fuel use, atmospheric CO2 concentrations
should be even greater; however, terrestrial ecosystems, such as forests, rangelands, and arctic regions, absorbed significant amounts of past fossil fuel emissions.1 One approach to reducing the rate of future atmospheric CO2 increase might be to enhance terrestrial ecosystem carbon capture, but doing so requires a detailed understanding of how carbon cycles through ecosystems.
Figure 2: Fertilization occurred at day 12 and was followed by a spike and then a continuous drop in microbial respiration for fertilized samples.
Carbon, in the form of atmospheric CO2, represents the principal greenhouse gas constituent and a major point of concern for those seeking to mitigate climate change. Carbon on Earth is not static; it cycles between the atmosphere, plants and animals, the ocean, and the soil. Many factors complicate the movement of carbon between these systems. Some of these factors, such as fertilization and nitrogen deposition from air pollution, are caused by human activity, so understanding them can help guide decision makers in their efforts to mitigate climate change. Forest ecosystems in particular sequester large amounts of carbon, so further understanding of carbon cycling in that setting would be beneficial. For many decision makers, forests are an obvious solution in carbon sequestration efforts. Most of the dry mass of trees comes from carbon, and forest managers already have many tools to accurately measure the aboveground mass of trees. Focusing on the visible and commercially studied portions of trees, however, might cause people to overlook the belowground carbon stores in forests. In most forests, most stored carbon actually exists in the soil, not above ground. Globally, soils store twice as much carbon as aboveground biomass.2 Roots, limbs, and leaves all collect on the forest floor, with much of their stored carbon sequestered in the soil. Carbon returns to the atmosphere only as soil microbes
Explorations | Fall 2011 45
slowly decompose the organic matter. This metabolic process is similar to that of animals in that it results in CO2 release during respiration. In a sense, soil breathes, and the rate of this breathing strongly correlates with the rate at which soil microbes decompose organic matter in the soil. Anything that slows the rate of microbial respiration would increase the amount of carbon stored in the soil; therefore, successful carbon sequestration efforts must consider soil carbon storage and how to manipulate it.

Experiment Description

The experiment we conducted used a customized respiration measurement setup (Figure 1) to measure microbial respiration in forest soils and how it is affected by additions of nitrogen and phosphorus. Nitrogen and phosphorus are the most common nutrients in fertilizer for managed forests. Microbial respiration reflects carbon cycling rates in soils, so understanding how forest management practices, such as fertilization, affect that rate is
important. The implications of this research are particularly important for carbon sequestration efforts, including carbon credits for forested land, which cannot be accurately measured until we fully understand the mechanisms controlling carbon movement in ecosystems. We hypothesized that adding nitrogen (particularly) and phosphorus to soil would decrease
microbial respiration, as observed in many field measurements where less CO2 was released from soil at sites with high levels of fertilization. At Texas A&M University's Forest Ecosystem Science Laboratory, we performed laboratory incubations on soil samples collected from a loblolly pine (Pinus taeda) forest in Florida. We then assessed how microbial respiration responded to various combinations of nitrogen and phosphorus in fertilizer. In the field, soil respiration includes both root and microbial respiration. Measuring microbial respiration from soil samples in a laboratory setting allowed us to single out the portion of total soil respiration attributable solely to soil microbes. Two months of measurements indicated that nitrogen and phosphorus, common components of fertilizer, suppress soil microbial respiration independently of root effects. Nitrogen's suppressive effects were stronger than those of phosphorus. We infer from these results (Figure 2) that commonly used forest management strategies can make a worthwhile contribution to mitigating climate change. It was already known that growing the aboveground biomass of forests is a good way to capture carbon from the atmosphere and store it where it cannot contribute to greenhouse warming.

"Humanity is causing more widespread deposition of nitrogen in the world's forests"
The results of this research, and of similar concurrent research, indicate that some management practices that increase tree carbon storage also increase soil carbon storage. Forested lands take on even greater significance when we understand their positive effects in combating climate change.

Conclusion
Decreased microbial respiration, due to fertilization, suggests that managed, fertilized forests will store more carbon than unfertilized forests. Until recently, most studies of fertilized forests attributed decreased soil respiration to decreases in root growth alone, with the primary hypothesis that fertilized soils do not require the tree roots to extensively mine the soil because nutrients can be obtained more easily in fertilized soils. Fewer roots would mean that less carbon would be input to the soil, potentially reducing soil carbon storage.3 However, more recent research, including this project, has found that suppressed microbial respiration with fertilization can counteract the decrease in root growth. One implication of this research is that specialized practices may be developed that maximize both tree growth and the ability of forests to sequester carbon in the soil. Notably, while our nitrogen additions reflected common fertilization levels, humanity is causing more widespread deposition of nitrogen in the world's forests. Many forests near urban areas receive nitrogen from industrial and transportation emissions. These additions are much more widespread than forest fertilization and could affect microbial
respiration similarly to the fertilization study reported here, although how much nitrogen is needed to suppress microbial respiration is unclear. We hope that increased understanding of the influence of intentional fertilization on soils can eventually be extended to those forest soils receiving added nitrogen from air pollution.

Acknowledgment
I thank my research advisor, Jason Vogel, for supplying the materials and expertise that facilitated the success of this project.

References

1. Intergovernmental Panel on Climate Change (IPCC). Climate Change 2007: Synthesis Report. Contribution of Working Groups I, II, and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (Core Writing Team, Pachauri RK, Reisinger A, eds.), p. 104. Geneva, Switzerland: IPCC, 2007.
2. Rapalee G, et al. Soil carbon stocks and their rates of accumulation in a boreal forest landscape. Global Biogeochemical Cycles 1998;12(4):687–701.
3. Janssens I, et al. Reduction of forest soil respiration in response to nitrogen deposition. Nature Geoscience 2010;3:315–322.
Figure 1: Microbial respiration measurement station. Air is pumped by (1) a compressor through (2) a flow regulator and into (3) a CO2 scrubber. An air sample (4) from a soil jar is injected into (5) a sample loop, which sends a precise quantity of the sample into (6) a gas analyzer to determine how much CO2 was respired.
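The station in Figure 1 ultimately reports a CO2 concentration for each soil jar. Converting that reading into a respiration rate comes down to a small ideal-gas calculation, sketched below with assumed example values; this is an illustration of the general technique, not the laboratory's actual code or data.

```python
R_GAS = 8.314  # J/(mol*K), molar gas constant

def respiration_rate(delta_co2_ppm, hours, jar_volume_l, soil_mass_g,
                     temp_k=298.15, pressure_pa=101_325):
    """Micrograms of CO2 carbon respired per gram of soil per hour."""
    # Moles of gas in the jar headspace, from the ideal gas law n = PV/(RT).
    n_air = pressure_pa * (jar_volume_l / 1000.0) / (R_GAS * temp_k)
    mol_co2 = n_air * delta_co2_ppm * 1e-6  # ppm is a mole fraction
    ug_carbon = mol_co2 * 12.011 * 1e6      # count only the carbon mass
    return ug_carbon / soil_mass_g / hours

# Hypothetical reading: a 500 ppm rise over 24 h in a 1 L jar holding 50 g of soil.
rate = respiration_rate(500, 24, 1.0, 50.0)
```

Comparing such rates between fertilized and unfertilized jars is what reveals the suppression effect the article describes.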
Preventing Tube Failure in a Nuclear Reactor Nuclear power has become an increasingly important energy source. To ensure safety, nuclear engineers must prepare for worst-case scenario accidents. The Systems Engineering Initiative at Texas A&M University analyzed the probability of tube failure in a heat exchanger during a proposed worst-case accident. By Christopher M. Chance, Jordan Green, Alan E. Lee, Chris Pannier & Robert J. Seager, Jr.
Nuclear power—though emission free, efficient, and cost-effective—suffers the stigma of past accidents and public fear stemming from a lack of understanding. In the years to come, increasing pollution, resource scarcity, and power demand will increase nuclear power's importance as an energy source. The Fukushima Daiichi accident in Japan, caused by one of the world's largest combined earthquake and tsunami disasters, demonstrated the ever-present risk of failure of engineered systems. The accident shows the importance of anticipating and analyzing possible failure scenarios. Risk analysis is vital to safety in any industry, as the engineering decisions leading to the 2010 Deepwater Horizon oil spill showed. However, the need for analysis is higher for the nuclear industry because of the perceived consequences of a nuclear disaster. The nuclear power industry addresses risk by continual analysis of plant components, systems, and procedures. To ensure safety, engineers design for worst-case scenarios. As the U.S. Nuclear Regulatory Commission—this country's nuclear industry regulating body—requires, these engineers predict how each component would react under extreme conditions, determining how to respond to an accident to minimize risk to public safety and welfare. Engineering analysis maximizes safety and reduces cost for nuclear energy.

Overview
Studying one of these proposed worst-case accidents, the nuclear research group of the Systems Engineering Initiative at
Texas A&M University, including nuclear, mechanical, and civil engineering undergraduates, analyzed the probability of the failure of tubes in a heat exchanger at the South Texas Project Electric Generating Station (Figure 1). This hypothetical accident was considered as part of a chain of events that could lead to release of radioactive material or core meltdown. The goal of conducting this research was to give the engineers at the station a more detailed understanding of the tube strength under an accident condition that has never occurred in nuclear power plant operation. The station operates two pressurized water reactors that, at peak consumption, can each power more than 650,000 homes in Houston, Austin, San Antonio, and Corpus Christi. During normal operation of this type of reactor, pressurized water carries away heat from nuclear fission in the reactor core and flows through the reactor coolant system to the steam generator. There, the hot pressurized water transfers its heat to cooler unpressurized water enclosed in the secondary system. Owing to its lower pressure, the water in the secondary system boils and turns to steam, which spins the turbine and ultimately produces electricity. Once the pressurized water in the reactor coolant loop transfers its heat in the steam generator, the now cooler pressurized water returns to the core to be heated again and the cycle continues. Even when the plant is shut down for refueling and repairs, nuclear fission in the core continues to produce a small amount of decay heat. Heat exchangers remove this heat from the plant.1 Because the exchangers are not active during normal operation, they are separated from the reactor coolant system by two
isolation valves in series. These heat exchangers are not exposed to the high temperatures and pressures that occur during normal operations. This system is shown in Figure 2. The accident scenario involved exposing these heat exchangers to reactor coolant during normal reactor operation. For the heat exchangers to be exposed to the hot pressurized reactor coolant, the two fail-safe isolation valves must fail in the open position. The probability of such an accident is believed to be so low that the reactor designers did not consider it in their specifications—much like the unexpectedly large earthquake and tsunami at Fukushima Daiichi, Japan. However, a utility may analyze such an accident to determine both potential risks to plant safety and possible solutions. This scenario describes a loss-of-coolant accident, one of the most recognized and severe that can occur in a nuclear power plant.2 The team's research consists of analyzing these heat exchangers under this extreme condition. A typical heat exchanger consists of a hot fluid flowing through many small tubes surrounded by a cool fluid. Through heat transfer, the hot fluid gives its energy to the cool fluid. Important types of tube failures in heat exchangers include metal erosion, water hammer, vibration, and thermal fatigue. The heat exchanger that we analyzed contains an automobile-sized, bullet-shaped outer shell surrounding a bundle of hundreds of long, narrow, U-shaped stainless-steel tubes (Figure 3). When the isolation valves are open, the hot water from the core flows through these tubes and is cooled by water surrounding them. Nuclear fission in the reactor core makes the water in the reactor coolant system radioactive. (This radioactivity is acceptable because the water and heat exchangers are confined within a containment structure.) If the tubes failed, the hot, radioactive water inside the tubes would mix with the surrounding cool, nonradioactive water and then breach the containment structure. Substantial loss of cooling water from containment could overheat the core, leading to damage and possible meltdown.

Figure 1: The South Texas Project Electric Generating Station in Matagorda County, Texas (Source: Acosta A, "Texas Plant Watching Crisis," The Eagle, March 17, 2011, http://www.theeagle.com/business/texas-plant-watching-crisis)

Static Failure Analysis
The U-bend pressure tubes within the exchanger are made of high-strength carbon steel. Our initial analysis compared the design and material strengths of the tubing material to the stresses that our particular accident would cause, on the assumption that the tubes had no cracks or defects. To determine whether unflawed tubes would rupture at the accident pressure, we applied elastic failure theories. Doing so let us determine the amount of stress that would cause the tubes to fail as well as what stresses the hot, high-pressure coolant would cause in the tubes. First, we plotted the types of stresses on the tube that the accident would create. In one set of calculations, we assumed
that the internal pressure alone affected the stresses; in the second set, we considered the combined effect of internal pressure and the temperature difference on the stresses. These plots and calculations showed us how the stresses varied along the tube and revealed that the tube material would withstand these stresses. Using these same theories and assumptions, we then calculated what stresses would cause the tubes to fail. From these results, we concluded that even the most conservative elastic failure law predicted a failure stress of more than twice what the accident would cause. We analyzed the bend in the U-tubes with a numerical approximation, reaching the similar conclusion that unflawed tubes would not fail under the accident conditions.3
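The elastic check described above can be illustrated with the standard thin-walled pressure vessel formulas (hoop stress p·r/t, axial stress p·r/2t). All numbers below are assumed, illustrative values, not the plant's actual design data:

```python
def tube_stresses(p_mpa, r_mm, t_mm):
    """Thin-walled pressure vessel stresses (MPa) for internal pressure p."""
    hoop = p_mpa * r_mm / t_mm           # circumferential: sigma_h = p*r/t
    axial = p_mpa * r_mm / (2.0 * t_mm)  # longitudinal: sigma_a = p*r/(2t)
    return hoop, axial

# Assumed values: 15.5 MPa coolant pressure, 8 mm tube radius, 1 mm wall.
hoop, axial = tube_stresses(15.5, 8.0, 1.0)

# The article's finding was that even the most conservative criterion predicted
# a failure stress more than twice the accident stress; with an assumed
# 290 MPa failure stress, the same margin check reads:
assert hoop < 290.0 / 2.0
```

The hoop stress dominates, which is why it is the natural quantity to compare against the material's failure stress.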
Probabilistic Failure Mechanics
After determining that unflawed tubes would not fail, we investigated cracked or flawed tubes. This path of analysis was realistic because the tubes can have manufacturing defects or become flawed through use. Treating the tubes as standard thin-walled pressure vessels, we used the theories of fracture mechanics involving the propagation of cracks to estimate the probability of tube failure under the accident conditions. This computer simulation offered a realistic assessment of the failure probability. The simulation randomized input variables, assuming a mean and standard deviation for each variable. We randomized variables such as crack depth and crack length and ran the simulation 1 million times. Because the input is randomized, achieving a realistic outcome takes many simulation runs. Each trial yielded a certain local stress that the accident conditions would apply to the flawed tube. We considered cases to be tube failures when this stress was greater than or equal to the maximum stress that the material could support. Knowing the number of simulations that demonstrated tube failure and the total number of simulations run, we calculated a failure probability. We made assumptions about crack lengths and depths because these data were not available from the nuclear plant. The simulation suggests that a flawed tube has less than a 1-in-1000 chance of failure, which is very low. To expand this probability of failure to multiple tubes, we used a statistical method that assumes an equal probability of failure for each tube.4 We first considered 20 tubes and calculated the probability of exactly one tube failing, then two tubes, then three, and so on. Our results indicated that the probability of no tube failure was greatest for a total of 20 tubes. When we considered 3000 tubes, a value closer to the actual number of tubes in the exchanger, we calculated an approximately 70% chance that at least one tube would fail. The mean number of failures indicated that, with 3000 tubes, a single tube failure was most likely. Although this value may seem high, it is sensible because we considered all 3000 tubes to have defects.

Figure 2: Simplified diagram of reactor coolant system and heat exchanger relationship (Source: adapted from "Pressurized Water Reactor Systems," USNRC Technical Training Center, http://www.nrc.gov/reading-rm/basic-ref/teachers/04.pdf)

Figure 3: Heat exchanger containing U-tubes (Source: Heat Exchangers: Shell and Tube, Southwest Thermal Technology Inc., 2010, http://www.shell-tube.com/)

Leak Rate Analysis

These tubes are important primarily because, if breached, they allow reactor cooling water to leave containment. Therefore, how quickly water leaves the failed tubes is an important factor in determining whether enough water will be available to prevent a meltdown. We used a set of equations to relate the crack opening area and the pressure difference caused by the accident to the leak rate.5 For one tube failure, the leak rate was a few gallons per minute.4 We assumed a direct linear relationship between the leak rate and the number of failed tubes: that is, two failed tubes would produce twice the leak rate of one failed tube. Although a few gallons per minute may sound substantial, the coolant flows at about 2,000 gallons per second during normal operation. Water leaking from the containment structure will enter a secondary loop of water outside the containment dome. Operators in the reactor control room will be immediately notified of a leak and will take the necessary actions to stop it before the core can be damaged. The leak rate we calculated proved manageable if the operators react quickly enough to prevent too much water from leaving containment.
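The probabilistic procedure described above—randomized flaw sizes feeding a stress check, a binomial extension across tubes, and a linear leak-rate scaling—can be sketched as follows. This is an illustrative reconstruction, not the team's actual code: the crack-size distributions, the local-stress model, the failure stress, and the per-tube leak rate are all assumed values.

```python
import random

random.seed(1)

FAILURE_STRESS = 290.0  # MPa; assumed maximum stress the tube material supports

def trial_stress():
    # Assumed distributions for crack depth and length (mm); a real analysis
    # would fit these to plant inspection data.
    depth = max(0.0, random.gauss(0.2, 0.05))
    length = max(0.0, random.gauss(2.0, 0.5))
    # Assumed local-stress model: accident stress amplified by flaw size.
    return 130.0 * (1.0 + depth * length)

def failure_probability(trials=100_000):
    # Monte Carlo: count trials whose local stress reaches the failure stress.
    failures = sum(1 for _ in range(trials) if trial_stress() >= FAILURE_STRESS)
    return failures / trials

def p_at_least_one(p_single, n_tubes):
    # Binomial model: equal, independent failure probability for each tube.
    return 1.0 - (1.0 - p_single) ** n_tubes

def total_leak_gpm(n_failed, leak_per_tube_gpm=3.0):
    # Assumed linear scaling: n failed tubes leak n times as much as one.
    return n_failed * leak_per_tube_gpm
```

With an assumed per-tube failure probability of 4 in 10,000 (consistent with "less than 1 in 1000"), `p_at_least_one(4e-4, 3000)` gives roughly 0.70, echoing the approximately 70% figure in the article.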
Nuclear power systems are heavily regulated, especially in the United States, to ensure the public’s safety and well-being. To determine total risk, one must calculate each component’s reliability. For the heat exchanger tube failure, with the remote risk of core meltdown, the nuclear team studied the effects of a beyond-design-basis accident on the tubes. The results will give the station’s engineers a new component analysis. The analysis revealed that during the unlikely event of one of these accidents, unflawed tubes of the exchanger would not fail. To further our research, we considered that the tubes had a certain distribution of flaws and evaluated their probability of failure as well as the estimated leak rate from the flawed tubes, thus demonstrating the low probability of such an occurrence and supplying the station with a useful analysis.
This research was conducted as a project of the Systems Engineering Initiative at Texas A&M University. The authors would like to thank nuclear engineering undergraduates Christopher Chance, Alan Lee, and Robert Seager for their significant contributions to the research; Dr. Cable Kurwitz and Matthew Solom of the nuclear engineering department for their advising; and the Nuclear Power Institute, the Texas A&M University Nuclear Engineering Department, and the South Texas Project Electric Generating Station for supporting the project.
References

1. South Texas Nuclear Project. Design Basis Document: Residual Heat Removal System, Rev. 5.
2. Wesley D. Interfacing Systems LOCA (ISLOCA) component pressure capacity methodology and typical plant results. Nuclear Engineering and Design 1993;142:209–224.
3. Lee S, Chang Y, Choi J, Kim Y. Failure probability assessment of wall-thinned nuclear pipes using probabilistic fracture mechanics. Nuclear Engineering and Design 2006;142:350–358.
4. Hasan Z, King M, Green J, Lee A, Pannier C. Probabilistic failure analysis of a residual heat removal heat exchanger during a postulated loss-of-coolant accident. ANS PSA 2011 International Topical Meeting on Probabilistic Safety Assessment and Analysis. Wilmington, NC: March 13–17, 2011.
5. Majumdar S, Kasza K, Park J, Bakhtiari S. Prediction of failure pressure and leak rate of stress corrosion cracks. 4th CNS International Steam Generator Conference. Toronto, Canada: May 5–8, 2002.
Preventing the Spread of Valley Fever
Valley fever has become a significant concern in the United States. After humans, dogs are the most commonly affected organisms. Using a model of nonlinear differential equations, the effectiveness of avoiding the burial of dogs in shallow graves as a preventive measure can be determined. By Amy Clanton, Laura Harred, Chris Jones & Devin Light

Coccidioidomycosis, also called valley fever, is a disease that results from the fungal agents Coccidioides immitis and C. posadasii. Infection occurs through inhalation of the fungus residing in soil or dust on the ground; ground or soil disturbances release C. immitis into the air. Although valley fever is not transmittable from host to host, this disease has already become endemic (native) in arid regions of the Western Hemisphere such as West Texas, Arizona, Southern California, Mexico, Brazil, and Honduras.1 From 2000 to 2007, the incidence rate of coccidioidomycosis in California more than tripled.1 As the population grows in the southwestern United States, we can expect more and more people to become exposed to the harmful fungus and risk contracting valley fever.2 Information on treatment of the more serious form of valley fever is lacking, which is why finding out how to effectively prevent this infection from spreading is important.3 For the United States, valley fever is native only to the southwestern part of the country, but travelers visiting this area can spread the disease elsewhere. Burying deceased hosts that still contain the pathogen allows the fungus to reenter its reproductive state within the carcass, multiply, and then reinfect new susceptible hosts when the burial soil is disturbed.4 The goal of this study was to create a mathematical construct that models the infection rates of dogs residing in valley fever–endemic areas of Texas to display the effectiveness of different preventive measures. We will focus on preventing the burial of infected hosts.

The Valley Fever Model

Our model is an extension of the SEIR model that we will call our SIIRE model. The acronym SIIRE stands for Susceptible, Infected without symptoms, Infected with symptoms, and REcovered. For simplicity, we will call the infected population without symptoms asymptomatic and the infected population with symptoms symptomatic. Each category represents a compartment in our model, which makes the following assumptions:

1. Valley fever is not contagious.
2. Each new infection is the result of environmental contact.
3. Antifungal medicines do not shorten recovery rate.
4. Climate is not a factor.
5. Sixty percent of infected hosts are asymptomatic; 40% are symptomatic.
6. The data for Webb County are representative of the entire dog population in endemic regions.
7. The soil is initially free of Coccidioides infectious agents, and there is one confirmed case.

Figure 1 shows our simulation of the valley fever infection. Its compartments are the susceptible, asymptomatic, symptomatic, and recovered populations at time t, plus the pathogens in the environment at time t; they are linked by the birth rate (based on the entire population), the transmission rates of asymptomatic and symptomatic infection, the recovery rates of asymptomatic and symptomatic infection, the burial rate, the mortality rates of hosts and of pathogens, and the mortality rate due to infection. We formed our differential equations for the analysis of the model on the basis of the rates entering and leaving each compartment.

Figure 1: Simulation of the valley fever infection

Analysis of the Valley Fever Model

To use our model in the analysis, we took a population of 8,000 dogs and set the transmission rates to 60% asymptomatic and 40% symptomatic. We set recovery rates to 1 for asymptomatic and 1/9 for symptomatic because recovery time for each population averages 1–9 months.5 The birth and death rates for dogs in the United States are 11.4% and 7.9% per year, respectively, and the death rate of pathogens in the environment is approximately 12% per month.6,7 To find the transmission rate of symptomatic dogs, we plotted the data from Webb County and found a curve that fits these data more accurately than a linear approximation. After finding the disease-free and endemic equilibria of our model, we concluded that the disease-free equilibrium is stable when the burial rate is less than 57%. By plotting our model with the burial rate set to 40%, you can see that a disease-free equilibrium is reached rather quickly, at around 75 months (see Figure 2).

Figure 2: Simulation of the valley fever infection when the burial rate is 40% (time series of host population size)
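A compartment model of this kind can be sketched as a system of ordinary differential equations stepped forward with the explicit Euler method. This is an illustrative reconstruction, not the authors' code: the recovery, birth, death, and pathogen-decay rates follow the article, while the transmission coefficient `beta`, the carcass-shedding factor `shed`, and the infection-mortality rate `mi` are assumed values, so the sketch reproduces the structure of the model rather than its published results.

```python
# SIIRE-style compartment sketch: Susceptible, asymptomatic Infected,
# symptomatic Infected, Recovered, plus pathogens in the Environment.
def simulate(months=200, dt=0.1, burial=0.40):
    S, Ia, Is, R, E = 7999.0, 0.0, 1.0, 0.0, 0.0  # one confirmed case
    b = 0.114 / 12           # dog birth rate per month (11.4% per year)
    m = 0.079 / 12           # dog death rate per month (7.9% per year)
    mp = 0.12                # pathogen death rate per month
    ga, gs = 1.0, 1.0 / 9.0  # recovery rates (1 and 1/9 per month)
    pa, ps = 0.6, 0.4        # 60% asymptomatic, 40% symptomatic
    beta = 1e-4              # assumed environmental transmission coefficient
    shed = 0.01              # assumed pathogen release from buried carcasses
    mi = 0.01                # assumed extra mortality of symptomatic dogs
    for _ in range(int(months / dt)):
        N = S + Ia + Is + R
        infection = beta * S * E  # new infections come only from the environment
        dS = b * N - m * S - infection
        dIa = pa * infection - (m + ga) * Ia
        dIs = ps * infection - (m + mi + gs) * Is
        dR = ga * Ia + gs * Is - m * R
        dE = shed * burial * (m + mi) * Is - mp * E
        S += dS * dt; Ia += dIa * dt; Is += dIs * dt
        R += dR * dt; E += dE * dt
    return S, Ia, Is, R, E
```

With the burial parameter at 0.40, the symptomatic compartment decays toward a disease-free equilibrium, mirroring the qualitative behavior shown in Figure 2.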
When the burial rate is greater than 57%, the symptomatic population will eventually reach an endemic equilibrium (see Figure 3). Figure 3 shows the infection over a longer interval than Figure 2 because the symptomatic population takes a long time to reach an endemic equilibrium. If Figure 3 were plotted on the same interval as Figure 2, it would appear to reach a disease-free equilibrium, but over time the disease returns and reaches an endemic equilibrium. In short, when the burial rate of dogs is less than 57%, the infection reaches a disease-free equilibrium, meaning that no evidence of infection remains in the population; when the burial rate is greater than 57%, the infection eventually reaches an endemic equilibrium.

Figure 3: The extreme situation when the burial rate is set to 80% of dogs (time series of host population size)

Conclusion

Our goal was to determine the effectiveness of preventive measures relating to valley fever, in particular avoiding the burial of deceased dogs that may have been infected with the disease. We created a model of the system that includes a parameter representing the use of this preventive measure. Modeling software allowed us to create and solve this model with several parameters for which we could find a constant value from real-world data or could vary their values to observe the effects. We gained insight from our numerical and graphical results and by considering boundary conditions that naturally arise when dealing with a biological system. To ensure a stable disease-free equilibrium, it is best not to depend solely on burial avoidance but instead on a combination of several preventive measures, including decreasing the infection rates and improving treatments. Ultimately, the model has several strengths and a few weaknesses. We designed the model to help identify, with minimal computing time, the factors with the greatest effect on the stability of the endemic and disease-free equilibria. The model gives a clear picture of the major mechanisms behind a valley fever infection and the efficacy of burial avoidance. We found a curve that fits the data available to us better than a linear approximation, giving us a lower residual and showing that our model is fairly accurate, at least within the time frame of our study. Still, we could refine the model for further study. We drew our conclusions from 10 years of data from a single county in Texas. A more comprehensive study may include more data over a longer period. Future studies might also include some factors that we considered negligible. For example, remission is possible but often takes several years after infection and occurs in only a small portion of the population. Also, although the influence of weather on valley fever infection has been controversial, it is worth examining to improve the model's accuracy.

Acknowledgments

We thank Alfonso Clavijo, of the Texas Veterinary Medical Diagnostic Laboratory, and Majid Bani-Yaghoub, of the Texas A&M Department of Mathematics, for assistance with this project.

References

1. Chang LS, Chiller TM; Centers for Disease Control and Prevention (CDC). "Infectious diseases related to travel." 2010. Available from http://wwwnc.cdc.gov/travel/yellowbook/2010/chapter-5/coccidioidomycosis.aspx.
2. Valley Fever Vaccine Project of the Americas. "A vaccine for valley fever." 2011. Available from http://valleyfever.com/.
3. Hector RF, Laniado-Laborin R. Coccidioidomycosis—a fungal disease of the Americas. PLoS Medicine 2005;2(1):e2. doi:10.1371/journal.pmed.0020002.
4. Maddy KT, Crecelius HG. Establishment of Coccidioides immitis in negative soil following burial of infected animals and animal tissues, pp. 309–312. In: Ajello L (ed.), Coccidioidomycosis: Papers from the Second Symposium on Coccidioidomycosis. Tucson, Ariz.: University of Arizona Press, 1967.
5. Mayo Foundation for Medical Education and Research (MFMER). "Valley fever." Available from http://www.mayoclinic.com/print/valleyfever/DS00695/DSECTION=all&METHOD=print.
6. New JC, et al. Birth and death rate estimates of cats and dogs in U.S. households and related factors. Journal of Applied Animal Welfare Science 2004;7(4):229–241.
7. Friedman L, et al. Survival of Coccidioides immitis under controlled conditions of temperature and humidity. American Journal of Public Health and the Nation's Health 1956;40(10):1317–1324.
This is Not a Lizard
The medium of scratchboard offers a unique way to create highly realistic images. This artist balances his affinity for realism with conceptual art, in which an idea takes precedence over aesthetics. By Jacob Patapoff
To explain my piece, I should start with why I chose the medium I worked with. I did my piece on a clayboard, also known as a scratchboard. A scratchboard is composed of a layer of white clay coated with black India ink. The artist can employ any method to scratch off the black layer to reveal the white underneath. I first encountered scratchboards during my sophomore year in high school, and my early scratchboard projects looked nothing like my present piece. Even though the first scratchboard I made was a disaster, I didn't abandon this special medium. Scratchboards have several features that I like. You can get as detailed as you want
by using methods such as stippling (which I don’t suggest doing, since it takes a long time), or you can scrape away large areas to reveal the underlying white. The only tool I use now is an X-Acto knife, with which I can carefully place the most minuscule dots and scrapes to create or reproduce any image I choose. With the control that the X-Acto knife offers, I was hooked: no colors to mix, no pencils to sharpen—just a blade to replace every so often, in exchange for intricate detail. But no one told me that getting that level of detail requires many hours of work, practice, and dedication. One positive aspect of creating this art is how I can sit down, take time to myself, clear my mind, listen to music, and create art. What could be better? For all these reasons, scratchboards have become my drug. Having described my love–hate relationship with my favorite art medium and method, I’ll now explain why I do what I do. Ever since I first knew what art was, my artistic vice has been the closest imitation of reality. Photorealistic paintings and a variety of realistic sculptures are a few examples. Initially I loved this type of art because it was visually appealing and I could stare at it for extended periods. Examples that resonated with me are artist Gian Lorenzo Bernini, who formed a chunk of inanimate, immobile marble stone into a perfect representation of human form that can convey emotion or movement, and Chuck Close, who created an eight-by-seven-foot painting of a person that has the same quality as a photograph. These masterpieces shatter my universe every time I see them. I have made scratchboards by using the stippling technique that I mentioned earlier. Stippling involves making hundreds of thousands, even millions, of tiny specks to create one large image (or at least that is how it was for me). Ultimately, I got the final product I set out for: an image that could pose as a photograph. However, I wasn’t happy with the result. 
Maybe it was because countless artists had traveled this specific artistic journey. Maybe it was because I wanted more out of my artwork than a pretty picture. Whatever the reason, I was unfulfilled. For a long time, I reflected on what I liked in art and what I wanted to convey in my own creations. I discovered the key element that appealed to me in my favorite art genres: an artist can deceive human senses. An artist
can trick someone into believing, even for a split second, that the art creation is real. In reality, it is only an image imprinted on some material: not living, not breathing, simply a representation of what it truly is. Along with this realization came the introduction of another type of art: conceptual. In conceptual art, the concept or idea takes priority over the aesthetics of a piece. One piece that greatly influenced my work shown here is by artist René Magritte, titled The Treachery of Images (Figure 1). In his painting he has “Ceci n’est pas une pipe” (“This is not a pipe”) displayed beneath a painting of a pipe. He is trying to convey that the painting is not an actual pipe but rather an image of a pipe.

Figure 1: René Magritte’s The Treachery of Images

With this piece and many others came my gradual acceptance of conceptual art. Not only did I eventually accept it, I embraced, adapted, and emulated it, which leads me to the explanation of my current piece. If I were to desert my initial love of realistic art, though, I would have to disown myself. My piece embodies the conceptual approach to art while drawing its heavier visual influence from photorealism and selected pieces from Hellenistic, Renaissance, Baroque, and other periods of sculpture. The method I used to scratch away the black India ink sets my piece apart from being purely photorealistic. Instead of using the typical stippling method that I became so accustomed to, I used a seemingly countless number of lines, scratches, and repeated X shapes to create my image. Anything more than a quick glance gives away that this is not a photograph. However, when I place in the viewer’s mind the thought that this is something that it isn’t—even if just for a moment—I achieve my goal. Finally, the actual subject matter isn’t essential to the overall purpose of the piece. However, I chose a lizard as my subject for two reasons. First, reptilian scales have an immense level of detail, and I wanted to challenge my artistic abilities. Second, it was something I could stare at for a great deal of time. So whether you’re walking up to a wall that my piece is hanging on or you catch a quick glance of it out of the corner of your eye, that moment of belief is all I wanted to achieve.
Explorations | Fall 2011 55
Market pressures have shifted cattle feeding practices away from grain and corn. However, previous research has found higher bacterial shedding in cattle fed wet corn distillers grains, and this increased presence of Salmonella may be linked to increased antimicrobial resistance. By Santiago Ramirez
When oil prices skyrocketed during the summer of 2008, ethanol emerged as an attractive alternative. Corn and grain prices increased with the demand for ethanol production. This increase left cattle producers searching for cheaper food sources for cattle. The availability of ethanol coproducts, including wet corn distillers grains (WDG), offers a means to offset total corn use and reduce the cost of cattle feed. Although WDG are economical alternatives to corn and grain, these new dietary feeds might increase the levels of disease-causing bacteria in bovine feces or increase the antibiotic resistance of bacteria inhabiting the bovine intestine. The presence of more pathogens in a calf’s gastrointestinal tract usually means more pathogens shed in the feces, which in turn increases the risk of contaminated meat or contamination of plant crops fertilized with cattle manure. Antibiotic-resistant pathogens can also be transmitted from cattle to people, complicating treatment of bacterial infections in humans. Vigilance by cattle producers and adherence to federal food safety protocols at harvest generally keep bacterial pathogens away from our dinner plates, but antibiotic resistance is difficult to combat. A 2010 study documented greater amounts of Escherichia coli O157:H7 in feces from cattle that ate WDG.1 Our study sought to determine whether cattle fed with WDG excreted more Salmonella and whether Salmonella bacteria isolated from such cattle were more resistant to antibiotics.
Background

Despite continual improvements in food safety protocols, we are all at risk of exposure to Salmonella enterica, a gastrointestinal pathogen that causes diarrhea. According to the Centers for Disease Control and Prevention (CDC),2 at least one Salmonella outbreak has occurred every year for the past 5 years. These outbreaks affect more than 40,000 people in the United States every year. Salmonella is found in a variety of products, including meat, eggs, tomatoes, sprouts, and even peanut butter and pistachios. Ingesting products contaminated with Salmonella leads to a condition called salmonellosis. This foodborne disease causes diarrhea and, if left untreated, a potentially deadly infection throughout the body. As with most bacterial infections, antibiotics can treat Salmonella infections. Unfortunately, some Salmonella species can develop or acquire defense mechanisms against these drugs. To make matters worse, different species of bacteria can, through a process called bacterial mating, share the genes that allow resistance to drugs. Therefore, other bacteria living in the bovine intestines (e.g., E. coli and Klebsiella) can benefit from Salmonella’s genetic hard work.

Figure 1: How dietary changes in cattle could influence antibiotic resistance in human pathogens
(Figure 1 flow: feeding of distiller’s grain byproducts → increased shedding of resistant Salmonella → larger antimicrobial-resistant pathogen population → increased contamination of crops, greater prevalence of human foodborne disease, and more complications from bacterial infections.)
Different strains of bacteria can resist the same drug in many different ways. Penicillin, for example, prevents bacteria from making a cell wall, without which they will die. One strain may produce an enzyme that breaks down the penicillin, rendering it useless, whereas another may change the makeup of its cell wall so that penicillin can’t bind to it. Both strains are resistant to the penicillin, but they use distinct mechanisms to evade the drug. Both methods of resistance result from products of different resistance genes, which remain as part of the bacterium’s DNA. By determining which antimicrobial resistance genes are present in a bacterial population and how they transfer between bacteria, we may be able to prevent further resistance. The presence of antibiotic-resistant organisms in food animals represents a risk to human health (Figure 1). Feeding cattle with WDG may affect the prevalence of pathogens such as E. coli O157:H7.1

Methods
We estimated the associations between feeding WDG, fecal shedding, and antimicrobial resistance of S. enterica in commercial cattle feedlots. We took fecal samples from cattle at six feedlots in the Texas Panhandle (three that fed cattle with WDG and three that did not). From each feedlot, we randomly selected six pens. We collected feces from 10 fresh fecal pats in each pen and from them grew cultures of Salmonella. We therefore tested 360 fecal samples. We performed standard antimicrobial susceptibility tests on every Salmonella isolate.3 My role in this project was to determine which specific genes made the Salmonella isolates resistant. To do this, I used the polymerase chain reaction (PCR) to identify specific genes. PCR uses two short pieces of DNA, called primers, that mark the beginning and end of the gene of interest. To determine whether resistance genes were present, I combined DNA from resistant Salmonella isolates with primers for specific genes. If the gene is present, PCR multiplies it, and the multiplied DNA becomes visible under UV light after separation by a process called gel electrophoresis. We used PCR to test for three genes that can make Salmonella resistant to chloramphenicol: cmlA, catA, and flo.

Results
Of the 120 Salmonella isolates that we cultured, 22 were resistant to at least one of the following antibiotics: chloramphenicol, tetracycline, sulfisoxazole, streptomycin, and kanamycin. Feeding cattle with WDG did not substantially affect the prevalence of Salmonella in fecal pats, but it did increase the likelihood of resistance to certain antibiotics, including sulfisoxazole, tetracycline, and streptomycin (Figure 2). Interestingly, some
Salmonella isolates from feedlots that fed WDG were resistant to chloramphenicol (Figure 2). This was not the case in feedlots that did not feed WDG. Feeding cattle with WDG also increased the likelihood that Salmonella isolates would be resistant to multiple antimicrobial drugs (Figure 2). Seven of the 120 Salmonella isolates were resistant to chloramphenicol. However, none of these isolates tested positive for cmlA or catA (data not shown). We are currently testing the isolates for the presence of the flo gene.

Discussion
We were initially intrigued that seven of our Salmonella isolates were resistant to chloramphenicol. More than 30 years
ago, the U.S. Food and Drug Administration banned the use of chloramphenicol in food-producing animals such as cattle because of concerns about a rare but deadly side effect of the drug in people. Even though this drug has not been used in food animals for decades, resistance to it persists. Two explanations might account for this finding: First, bacteria evolve rapidly. Selection pressure means that bacteria keep only those genes that help them survive. The closer on the genome that an unnecessary gene is to an essential gene, the more likely the unnecessary gene is to remain. The gene for chloramphenicol resistance might be near another gene that the bacteria need to survive, so the chloramphenicol resistance gene survives. We did not investigate this possibility. Second, the isolates are resistant to another antibiotic similar to chloramphenicol
and are thus resistant to both antibiotics. Drugs similar to chloramphenicol, effective but without its dangerous side effects, have since been created. One of these, florfenicol, is approved for treating sick cattle. Bacteria that are resistant to florfenicol are also often resistant to chloramphenicol. For this reason, we are currently testing our isolates for the presence of a gene (flo) that makes Salmonella resistant to florfenicol.3 We are also testing our isolates for resistance genes to tetracycline, sulfisoxazole, and streptomycin.

Conclusion
With soaring oil prices brought on by oil scarcity and ongoing political unrest among major oil exporters in the Middle East, the pressure to find more economical alternatives to fossil fuels will continue. The demand for corn for biofuels continues to rise. Cattle producers will probably continue to use WDG for feeding cattle. The U.S. Department of Agriculture has already forecast an even greater demand for ethanol in 2011.4 In exploring the feasibility of replacing oil with corn-based ethanol, we should consider how using ethanol-based feeds can affect our food and its safety. We might be favoring our pocketbook at the expense of our health.4

Acknowledgments
The Department of Veterinary Pathobiology and Texas AgriLife Research funded this project for S.D. Lawhon and J.B. Osterstock. I thank Courtney Lowrance, Jennifer Lewis, and Ted McCollum for assistance in sample collection and Janell Kahl, Doris Hunter, Christine Shields, and Scott Stevens for help with sample processing. I also thank Ben Weinheimer, Texas Cattle Feeders Association, for assistance with identifying feedlots for participation, and the managers and employees of participating feedlots.

References
1. Jacob ME, Paddock ZD, Renter DG, Lechtenberg KF, Nagaraja TG. Inclusion of dried or wet distillers’ grains at different levels in diets of feedlot cattle affects fecal shedding of Escherichia coli O157:H7. Applied and Environmental Microbiology 2010;76:7238–7242.
2. Centers for Disease Control and Prevention. “Salmonella food poisoning.” Available from http://www.cdc.gov/salmonella/index.html. Accessed 2010 March 5.
3. Clinical and Laboratory Standards Institute (CLSI). Performance Standards for Antimicrobial Disk and Dilution Susceptibility Tests for Bacteria Isolated from Animals; Approved Standard—Third Edition. CLSI document M31-A3. Wayne, PA: CLSI, 2008.
4. Financial Times. “Commodities.” Available from http://www.ft.com/cms/s/0/f4189e60-404a-11e0-9140-00144feabdc0.html#axzz1Ev6DoyPo. Accessed 2010 March 3.
(Figure 2 panels: proportion of isolates by number of antimicrobials, and proportion of resistant isolates by antimicrobial, for No WDG versus WDG feedlots; group differences significant at P ≤ 0.01. x: model would not converge because of a lack of observations in the "No WDG" group.)
Figure 2: Antimicrobial resistance in Salmonella isolates. (A) Distribution of antimicrobial multidrug resistance to common antimicrobials among Salmonella isolates obtained from commercial feedlots as a function of feeding WDG. (B) Proportion of isolates resistant to common antimicrobials among Salmonella isolates obtained from commercial feedlots as a function of feeding WDG.
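The headline counts reported in these results (22 of 120 isolates resistant to at least one drug, and 7 of 120 resistant to chloramphenicol) can be given rough uncertainty bounds. The sketch below uses a plain Wilson score interval for a binomial proportion; it ignores the clustering of samples within pens and feedlots that the study's actual statistical models would account for, so it is illustrative only, not a reproduction of the authors' analysis.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# Counts reported in the text: 120 isolates cultured in total.
counts = {"resistant to at least one drug": 22,
          "resistant to chloramphenicol": 7}
for label, k in counts.items():
    lo, hi = wilson_ci(k, 120)
    print(f"{label}: {k}/120 = {k / 120:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The Wilson interval is chosen here only because it behaves sensibly for small counts like 7/120; any interval that accounted for the pen- and feedlot-level clustering would be wider.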
Applying Forensic Engineering to the Construction Industry

American infrastructure has received poor grades from the American Society of Civil Engineers. By analyzing notable past failures, engineers can identify critical problems that should be avoided in future or corrective engineering projects. By Robert Pinkston

According to the American Society of Civil Engineers (ASCE) Report Card for America’s Infrastructure, “years of delayed maintenance and lack of modernization have left Americans with an outdated and failing infrastructure that cannot meet our needs.”1 In the last 20 years, the reports have indicated an overall decline in quality and a lack of improvement. In the most recent report, published in 2009, ASCE gave the United States a D: poor infrastructure. What infrastructure characteristics does ASCE evaluate as poor? Fifteen categories currently make up the ASCE grade report: aviation, bridges, dams, drinking water, energy, hazardous waste, inland waterways, levees, public parks and recreation, rails, roads, schools, solid waste, transit, and wastewater. Although not every aspect of the U.S. infrastructure can be addressed at once, this article deals with infrastructure in relation to building construction. The ASCE categories all encompass at least one type of building. To improve our nation’s infrastructure, we first must ensure that any general contractors who repair old or create new infrastructure know and apply the critical aspects of building construction.

Current Situation

Methodology

I used a case study analysis to acquire information regarding critical problem areas in building construction. I analyzed 13 failures on the basis of the cause of failure, fatalities, and injuries. I identified three types of causes of failure: construction, engineer, or owner-related errors. The construction category is further partitioned into more specific errors, as shown in Figure 1. The projects chosen, listed in Table 1, exemplify some of the best-known and fatal (or potentially fatal) building construction disasters of the last century. I studied two divisions of projects: (1) apartment or office-style buildings and (2) buildings such as warehouses or public recreation areas. I chose these particular projects because they embody important lessons that can apply to all building construction. Kemper Arena and Ronan Point are two different types of building construction, yet the same error occurred in both failures. Kemper Arena did not have enough roof drains, and the wind factor for high-rise buildings at Ronan Point was inaccurate. Both errors are directly related to building code. Not all 13 projects studied in this research effort are explicitly infrastructure construction issues, but the lessons that we can learn from their failures are relevant. Using several approaches, I gauged the failure of each project. Table 1 briefly describes the failures. Fatalities were not the only measure of failure in building construction; several projects revealed neither fatalities nor injuries. However, major financial damage involves all parties. Although not every project studied was fatal, they all had the potential to be. For example, in the Hartford Civic Center collapse, Norbert Delatte Jr. notes that “Had the failure occurred just a few hours before, the death toll might have been hundreds or thousands.”2(p.174)

Classification and Detail of Errors
The public, which uses these buildings, needs to understand the errors, and I will briefly describe what each indicates. Inconsistencies with submittals refers to a paperwork problem. Submittals are documentation regarding many aspects of a construction project, including requests for information, applications for payment, and shop drawings. Shop drawings (blueprints) caused the most problems.
Next is nonadherence to building code and permits. A mandatory inspection of the foundation of a building is an example of a building code requirement. Nonadherence to building code occurs when inspections are not performed or permits are not obtained. Permits are closely tied to building codes and usually require specific blueprints (indicating that a project conforms to building codes) that must be submitted to a city council. Poor construction quality occurs when subcontractors do substandard work. Contractors should perform all work in accordance with reliable quality standards. Poor construction quality is related to the next error, which often serves to compound it: poor subcontractor management by the general contractor. The general contractor must supervise the subcontracted work to ensure that the project is completed on time and on budget. Poor management will occur if the general contractor has no personnel on the jobsite or if subcontractors are communicating with a party other than the general contractor. The general contractor must also stay alert for potential problems in quality. Another error involved not following plans and specifications. For example, if the general contractor makes an unauthorized change to the specifications without approval by the engineer of record or the owner, the error can have unintended consequences. Unauthorized changes also overlap with nonadherence to building code, such as when specifications call for inspections by certified engineers or laboratories and contractors ignore this rule. Finally, gross negligence occurs when a general contractor discovers a serious problem but minimizes it to save cost or time. Delatte points out this error in the construction of the Hartford Civic Center Stadium: “Even though the architect recommended that a qualified structural engineer be hired to oversee the construction, the construction manager refused, saying that it was a waste of money.”2(p.182)

Table 1: Brief description of the 13 failures, including fatalities and injuries

Results
Causes of Error
All six errors exemplified at least one of four root causes. These roots are the critical problem areas that lead to failure and could occur on any jobsite: lack of responsibility, poor communication, poor organization, and lapses in engineering ethics.
Lack of responsibility is at the root of errors that result in failure. When projects change owners or general contractors, important details about the project and its construction history could be lost or forgotten. This loss or omission opens the door for overlooking inconsistencies with submittals or details concerning a project’s plan and specifications. Nonconformance with a building code could result. The errors can increase quickly. For example, the Hyatt Regency Crown Center in Kansas City, Mo., is considered the worst structural failure in U.S. history. A combination of errors
triggered this accident: a lack of responsibility in the form of changes in design, not checking building code conformance, and poor recordkeeping of important changes in plans and specifications. Delatte notes, “It is important for all parties to understand fully and accept their responsibilities in each project.”2(p.19)
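The cross-case tally behind an error distribution like Figure 1 can be reproduced mechanically. The per-project error labels below are invented placeholders rather than the study's actual coding (which Table 1 summarizes), so the output illustrates the bookkeeping, not the reported percentages.

```python
from collections import Counter

# Hypothetical error coding for a handful of the case studies; the labels
# mirror the six error types discussed in the text, but the assignments
# are illustrative, not the study's actual classification.
case_errors = {
    "Hyatt Regency":         ["plans/specs", "building code", "submittals"],
    "Hartford Civic Center": ["gross negligence", "subcontractor mgmt"],
    "2000 Commonwealth Ave": ["subcontractor mgmt", "construction quality"],
    "Kemper Arena":          ["building code"],
    "Ronan Point":           ["building code"],
}

# Count every error occurrence across all projects, then report each
# category's share of the total, as in a Figure 1-style breakdown.
tally = Counter(err for errs in case_errors.values() for err in errs)
total = sum(tally.values())
for error, count in tally.most_common():
    print(f"{error:22s} {count}  ({100 * count / total:.0f}% of all errors)")
```

Because one project can exhibit several errors, percentages are taken over total error occurrences rather than over the 13 projects.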
Poor communication finds its way into many aspects of construction. Inconsistencies with submittals, poor subcontractor management, and failure to follow plans and specifications can result from lack of communication between appropriate parties and from documents that lack completeness, clarity, and accuracy. A general contractor should have at least one employee in constant communication with the subcontractors for any one job. If the general contractor’s employee and subcontractors do not maintain communication, actual construction progress cannot be tracked and problem areas noted when they occur. Inconsistencies with submittals will occur if the general contractor, owner, architect, engineer, and subcontractors do not properly communicate. This error occurred in the collapse of a high-rise apartment building in downtown Boston at 2000 Commonwealth Avenue. It was a 16-story apartment building with many subcontractors performing work. The general contractor had only one onsite employee, and the subcontractors issued contracts directly to the owner. This practice suggests why little or no communication occurred between the general contractor and the subcontractors. The same error occurs when keeping track of changes relevant to plans and specifications. Changes must be communicated and noted to ensure that they are safe and conform to building code requirements.

“Lack of responsibility is at the root of errors that result in failure”
Organization may seem like the most trivial of all problem areas—or as if it shouldn’t even be an issue—yet serious errors are directly related to it. The most common of these are inconsistencies with submittals and nonadherence to building code and permits. Even if appropriate communication occurs for submittals to be addressed and issued, poor recordkeeping can potentially nullify the benefit of good communication. The submittal could be lost or forgotten in channels between the general contractor, architect, owner, and engineer. The same concept holds true for building codes and permits.

Ethics

The final problem in many construction failures can be traced to lack of adherence to engineering ethics. Ethical violations surface in construction errors that show gross negligence, but ethical violations can also occur in plans, specifications, and poor construction quality. Every general contractor should know not to change the structural aspects of the plans and specifications without explicit approval from the certified engineer of record. This ethics-based rule extends to building codes as well, especially on large projects, which require critical inspections for construction to proceed to the next phase. Poor construction quality can begin as an ethical issue for subcontractors who knowingly perform substandard work, but ethical practice becomes a general contractor’s responsibility when subcontractors’ work is not checked. General contractors should try to avoid the potential risks associated with this situation by always ensuring that work performed by subcontractors is adequate.

Figure 1: Errors contributing to failure. Not following plans/specifications, 29%; poor construction quality, 29%; nonadherence to building code/permits, 19%; inconsistencies with submittals, 9%; poor subcontractor management, 9%; gross negligence of pressing issues, 5%.

Conclusion
The loss of human life is the greatest failure that can occur in construction. When repairs are made and new facilities are built for America’s infrastructure, engineers should take great care to ensure that the job is done correctly and safely. Engineering ethics and the education that engineers receive require no less. Through awareness of these problem areas during construction and knowledge of how to avoid them, general contractors will decrease the risk of failure for the structure and make it a safer environment for construction workers as well as future occupants. Following accepted rules, ethics, and procedures will
lead to improved infrastructure and fewer injuries and fatalities.

References
1. American Society of Civil Engineers. “Report Cards.” Infrastructure Report Cards. 2009. Available from http://www.infrastructurereportcard.org/report-cards.
2. Delatte, Norbert J. Jr., Ph.D., P.E. Beyond Failure: Forensic Case Studies for Civil Engineers. Reston, VA: ASCE Press, 2009.
Memorable Logo Design
A well-designed and identifiable logo is essential for the success of an organization or a business. The characteristics of a successful logo can be identified through literature review and surveys. This information may streamline the formation of a business’s identity, making the process more cost-effective for new and nonprofit organizations. By Lori Lampe
“Let’s all admit that the number of impressions we make is far less important than the quality of impression we make.”1 Joe Duffy’s explanation of the importance of good design for branding and logos is the primary subject of my research. A logo’s design is important to the overall appearance of a business or organization, but what makes a logo successful and memorable? To answer that, I investigated specific design characteristics of effective logos. From previous observation, I believe that a business’s success can be attributed to the strength of its identity. Nonprofit organizations typically do not have the resources to create well-designed identities, not having room in their budgets for graphic designers. To help investigate this problem, I researched the basic design principles for successful logos. This endeavor will help nonprofit organizations improve the communication of their cause to the public. My literature analysis included themes on successful logo and brand design, colors and symbols as important elements in graphic design, case studies on specific logos, and graphic design for nonprofit organizations. I also looked into successful logo and brand design from the perspective of key graphic designers. In his book Emotional Branding, Mark Gobé noted that “emotional branding is about
crafting an intimate and reassuring experience for each customer.”2(p.103) He states that a well-designed identity is unforgettable and emotionally charged, much like the Apple logo. My research has exposed several design aspects that aid in recognizing particular logos. I plan to investigate two of these, color and symbol, in more depth because I believe that one or both may be the most memorable aspect of a logo. A logo’s color strongly influences a person’s perception of a particular company or organization, based on emotional and psychological effects associated with that color. A logo’s symbol is also extremely significant in the memory it sparks and the lasting impression it leaves. Two case studies have furthered my knowledge in graphic design. The logos for The Bahamas and Project 7 are successful identities created in the late twentieth and early twenty-first centuries. Analyzing these logos helped me in my research for successful logo design characteristics. I also looked into nonprofit organizations’ use of graphic design, finding that a successful logo design can truly help the nonprofit influence its community and inform community members on specific issues relevant to them. In his foreword to Designing for the Greater Good, David Hessekiel stated that “after nearly a decade of studying cause marketing campaigns, I know that strong design is absolutely crucial
to success. A good design team can breathe life and power into work that might otherwise be lost in the twister of advertising messages that swirls around us from morning to night.”3(p.7) Having carried out a literature review, I can affirm that a well-designed and identifiable logo is a key ingredient for the success of a particular business or organization. Mark Gobé is just one of several authors stating that a company’s identity needs to personally connect with each consumer in an invigorating way. People need to understand what the company or organization represents and need to be able to trust it. The power of a simple logo can accomplish all this.

Methods
After reading several books on successful logo and brand design, I found that the best way to determine the memorability of a logo was to survey a large group of people. To invite people to take the survey, I sent an e-mail to friends and family, as well as a message on Facebook. The message gave some general information about this research project and the survey. The survey included optional questions regarding the participants’ race, ethnicity, and sex, which I recorded to help determine biases in the results. Surveying as many people as possible from different ages, ethnicities, and backgrounds allowed me to generate an overall consensus. I also asked everyone receiving the e-mail or Facebook message to pass it along to their friends and family, particularly ones of different ages, ethnicities, and backgrounds. Although most message recipients were college students, the media I used were the only practical means available to invite a large and varied group. I created and distributed the survey by using SurveyMonkey, a website that can be used for research, event planning, customer feedback, and other survey needs. Originally I was going to leave the survey open until I had received enough responses for reliable results; however, it was open for only one month, because participation slowly declined. I officially closed the survey when I received 102 responses, the number necessary to achieve statistical reliability.

Figure 1: The original logo on the left was shown in the survey on a separate page before the series of altered logos. Participants had to choose the original logo from the mix of altered logos on the right. (Source: http://www.betteratmservices.com/)

To begin the questions for a specific logo, the survey asked participants to determine what they think is the most defining characteristic of the logo. In other words, what made this particular logo memorable? The participants had five choices: color (of the font or symbol); font (type style); form (arrangement of symbol and text); symbol (picture, image, or graphic); and other, where they could write out what they thought was the most defining characteristic. These questions had two purposes: to get a clear idea of what the participants thought was the most
important characteristic (which could differ from what the overall results show) and to show the participants a glimpse of the original logo before they saw an altered version on the next page of the survey. I also designed this question to spur participants' thinking and play on the idea of first impressions. The participants had never seen most, if not all, of the logos used for this survey. The survey asked another type of question, called "Where's Waldo." I chose a few logos to manipulate so that participants could see other versions of them. I altered the logos with Adobe Photoshop, image-editing software for making subtle or drastic alterations to pictures. I made subtle changes to a specific characteristic of each logo, such as changing the symbol of the Better ATM Services logo. This can be seen in Figure 1, where the original logo is larger on the left and the altered logos are on the right. On a page separate from the original logo, the survey asked participants to pick out the original logo, which was mixed in with several altered versions. This task caused participants to examine the choices to notice the differences. Another question for testing characteristics of a memorable logo involved my altering that specific logo in each area of design: color, font, symbol, and form. For example, I made one color alteration, one font alteration, one symbol alteration, and one form alteration for the A-Town logo. The survey asked participants to determine how each altered logo was different from the original. It was basically a game of memory because they saw the original logo just a moment before and then had to remember exactly what it looked like. Each colored logo can prompt a variety of memories, aiding the first impression of it. By slightly shifting the colors of some logos, I hoped to use this survey question to change those original memories and confuse the participants. I applied this idea to the other design characteristics that I transformed (font, symbol, and form). The survey also asked participants how subtle the differences were and how easily recognizable they were. I used these questions for personal purposes: to possibly reveal areas in the survey where changes to logos did not affect participants' memory.

Results & Conclusions
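The methods above close the survey at 102 responses, described as the number necessary for statistical reliability. As a rough sketch only (this is not the author's stated calculation; the 95% confidence level and worst-case proportion are assumptions), the standard margin-of-error formula for a sample proportion shows roughly what precision 102 responses provide:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion at ~95% confidence (z = 1.96).

    p = 0.5 is the worst case, since it maximizes p * (1 - p).
    """
    return z * math.sqrt(p * (1 - p) / n)

# With n = 102 responses, a reported percentage such as "68.2% chose
# the symbol" carries roughly a +/- 9.7 percentage-point margin of error.
print(round(margin_of_error(102) * 100, 1))
```

Under these assumptions, 102 responses keep the margin of error just under 10 percentage points, which is one conventional threshold for an informal survey of this kind.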
The initial logo that participants saw was that for Better ATM Services. Results of the survey revealed that 68.2% of participants thought that the symbol was the most important characteristic of this logo. Specific aspects that some participants noticed were the 3D nature of the logo and the way the image was breaking out of its borders. All other answer choices were split between color,
Explorations | Fall 2011 63
Figure 2: The final logo design for Ripple Africa, a nonprofit organization based in the United Kingdom. They wanted an identity that would convey the organization's grassroots approach to the people of Africa.
form, combination, and other. For this logo, I altered only the symbol. When asked to pick out the original logo from the mix of altered logos, 44.2% of participants chose correctly, and 34.9% felt fairly confident that they were right. An interesting endeavor would have been to see which of the correct respondents knew that they had the right answer, as opposed to those who just happened to guess it. Nonetheless, I believe this was a successful logo alteration because of the distribution of responses. The last question of the survey dealing with logos was an ultimate memory game, which included all the logos that participants had seen on this survey. The four logos recognized most often were Ideapark with 97.5%, A-Town with 86.4%, Ticklefish with 82.7%, and KAUST with 81.3%. The only one of these shown as an altered version of the original logo was Ticklefish, for which I changed the font. The second-highest score for altered logos was Better ATM Services, a symbol change, with 76.3%. For both of these altered logos, participants felt they had a good idea of what was changed. Interestingly, the two altered logos that participants remembered best were the first and last logos they saw. I also did not expect that participants would most easily identify the logos that I had altered in several different ways. I had altered the two highest-ranking logos, Ideapark and A-Town, in four different ways (symbol, color, form, and font). Although I now realize that I could have refined my research methods in several ways to achieve more valid results, my survey
still yielded the answer to my original hypothesis that I had hoped for. The most defining characteristic for six of the seven logos that participants saw was the symbol. Although a combination of elements was the second-most-defining characteristic, the next single aspect after symbol was form, the arrangement of symbol and text. I had expected to find color among the top two important aspects of a logo; however, because the color differences were hard to notice, we can conclude that the distinctiveness of color in a logo is important to the overall originality of the design. At the end of the survey, an anonymous participant noted being red-green color-blind, and so this person had trouble noticing slight color differences in the alterations. If a logo is going to have more than one color, it might be best to have distinct colors that would not affect most color-blind people. From these results, I have created a general list of principles to follow when designing a logo:

1. The logo should have a prominent symbol that is bold and easily recognizable.
2. Avoid intricate details, and instead take advantage of simple details, such as a clever name, typography, or symbolism.
3. The color should be distinctive to the organization or company so that it gives the logo individuality.
4. Stick to simple vector art that is clean and uncluttered.
5. The design needs to be versatile enough to fit on a variety of different media and be adapted for a variety of uses.
All these characteristics are derived from the survey results on the particular logos used, as well as insights from key authors and designers in the graphic design industry. On the basis of this list of characteristics for successful logo design, I created a logo for a nonprofit organization based in the United Kingdom, called Ripple Africa. After consulting with the team on what they wanted their logo to represent, I created a result that satisfied their needs and should be a memorable logo. See Figure 2.

References
1. Duffy J. Brand Apart. New York: One Club Publishing, 2005.
2. Gobé M. Emotional Branding. New York: Allworth Press, 2001.
3. Top P, Cleveland J. Designing for the Greater Good. New York: Collins Design, 2010.
Anas Al Bastami Anas Al Bastami is a junior Electrical and Computer Engineering major from Syria. He is studying at Texas A&M's campus at Qatar. Although his background is in engineering, Al Bastami became interested in international relations while taking a political science course, and was inspired to continue this research and produce a scholarly paper on conflict analysis. Al Bastami plans to pursue a master's degree in electrical engineering, and possibly a Ph.D., while focusing on applied research in industry.
Alyssa Blessing Alyssa Blessing is a senior Kinesiology major from Georgetown, Texas. She is specializing in motor behaviors. Because of her interest in children’s motor development, Blessing is planning for a career in Pediatric Physical Therapy, and hopes to work with special-needs children. This interest also prompted her involvement in this field of research at Texas A&M and inspired her article on infant motor development for Explorations.
Meet the Authors
Amy Clanton, Laura Harred, Chris Jones, and Devin Light are seniors at Texas A&M University who became interested in math modeling while conducting research in a class taught by Dr. Majid Bani-Yaghoub. Amy Clanton is a Mathematics major from Dayton, Texas. After graduating, Clanton will be student teaching in the fall of 2011. After student teaching, she hopes to attend graduate school in Spring 2012 and eventually become a high school math teacher.
Laura Harred is a Mathematics major from Midlothian, Texas. Harred will be attending graduate school for Biological Oceanography starting in the fall of 2011 at Texas A&M University.
Chris Jones is an Electrical Engineering major from Marshall, Texas. After graduating, Jones hopes to gain experience working in the industry and then possibly attend graduate school to obtain a Masters in Business Administration. Devin Light is a Mathematics major from Boerne, Texas. After graduating, Light will be studying applied math at the University of Washington in Seattle, Washington.
Eesha Farooqi Eesha Farooqi is a sophomore Biomedical Science major from Harker Heights, Texas. After earning her bachelor's degree, Farooqi hopes to attend medical school. She plans to use her medical degree to establish free health programs in her home country, Pakistan. Although busy with her academic studies and future plans, Farooqi also finds time for her passion for photography. She enjoys seeing the world from new perspectives and finding beauty in the smallest things, and wants to share this through her photography.
Amy Clanton, Laura Harred, Chris Jones, & Devin Light
Jordan Green & Chris Pannier Jordan Green and Christopher Pannier are both senior Nuclear Engineering majors. Green is from Lafayette, Louisiana, and Pannier is from Pembroke Pines, Florida. Both are Presidents' Endowed Scholarship and National Merit Scholar recipients, and Pannier received the 2011-2012 Sophomore Undergraduate Scholarship from the American Nuclear Society. Green is graduating in May 2011 and will be an engineer at the South Texas Project Electric Generating Station. Pannier also plans to be a professional engineer and continue to participate in nuclear materials science research. Their joint research project was inspired by a question posed by the engineers at the South Texas Project. In his spare time, Green writes screenplays and is a sports and comic book fan. Pannier is an aviation and aerospace enthusiast.
David Glasheen David Glasheen is a junior from Lubbock, Texas. Glasheen is double majoring in History and Russian, and plans to attend law school in the future. He has always been passionate about space and space exploration, which led him to take the History of Space Exploration for his senior history seminar. He began focusing on the Corona program after undertaking original research on recently declassified documents in the Eisenhower Presidential Library. Glasheen is especially grateful to Dr. Coopersmith, his history seminar professor, who was instrumental in the research process.
Yvette Halley Yvette Ashton Halley is a senior Biomedical Science major from Spring, Texas. She will be attending Texas A&M in Fall 2011 as a graduate student in the genetics department. In 2011, Halley presented her undergraduate research in genetics during Texas A&M's Student Research Week, winning second place, and at the Texas Genetics Society's annual meeting, where she took first place. Halley was inspired to write her article on animal conservation because her advisor, Dr. Jan Janecka, is passionate about feline conservation, and snow leopards in particular.
Lori Lampe Lori Lampe is a senior Environmental Design major graduating in Spring 2011. Lampe has a passion for graphic design and is seeking a career as a graphic designer or architect. She also wished to give back to nonprofit organizations, and felt the best way to do so was to research memorable logo design and create a list of common characteristics that can be used by any company or organization. This research led to her article on graphic design for Explorations.
Brett Christopher Lindell is a senior Maritime Studies major at Texas A&M’s Galveston campus, where his classroom is literally the ocean. Lindell’s creative work was inspired by his passion for all aspects of people’s interactions with great bodies of water, and especially the small boats which are crucial to maritime cultures. Lindell decided that rather than just read about these boats, he would recreate the experience by building one himself. He selected the famous Banks dory as his model, and chronicled his journey in a creative piece for Explorations. Lindell’s future plans are not set in stone, but he has not yet ruled out world domination.
Jacob Patapoff Jacob Patapoff is a freshman Environmental Design major from McKinney, Texas. In addition to his bachelor’s degree, Patapoff also plans to pursue a master’s degree. He hopes to work for an architectural firm and become a licensed architect. Patapoff is inspired by art of all kinds; his artwork featured in Explorations was influenced by his passion for photorealism and conceptual art.
Robert Pinkston Robert Pinkston is a senior Construction Science major from Temple, Texas. He plans to finish his Construction Science degree and also begin pursuing a degree in Civil Engineering. Pinkston's article on the construction industry was inspired by his interest in engineering and construction-specific research. He is grateful to Dr. John Nichols for encouraging his interest in engineering, as well as to his faculty advisor, Dr. Boong Ryoo, for mentoring and supporting his research interests.
Robert Scoggins Robert Scoggins is a junior Communication major from Austin, Texas. His inspiration for writing about Net Neutrality came from his desire to write about an influential topic in contemporary public policy. After graduating, Scoggins hopes to obtain a Masters in Public Affairs with a possibility of also obtaining a Masters in Business Administration.
Santiago Ramirez Santiago Ramirez is a senior Biomedical Science major from Bogotá, Colombia. His passion for research in Microbiology started when he began working in Dr. Sara Lawhon's lab. He enjoyed researching Salmonella because it blended his interests in antibiotic resistance, the oil crisis, and bacteria. After graduation, Ramirez plans to apply to medical school and pursue a career in Family Medicine. Ramirez says that, as his most influential mentor, Dr. Lawhon has helped him gain knowledge of skills and methods in Microbiology that will help him as a future physician.
Kevin Stiles is a senior Biology major from The Woodlands, Texas. He has always been interested in speciation and the forces that cause it. He decided to go on a trip to Costa Rica because it gave him the chance to fulfill his dream of studying organisms in a tropical environment. He chose to work with dart frogs because he wanted to try something different and learn more about an organism that is abundant in Costa Rica. Stiles would like to thank Dr. David Baumgardner and Dr. Angela Witmer for supervising the trip to Costa Rica and encouraging him in his research. Stiles plans to attend graduate school, obtain his Ph.D. and conduct research in Genetics and Ecology. He hopes to ultimately teach at the collegiate level and encourage undergraduate research.
Leslie Swirsky Leslie Swirsky is a sophomore Biomedical Science major from Lake Jackson, Texas. She became interested in researching osteosarcoma while interning at IsoTherapeutics Group under the supervision of Dr. Mark Lenox and Keith Frank. In addition to her interest in veterinary medicine and animals, Swirsky is also interested in art. After graduating, she hopes to attend Texas A&M School of Veterinary Medicine and start her own practice as a veterinarian or open an animal shelter.
Justin Whisenant Justin M. Whisenant is a senior from College Station, Texas, double majoring in Forest Management and Spatial Science. He first became interested in studying the forest ecosystem when he lived in Alaska for two years. He became involved in Dr. Jason Vogel's forest ecosystem research and then conducted an experiment under him as an Undergraduate Research Scholar. After graduating, Whisenant hopes to work for the U.S. Forest Service as a forester because he truly believes that the United States Forest Service is one of America's key players in working toward sustainable natural resource management.
Josh Wilson is a sophomore Molecular and Cell Biology major from San Antonio, Texas. His interest in behavioral biology and his curiosity about female swordtail fish mating preferences inspired his work. After graduating, Wilson plans to attend medical school and pursue a career in Neurosurgery.
Blanca Tovar is a sophomore General Studies major from Fort Worth, Texas. Her inspiration for her work comes from the Mayan culture, but her true inspiration is her family. Her second family, the DREAMERS (Development, Relief and Education for Alien Minors), also influenced her work as they show the strength of spirit which underlies her piece. In the future, Tovar hopes to be a Visualization major, continue making art, and pursue a career that will allow her to fulfill her artistic dreams.
Matt Young, born and raised in Bryan, Texas, is a senior Communication major at Texas A&M University. He hopes that his photography and design will add a little bit of beauty to the world every day, presenting a new way of looking at the more poignant parts and patterns of life: the people, the places, the objects, and the implications of experiencing them; their tone, their texture, and their trailing scent.
Submission Guidelines
WHO CAN SUBMIT A PROPOSAL
Any undergraduate student currently enrolled at Texas A&M University who is actively pursuing research, creative, or scholarly work or has done so in the past. All submissions must be sponsored or endorsed by a faculty member at Texas A&M University. Explorations publishes student research and scholarly work from all disciplines. FORMAT FOR PROPOSALS
When submitting your proposal for consideration, please include the following:
• Name
• Email address
• Phone number
• Department
• Classification
• Area of research
• Name and contact information of your faculty advisor/mentor
• Title of the proposed project
• Your contribution or role in the research
• An abstract of no more than 250 words
The proposal should provide an overview of the project's objectives and methods. It should also include a description of the project's importance to the student's field of study and to others outside the field.
NOTE: Because Explorations is a multi-disciplinary journal targeting a general audience, please use non-technical language in your proposal. Necessary technical words must be defined. For examples of appropriate abstracts, please see [attachment or link to our sample abstracts]. Additional information on the journal is available at http://honors.tamu.edu/Research/Explorations.html.
FORMAT FOR CREATIVE WORKS
• Only one submission per student.
• All creative work requires a faculty endorsement. A faculty member in the field of your work must approve your piece for publication in a serious scholarly journal. If you have difficulty locating a faculty member to review your work, Explorations may be able to provide suggestions.
• All genres of creative work are welcome; however, due to the requirement for a faculty endorsement, please remember that your submission should relate to creative work currently being taught at the university.
• Your work must be accompanied by a descriptive scholarly sidebar of 500-750 words. The sidebar must include:
  - Why did you choose this topic?
  - Who are your creative influences?
  - How has this style or medium helped you communicate your idea?
  - What studies were done to develop your piece? How did they contribute to its persuasiveness, depth, vision, or styling?
Please limit prose and poetry submissions to 3500 words. This word limit includes your scholarly sidebar, a minimum of 500 words.
DEADLINE FOR SUBMISSIONS: To be announced. See http://honors.tamu.edu/Research/Explorations.html
Proudly Supported by