THE TEXAS A&M UNDERGRADUATE JOURNAL
IN THIS ISSUE ❖ Genetic Factors Associated with Coat Color and Health in White Tigers ❖ Why the Bow Tie? An Interview with President Loftin ❖ Cap-and-Trade and Global Compromise ❖ Using Mutant Mice to Understand Seizures
Fall 2013 | Volume 5
explorations
Cover art courtesy of Amber Shamma, a freshman Visualization major from Friendswood, Texas.
Contact Explorations firstname.lastname@example.org explorations.tamu.edu facebook.com/explorationstexasam twitter.com/ExplorationsUGR soundcloud.com/explorationstexasam 114 Henderson Hall 4233 TAMU College Station, TX 77843 USA Copyright 2013 Texas A&M University All Rights Reserved
Student Editorial Board: Annabelle Aymond, MaryBeth Benda, Callie Cheatham, Aaron Griffin, Samir Lakdawala, William Linz, Madeline Matthews, Matthew McMahon, Hilary Porter, Bobbie Roth
Masthead Design: Andrea Roberts
Page Layout/Design: Annabelle Aymond
Manuscript Editor: Gabe Waggoner
Cover Art: Amber Shamma
Faculty Reviewers: Nancy Amato, Juan-Carlos Baltazar, Sarah Bednarz, Alexey Belyanin, Stephen Caffey, Kevin Cummings, Ashley Currier, Dick Davison, Jose Fernandez-Solis, John Ford, Carlos Gonzalez, Thomas Green, John Hanson, Rodney Hill, Rafael Lara-Alecio, Janet McCann, Christopher Menzel, Rita Moyes, Roger Schultz, Adam Seipp, Alex Sprintson, Susan Stabile, Manuelita Ureta, Kimberly Vannest, Bob Webb, Takashi Yamauchi
Faculty/Staff Advisory Board: Dr. Sumana Datta, Dr. Larry Griffing, Dr. Duncan MacKenzie, Ms. Tammis Sherman, Dr. Elizabeth Tebeaux
A Letter from President Loftin

Dear Readers of Explorations,

Texas A&M is a very special place. I realized shortly after arriving as a freshman physics major in 1967 that this university offers students the best of both worlds: a culture that makes our campus feel small enough so that strangers quickly become friends, but big enough to attract some of the world's top researchers. Not only that, but I could work with many of my professors to conduct major research even as an undergraduate. I did not realize at the time how extraordinary those opportunities were. Looking back, I can state without question that working on important research projects with faculty members as mentors gave me an advantage over graduates of other universities and helped launch my professional career.

When I returned to Aggieland as President many years later, I was pleased to find that while a great deal had changed since I was a student, Texas A&M continues to stand out among other top universities. We are now one of the nation's largest universities, offering more than 120 undergraduate degrees and 240 graduate degrees, but the Aggie Spirit is stronger than ever. We have many more professors today, and they include some of the world's best teachers, scholars, and researchers. They are passionate about what they do and dedicated to helping you succeed. I urge you to take advantage of the many opportunities available to get to know your professors and work alongside them in the laboratory, in the field, or wherever this scholarship leads. Make the most of your time at this special place. Gig 'em!
R. Bowen Loftin ’71
Fall 2013 | Explorations
Table of Contents

Page 1 Water, Chemical Additives, and Their Effect on Shale By Matthew Wiese
Page 11 Synthetic Jet Fuels Produced from Natural Gas By Moiz Bohra and Asma Sadia
Page 21 Mother Nature and the Coming Storm By Rosa Bañuelos
Natural gas is an invaluable natural resource, but its recovery may take a large environmental toll.
Studies on the Qatar campus in collaboration with industry are investigating the potential of gas-to-liquid-based jet fuels.
Emotion, color, contrast, fantasy and strength provide the inspiration for our effect on Mother Nature.
Page 14 Chasing the Sun By Stephen O’Shea
Page 23 “Two if By Sea”: Modern Archaeological Research into Arrival of the First Americans By Thomas Colvin
Page 4 Bottlenose Dolphins and Boat Traffic in the Galveston Ship Channel By Anna Pennachi Heavy vessel traffic can increase the risk of habitat disruption, behavioral changes, or physical harm to dolphins from boat propellers.
Page 8 Developing Minimally-Invasive Biosensors from Fluorescent Dye and Red Blood Cells By Megan Poorman Blood analysis is invaluable in determining a patient’s health but can be invasive unless an alternative can be found.
Explorations | Fall 2013
Creative non-fiction based on interviews with veterans from the Iraq and Afghanistan wars.
How and when did the first Americans settle in the Americas? The theory of coastal migration is gaining traction.
Page 17 Cap-and-Trade and Global Compromise By Phillip Warren and Mariah Lord
Page 29 Mapping Subsurfaces with Marbles and Wrapping Paper By Andrew DeCheck
In an effort to limit the effects of man-made climate change, policy-oriented solutions take a theoretical look at the necessary cuts in carbon emissions.
The task was simple. Create a method of teaching seismology to middle school kids using everyday objects.
Page 32 Aristotle's Poetics as a Framework for Engineering Design By Justin Montgomery
Page 41 Only Human By Peter Wong
Page 47 Fungus Among Us: Hitting a Moving Target By Lauren Puckett
To reconcile novel engineering approaches with a traditional method from Aristotle.
The frailty and potential of human life as evoked by music.
Watching fungal reproduction to gain insights for disease treatment.
Page 35 Smart Materials for Aneurysm Treatment By Jason Szafron
Page 43 Why the Bow Tie? By Madeline Matthews and Matthew McMahon
Page 50 Using Mutant Mice to Understand Seizures By Vivek Karun
A novel treatment for aneurysms aims to resolve the shortcomings of the most current treatment options.
An interview with Dr. Loftin
Leaner and tottering mice carry a unique mutation that makes them a possible model for human seizures.
Page 38 Genetic Factors Associated with Coat Color and Health in White Tigers By Sara Carney Do white tigers have to be unhealthy, or can we breed them to be both healthy and magnificent?
Ever wonder why President Loftin wears bow ties? Find out all about it!
Page 45 Walking a Fine Line By Sara Muldoon What is reality? Musing about what photography captures and what it doesn't.
Page 53 The Pop-Op Morphing Wall: A Fusion of Engineering and Art By William Whitten Using engineering technology to create art that moves.
Fall 2013 | Explorations
Water, Chemical Additives, and Their Effect on Shale
Introduction

What would the world look like without cars, cell phones, computers, and other conveniences that seem so fundamental to life? These examples may seem only distantly connected, but they share a crucial bond: each depends on some form of petroleum for its creation or proper functioning. Today, every country that competes globally relies on petroleum to keep society moving forward. In recent years, increases in worldwide energy demand and decreases in proven conventional reserves have caused the focus of the oil and natural gas industry to shift rapidly. Companies are now exploiting deeper and more dangerous offshore resources to supply the world's continuously growing energy needs. Although these resources offer enticing rewards if properly tapped, they also pose greater risk. The tiniest mistakes in deepwater operations can lead to environmental disasters. Though the chance of these incidents occurring is often low thanks to stringent safety standards, the risk will always remain when drilling in the unforgiving environments where these reserves are located. Considering the risks associated with deepwater development, the industry has begun to rapidly develop a less conventional method of petroleum extraction. The recent combination of two established but independent oil field operations, horizontal drilling and hydraulic fracturing, has allowed
Shale gas has the potential to be a prominent source of clean-burning natural gas for the future. However, there are environmental concerns related to the imbibition of water by shale rock. Several chemicals, such as Aerogel, may be able to decrease the amount of imbibition when applied to shale, rendering shale gas production more environmentally friendly.
By Matthew Wiese
companies to produce clean-burning natural gas from safer onshore sources made up of a type of rock called shale.

Background

Natural gas resources from shale are only commercially accessible now that hydraulic fracturing and horizontal drilling are being used together. Horizontal drilling allows operators to drill along the length of underground rock formations, situated relatively parallel to the surface of the earth. Hydraulic fracturing methods are executed by pumping enormous volumes of water at high pressures, which breaks subsurface rock and creates flow paths known as fractures that are fractions of an inch wide and hundreds of feet long. After fracturing, a large amount of petroleum previously trapped in rock underground has a clear path to a well where it can be produced.1 In the 1980s and 1990s, Aggie petroleum engineer George P. Mitchell and his company pioneered the use of hydraulic fracturing and horizontal drilling together, giving the petroleum industry the ability to economically extract shale gas resources. Also known as an unconventional resource, shale is already changing the world energy market. In addition, a large amount of available reserves coupled with an already extensive energy infrastructure has positioned the United States at the forefront of the economic and technological development associated with natural gas resources produced from shale.
Problem With all their promise, these new resources pose challenges in extraction. Using fluids that contain water can cause two major negative interactions with shale: swelling and sloughing. In the presence of water, some types of shale are absorptive, like a sponge or wick, which causes the rock to swell as water is added. The affected shale can then slough, which means it will fall from the walls of a drilled well. When
drilling occurs, the sloughed rock can fill in the borehole and cause several problems.2 The fluid used in most shale gas fracturing operations consists primarily of water; sometimes it makes up at least 98% of the total fluid used. During fracturing, an incredibly large amount of this water-based fluid is pumped into direct contact with shale rock at the bottom of a well. One area of Pennsylvania has wells fractured in 16 different sections called stages, with each stage taking approximately 225,000 gallons of water. As many as eight wells can exist on a well site, and each field has many sites. Therefore, large amounts of water are now being forced into rock formations known to react in possibly adverse ways when the two come in contact.

"Today, every country that competes globally relies on petroleum to keep society moving forward."
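The scale implied by these figures is easy to verify. The short sketch below simply multiplies the approximate numbers quoted above; the per-stage volume, stage count, and well count are the article's rough figures, not exact field data.

```python
# Back-of-the-envelope water use per well site, using the approximate
# figures quoted above: 16 stages per well, ~225,000 gallons per stage,
# and up to 8 wells per site (illustrative, not exact field data).
stages_per_well = 16
gallons_per_stage = 225_000
wells_per_site = 8

gallons_per_well = stages_per_well * gallons_per_stage
gallons_per_site = gallons_per_well * wells_per_site

print(f"{gallons_per_well:,} gallons per well")   # 3,600,000
print(f"{gallons_per_site:,} gallons per site")   # 28,800,000
```

Nearly 29 million gallons for a single fully developed site, before counting the many sites in a field, is why the fate of imbibed fluid matters both economically and environmentally.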
This concept has led industry researchers to look at how absorbed fluid used to fracture shale might affect natural gas production. Also, leaving these high water volumes in the ground raises questions about how the hydraulic fracturing fluid will affect shale formations after fracturing is complete.3 Such considerations have generated two schools of thought on fracturing fluid absorption by shale formations. Some researchers consider preventing or minimizing fluid absorption by subsurface shale the proper course of action when dealing with a formation after a fracturing operation. These researchers believe that returning the most fracturing fluid possible minimizes fluid volumes that shale formations absorb and decreases the overall environmental impact of fracturing. Although decreased absorption aligns with a more environmentally friendly viewpoint for hydraulic fracturing, the impact of increased water return on long-term natural gas production from a well is unknown. Other industry experts theorize that water-based fracturing fluid does not harm the environment when subsurface shale absorbs it. These researchers are now trying to increase the amount of fracturing fluid that shale absorbs after a hydraulic fracturing operation. By doing this, they can decrease the cost of disposing of water returned to the surface after fracturing and possibly increase natural gas production over the life of a well. However, whereas the oil industry sees this lack of water regression as a reduction in disposal costs, environmentalists see this decrease in return as loss of a vital resource. These general viewpoints are the basis for much private and academic research.

Figure 1: Experimental Setup

Proposed Solution

To clarify the absorptive properties of shale, my adviser and I started with the general goal of identifying different chemical additives and observing how they affected shale's affinity to absorb water-based fluids. Nearly all types of shale are negatively charged at the surface of their constituent minerals. Because water molecules have a slight positive charge at one end, attraction exists between water and shale that pulls water molecules into pores in the rock. In the petroleum industry, such fluid absorption into rock is called imbibition. The ability to influence imbibition by using chemical additives served as the initial basis for our research. We began by identifying substances we thought would either limit or increase imbibition by considering chemicals used in similar applications inside and outside the oil industry. This criterion left a large range of possible candidates for testing.

First, we considered chemicals designed to alter the interactions between surfaces and fluids. A simple example is hand soap. Using water alone on your hands does not clean them as well as a mixture of water and soap. With just water, the water molecules tend to stick to themselves instead of entering your pores to clean dirt and remove skin oils and bacteria. By using hand soap, you break the cohesive tendency of water and allow it to flow into your pores and clean them out. Hand soaps are surfactants, chemicals that affect the interaction of fluids with a surface or another fluid. These surface-modifying characteristics led to the selection of the first two chemicals for testing, a negatively charged and a positively charged surfactant. These two surfactants are expected to increase water absorption into shale samples.4 We identified the next chemical with the intention of limiting imbibition. We needed a chemical that could decrease water's tendency to enter pores on the face of shale exposed to an aqueous solution. Analysis of current research showed that a class of chemical is already being tested that tends to prevent adverse shale behavior in other applications. Researchers have added nanoparticles (spherical particles approximately 10^-9 m, or one-billionth of a meter, across) to water-based drilling fluid; results indicate that these tiny particles might be a viable way to minimize fluid imbibition.5 Results from recently published work supported the viability of nanoparticles as a possible solution to our problem.5 These two basic classes of chemical, along with other accepted industry conventions for affecting fluid imbibition into shale, served as the basis for our research group's study.
More additives, such as microemulsions and additional polymers, have been identified as possible test subjects and are being evaluated inside and outside the petroleum industry.6,7

Past, Current, and Future Work

The project's first objective beyond identifying candidate chemicals was to develop a method for experimentation. Figures 1 and 2 show the functional setup. We obtained a balance that can continuously weigh samples suspended beneath it. We use this capability to hang a cylindrical sample of shale beneath the balance, which sits on a specially constructed mezzanine. Then, a jack raises a beaker full of water containing the additive to be tested until it touches one face of the shale sample. At this point, the sample begins to absorb the solution. Over 24 hours, a computer continuously records mass readings. The change in mass indicates how much fluid has imbibed. We can then determine the degree to which each chemical minimizes or increases imbibition by comparing results with experiments executed with pure water.

Figure 2: Shale sample in contact with solution

The immediate future direction for this project is to begin experimenting with chemicals selected as test candidates. Experiments with identified chemicals will be completed in conjunction with trials using potassium chloride salt as a water additive, already an accepted industry convention for limiting solution imbibition into shale formations.8 Having these data will allow us to compare the results from potentially expensive chemicals with those of relatively cheap industry standards. By doing this, we will be better able to analyze the financial aspects of adding any new chemicals in an actual fracturing operation. Any chemical added to a fracturing fluid mixture, even in a small quantity, can substantially increase the price of an operation because of the large fluid volumes used. Whether adding these new chemicals is economical depends on how much their effect on imbibition might increase overall natural gas recovery from a well.

In addition to additive analysis, another dynamic affects the future of this project: shale properties vary greatly from formation to formation.9 Shale samples from different parts of the world will not react to a certain chemical in the same way. In light of this, we will obtain rock samples that parallel the characteristics of the most highly productive regions in shale gas development and then retest each additive to gain results specific to each area.

Much remains for us to learn. Because little research that directly parallels these experiments has taken place, we lack an established, clear-cut scientific base to guide the project forward in our exact application. Although this task may seem daunting, its general objectives encompass a research area with great potential. Through this interdisciplinary initiative supported by private and public sectors, we hope to establish a better understanding of shale–fluid interactions. These results could lead to increased natural gas production averages industry-wide and the mitigation of environmental impact from hydraulic fracturing operations in the future.

References
1. King GE. Hydraulic fracturing 101: What every representative, environmentalist, regulator, reporter, investor, university researcher, neighbor and engineer should know about estimating frac risk and improving frac performance in unconventional gas and oil wells. Presented at the 2012 SPE Hydraulic Fracturing Technology Conference. SPE-152596-MS.
2. Civan F. Water sensitivity and swelling characteristics of petroleum-bearing formations: Kinetics and correlation. Presented at the 2001 SPE Production and Operations Symposium. 00067293.
3. Brannon HD, Daulton DJ, Hudson HG, et al. The quest to exclusive use of environmentally responsible fracturing products and systems. Presented at the 2012 SPE Hydraulic Fracturing Technology Conference. SPE-152068-MS.
4. Lane R, Aderibigbe A. Rock/fluid chemistry impacts on shale fracture behavior. Presented at the 2013 SPE International Symposium on Oilfield Chemistry. SPE-164102-MS.
5. Hoelscher KP, Stefano GD, Riley M, et al. Application of nanotechnology in drilling fluids. Presented at the 2012 SPE International Oilfield Nanotechnology Conference. SPE-157031-MS.
6. Penny GS, Dobkins TA, Pursley JT. Field study of completion fluids to enhance gas production in the Barnett shale. Presented at the 2006 SPE Gas Technology Symposium. SPE-100434-MS.
7. Wu Q, Sun Y, Zhang H, et al. Experimental study of friction reducer flows in microfracture during slickwater fracturing. Presented at the 2013 SPE International Symposium on Oilfield Chemistry. SPE-164053-MS.
8. Carminati S, Gaudio LD, Zausa F, et al. How do anions in water-based muds affect shale stability? Presented at the 1999 SPE International Symposium on Oilfield Chemistry. 00050712.
9. Jacobi DJ, Gladkikh M, LeCompte B, et al. Integrated petrophysical evaluation of shale gas reservoirs. Presented at the CIPC/SPE Gas Technology Symposium 2008 Joint Conference. SPE-114925-MS.
Bottlenose Dolphins and Boat Traffic in the Galveston Ship Channel

By Anna Pennacchi

Current research indicates that, despite the number of ships in the Galveston Ship Channel, the population of bottlenose dolphins living in the Channel is relatively unperturbed. However, with the expansion of the Panama Canal, which will lead to increased ship traffic, the behavior of bottlenose dolphins in the Galveston Ship Channel is likely to change as their social patterns come under risk. Monitors are in place to observe adaptations as they develop.

Introduction

Riding the wake of the Bolivar Ferry, swimming under the Pelican Island Bridge, and frolicking in Texas A&M University at Galveston's (TAMUG) boat basin are a few activities associated with dolphins in Galveston Bay (Figure 1). A population of common bottlenose dolphins (Tursiops truncatus) inhabits the ship channel behind TAMUG. This is surprising considering the heavy boat traffic associated with the Galveston and Houston Ship Channels. Cargo ships, tankers, tugs, barges, cruise ships, trawlers, and recreational boats use these waterways daily. Dolphins probably remain in the busy Galveston Ship Channel for the abundant food supply in its deep, dredged waters. The thriving shrimp population in Galveston Bay serves as an important food source for local dolphins.1 The likelihood of capturing food attracts fishing boats, shrimp trawlers, and cetaceans to shrimp-abundant areas (Figure 2).2 This paradox of dolphins' being attracted to a busy ship channel initially sparked my interest to study the dolphins in Galveston. TAMUG offers the opportunity to study marine mammal behavior from the convenience of campus. Marine mammal researchers travel the world to learn about populations and habitats such as those in Galveston.
Previous studies have found that Galveston dolphins appeared to be part of an open population, were present year-round with peaks in spring and fall, and exhibited what appeared to be scars resulting from human interactions.1 However, to my knowledge, whether the heavy boat traffic associated with the Galveston and Houston Ship Channels affects dolphin behavior has not been extensively studied. The Galveston Ship Channel is an easily studied area where results will have broader applications to Galveston Bay and Houston Ship Channel, making it an important area to assess how development affects wildlife conditions. An estimated $5.25 billion expansion of the Panama Canal is expected to be completed in 2015 and will greatly affect the Galveston–Houston complex. The Panama Canal links trade between the Atlantic and Pacific Oceans. Though this trade route has been extremely successful since its opening
Figure 1: A mother and calf pair bow-riding in the wake of a ship in Galveston Bay.
Figure 2: Dolphin foraging behind a shrimp trawler in the Galveston Ship Channel.
in 1914, its current size cannot effectively accommodate the number and size of modern cargo ships, some of which require a depth of 50 feet (http://www.portofhouston.com/about-us/overview/). Houston, having a channel depth of 45 feet, would be able to accommodate these massive ships only at high tide or when the ships are not at carrying
capacity. Consequently, expansion and accommodation projects are under way in the Houston and Galveston Bay ports to facilitate transport of more and larger ships (http://www.portofhouston.com/about-us/overview/). Researchers expect the increase in dredging associated with expansion construction to have economic and environmental effects on Galveston Bay, including raising water salinity and increasing air pollution, which could negatively affect wildlife in the area, including dolphins. Because of the large volume of recreational, commercial, and industrial boat traffic in the Galveston Ship Channel, assessing daily patterns of dolphins, boats, and boat types is crucial. A formal study of how boats affect dolphin behaviors in the Galveston Ship Channel is especially important at a time when both boat traffic and boat size are expected to grow. An increased understanding of what behavioral changes, if any, various types of boats elicit from the local bottlenose dolphin population is necessary before expansion of the Galveston Ship Channel. Mitigation protocols could be implemented once a baseline level of behavioral responses to vessel traffic has been established. I predicted a decrease in the presence of dolphins with increasing boat traffic. I also predicted a decrease in social behavior and an increase in traveling behavior as vessel traffic increased, as noted in previous studies of short-term behavioral shifts of dolphins during periods of heavy vessel traffic.3

Methods

We conducted hour-long surveys from a shore-based station along the Galveston Ship Channel three times daily between 8:00 and 9:00 (morning), 12:45 and 2:45 (midday), and 4:00 and 6:00 (afternoon) in August–December 2012. The observation pier is conveniently located directly behind TAMUG and extends into the channel, offering an unobstructed view of the dolphins and boats passing by (Figure 3).
The pier is adjacent to the TAMUG boat basin, and students and Galveston locals use the pier for flounder fishing in cool weather. Shore-based observations do not interfere with dolphin behavior, whereas a research vessel potentially could. However, every Friday, researchers in the Marine Mammal Group at TAMUG also took a small research boat out on the Galveston Ship Channel and into Galveston Bay to collect supplemental data that our shore-based station might not detect.

Table 1: Behavioral Categories
Traveling: Individuals swimming in one general direction
Socializing: Group touching, splashing, mating, or leaping out of water
Foraging: Searching for or ingesting prey, indicated by long dives and sometimes flukes out of water
Resting: Swimming slowly with slow surfacing, typically in one general direction
Unknown: Unable to clearly identify behavior
Figure 3: Study area in the Galveston Ship Channel. TAMUG is denoted by the university's logo. Yellow arrow identifies the observation pier where I observed dolphins and boats during hour-long focal follows. Black lines indicate my line of vision, approximately 0.5 km² in area. Red line indicates the 0.7-km width of the channel (per Google Maps).

Each survey recorded dolphin group size, number of boats, type of boats, and predominant behavior of dolphin groups. I collected data when dolphins and boats entered my line of vision, and I followed them until either they left my line of vision or my 1-hour observation ended. I defined a group of dolphins as one or more individuals engaged in the same behaviors within my line of vision. I categorized the behaviors into five groups: traveling, socializing, foraging, resting, and unknown (Table 1). I divided boats into two categories: industrial and nonindustrial. Industrial boats included cargo ships, tankers, barges, and tugs. Nonindustrial boats included research vessels and recreational boats, such as sailboats, speedboats, fishing boats, yachts, and kayaks. I used Microsoft Excel 2010 to perform statistical analyses. I ran chi-square contingency tests to assess changes in behavior of dolphin groups in relation to group size, time of day, and number of boats. I also used contingency tests to determine the relationship between boat type and dolphin behavior, as well as time of day and vessel traffic.

Results

I saw dolphins on 98% of the survey days. Of the 80 groups recorded, I observed 555 dolphins. Dolphin group sizes ranged from 1 to 32 individuals, with an average group size of 7. Boats were present 86% of the time dolphins were recorded. Foraging was the most common behavior observed in the Galveston Ship Channel. Chi-square contingency tests indicated that dolphin behaviors varied statistically significantly in relation to number of boats.
As the number of boats increased, the number of dolphin groups exhibiting traveling behaviors progressively decreased (Figure 4). The number of dolphin groups exhibiting foraging behavior significantly decreased as the number of boats increased (Figure 4). Socializing behavior was significantly highest with an intermediate number of boats and was lowest with a small number of boats (Figure 4).
[Chart: Distribution Frequency of Dolphin Behaviors Relative to the Number of Boats. x-axis: number of boats; y-axis: number of dolphin groups.]
Figure 4: Dolphin behaviors varied statistically significantly in relation to boat traffic.
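The chi-square contingency tests described in the Methods can be sketched in a few lines of code. The counts below are purely illustrative (not the study's data), and the statistic is compared against the standard 5% critical value for the table's degrees of freedom rather than computing an exact p-value.

```python
# Sketch of a chi-square test of independence on a contingency table,
# as used in the analysis above. The counts here are made up for
# illustration; they are NOT the study's observations.

def chi_square_statistic(table):
    """Return the chi-square statistic and degrees of freedom for a
    contingency table given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed_count in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed_count - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Hypothetical counts of dolphin groups: rows = traveling, foraging,
# socializing; columns = few, intermediate, many boats.
observed = [
    [12, 7, 2],
    [20, 10, 4],
    [3, 11, 5],
]

stat, dof = chi_square_statistic(observed)
# 9.488 is the 5% critical value of the chi-square distribution with
# 4 degrees of freedom (a (3-1) x (3-1) table).
print(f"chi-square = {stat:.2f}, dof = {dof}, significant = {stat > 9.488}")
```

A dedicated routine such as SciPy's `chi2_contingency` would also report a p-value directly; the hand-rolled version above just makes the expected-count arithmetic behind the test explicit.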
[Chart: Distribution of Dolphin Behaviors Relative to Group Size. x-axis: group size; y-axis: number of dolphin groups.]

Figure 5: Dolphin behaviors varied statistically significantly in relation to dolphin group size. Socializing behavior significantly increased and foraging behavior significantly decreased with increasing group size.
Dolphin behaviors varied statistically significantly in relation to group size. Foraging behavior significantly decreased as group size increased, whereas socializing behavior increased significantly with increasing group size (Figure 5). Traveling behavior was significantly higher with smaller group sizes (Figure 5).

[Chart: Distribution of Boat Types Relative to Time of Day. x-axis: time of day (morning, midday, afternoon); y-axis: number of boats.]

Figure 6: Distribution frequency of boat type. Blue = nonindustrial boats, including sailboats, speedboats, fishing boats, trawlers, yachts, and kayaks. Red = industrial boats, including container ships, barges, and tugboats. The three times of day are morning (8:00–9:00), midday (12:45–1:45), and afternoon (4:00–6:00).

The type of boat varied statistically significantly with time of day. The number of industrial boats increased significantly in the afternoon, whereas the number of nonindustrial boats was lowest in the morning, peaked midday, and decreased in the afternoon (Figure 6). Most important, the size of dolphin groups varied statistically significantly in relation to boat type. I found significantly smaller groups in the presence of industrial boats than with nonindustrial boats (Figure 7). I saw no dolphins when the highest number of nonindustrial and industrial boats was present (Figure 7).

Conclusion

The high rate of common bottlenose dolphin sightings from a stationary position during the study period (98% of days) indicates that dolphins frequently use the Galveston Ship Channel, and it supports the need to understand their behavioral responses to boats because expansion of the Panama Canal will increase boat traffic. Although the Galveston Ship Channel makes up less than 1% of the entire Galveston Bay, dolphin presence and foraging behavior are higher in the Galveston Ship Channel than in other areas of Galveston Bay. This finding indicates that food availability may be more important to dolphins in habitat-use decisions than other physical environmental factors, such as isolation from industrial development (http://www.galvestonpilots.com/HOGANSACSharingourbay.pdf). I observed foraging behavior most often when few boats were present, suggesting a decrease in net foraging and lowered activity budgets in the presence of many boats. Socializing behavior may be more a function of group size than of number of boats. My results were consistent with those of a previous study that found that the largest groups exhibited social behavior and the smallest groups exhibited foraging behavior in the Galveston Ship Channel.1 Coastal and confined water systems, such as ship channels, typically supply a predictable and consistent food source, which may explain why smaller groups have been observed foraging in the Galveston Ship Channel.1 The short-term disruption of critical behaviors (foraging, resting, and socializing) can lead to a long-term overall reduction in critical behavior, which is potentially harmful to dolphin fitness.4,5
Figure 7: Distribution frequency of dolphin group size (0, 1–4, and ≥5) in relation to boat type (blue = nonindustrial boats, including sailboats, speedboats, fishing boats, trawlers, yachts, and kayaks; red = industrial boats, including container ships, barges, and tugboats).
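One common way to test whether group-size frequencies differ by boat type, as compared in Figure 7, is a chi-square test of independence. The article does not state which test was used, and the counts below are hypothetical, illustrative numbers only, not the study's data.

```python
# Sketch of a chi-square test of independence for dolphin group size
# versus boat type. All counts are hypothetical placeholders.

observed = {
    # group-size class -> [sightings with nonindustrial, with industrial]
    "0":   [40, 60],
    "1-4": [120, 80],
    ">=5": [50, 20],
}

rows = list(observed.values())
row_totals = [sum(r) for r in rows]
col_totals = [sum(r[j] for r in rows) for j in range(2)]
grand_total = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * column_total / grand_total.
chi2 = 0.0
for i, row in enumerate(rows):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 2))
```

A statistic this large for a 3 x 2 table (2 degrees of freedom) would indicate that group size and boat type are not independent, the kind of association the study reports.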
I found smaller groups in the presence of industrial boats than in the presence of nonindustrial boats, possibly owing to increased group cohesion. Previous studies found that bottlenose dolphins increase group cohesion, possibly for enhanced cooperation, in the presence of boats.3,6 Knowledge of a dolphin population's daily patterns can be useful for suggesting regulations when human activities begin to disturb baseline dolphin behaviors. Although dolphin behaviors did not change with time of day, boat type did. My data indicate that the highest boat traffic in the Galveston Ship Channel occurs in the afternoon, an important consideration for amendments to future boat regulations. In previous studies and from personal boat-based observations in Galveston Bay, foraging behavior was highest in the morning.7 Although foraging behavior was not highest in the morning at my study site, I do not know whether dolphins forage significantly more in the morning farther down the ship channel and in Galveston Bay. Further assessment of daily patterns in the Galveston Ship Channel should incorporate boat-based observations to determine whether foraging behavior increases in the morning in relation to shrimp trawlers and decreases in the afternoon in relation to increased ship traffic. I observed no dolphins when the highest number of boats was present. This finding, which was statistically significant, confirms my prediction that boat traffic influences dolphin presence. Because boat traffic will undoubtedly increase with expansion to accommodate post-Panamax vessels, I recommend low-intensity monitoring of the dolphins in Galveston Bay and the Galveston Ship Channel, with special attention to behavioral responses to ships. Future studies of bottlenose dolphin behaviors in the Galveston Ship Channel would benefit from collecting data across seasons to increase understanding of how dolphins use ship channels year-round.
Overall, the dolphins in the Galveston Ship Channel show signs of adapting to the high boat traffic. The consistency of dolphin sightings and the exhibition of foraging and socializing behaviors indicate that dolphins are thriving in this unusual habitat. Though the dolphins in Galveston appear to have been relatively unperturbed by boat traffic in the past and present, I cannot assume that this will remain the case, especially with the upcoming changes. Future studies reassessing the idea of an open population of resident and transient dolphins in the Galveston Ship Channel could determine dolphin site fidelity before, during, and after the expansion of the Panama Canal. Although monitoring short-term behaviors is time- and cost-efficient, long-term studies should be implemented in Galveston Bay.

Acknowledgments

I thank Dr. Bernd Würsig, Dara Orbach, Sarah Piwetz, and Ashley Zander of the Marine Mammal Research Program at TAMUG. I also thank the Texas Institute of Oceanography and the Texas A&M University Honors Program.

References
1. Fertl DC. Occurrence patterns and behavior of bottlenose dolphins (Tursiops truncatus) in the Galveston Ship Channel, Texas. Texas Journal of Science 1994;46:299–317.
2. Fertl D, Leatherwood S. Cetacean interactions with trawls: A preliminary review. Journal of Northwest Atlantic Fishery Science 1997;22:219–248.
3. Nowacek SM, Wells RS, Solow AR. Short-term effects of boat traffic on bottlenose dolphins, Tursiops truncatus, in Sarasota Bay, Florida. Marine Mammal Science 2001;17:673–688.
4. Lusseau D, Higham JE. Managing the impacts of dolphin-based tourism through the definition of critical habitats: The case of bottlenose dolphins (Tursiops spp.) in Doubtful Sound, New Zealand. Tourism Management 2004;25:657–667.
5. Steckenreuter A, Möller L, Harcourt R. How does Australia's largest dolphin-watching industry affect the behavior of a small and resident population of Indo-Pacific bottlenose dolphins? Journal of Environmental Management 2012;97:14–21.
6. Bejder L, Dawson SM, Harraway JA. Responses by Hector's dolphins to boats and swimmers in Porpoise Bay, New Zealand. Marine Mammal Science 1999;15:738–750.
7. Henderson EE. Behavior, association patterns, and habitat use of a small community of bottlenose dolphins in San Luis Pass, Texas. M.S. thesis, Texas A&M University, College Station, Texas, 2004.
Developing Minimally Invasive Biosensors from Fluorescent Dye and Red Blood Cells
By Megan Poorman

Blood analysis has long relied on drawing blood samples from a patient, which can be tedious and painful. Implanted sensors would be less invasive, but the body's immune system may reject them as foreign. Enclosing a sensor within a red blood cell places it in an immune-protected environment. Using fluorescein as the sensing molecule inside red blood cells, pH can be measured noninvasively, a model that may extend to other analytes.

Background

For physicians, blood analysis is invaluable in determining a patient's health. One analysis can yield information about blood analyte levels (such as glucose in diabetics), organ function, and disease progression. However, obtaining these readings requires drawing blood from the patient, which can be invasive and painful, and, unless repeated, offers no insight into trends in analyte levels. Although these side effects may be tolerable for an annual doctor visit, they can lower quality of life for patients who require blood analysis more often. For example, diabetic patients are advised to measure their blood glucose levels via a finger-prick test 5–10 times per day. Doing this every day over a lifetime is not only time consuming and painful but also causes inflammation and calluses to form where the blood is drawn. These calluses and inflammatory reactions make drawing blood even more difficult, causing further complications. Thus, a minimally invasive alternative to the standard finger-prick method is needed, one that can continuously monitor changes within the body and cause no harm to the patient. Minimally invasive implantable biosensors could be the solution. Because of problems with biofouling (the body's natural immune response of attacking any foreign object), researchers have turned toward modifying sensors to be more biocompatible. However, evading the body's immune response is more complex than simply making a sensor more biocompatible, and implanted devices still lose functionality over the long term. This is where creative engineering comes into play: if the body rejects anything foreign, why not make a sensor out of the body itself? My research focuses on placing the sensor chemistry inside red blood cells (RBCs), hiding the foreign object from the immune system behind a façade of the patient's own body.1

Concept

The ideal biosensor would have access to the substance it is trying to measure, could be manufactured inexpensively, and would last a long time in the body. RBCs meet these criteria. These cells have a simple structure that can be easily manipulated to carry sensing chemistry within the cytoplasm, are available in mass quantity within the body, and are always in direct contact with blood plasma. RBC membranes are also permeable to many blood analytes and molecules, which would enable any sensing chemistry within to directly measure blood analyte levels. The medical field already has the equipment and procedures needed to extract and manipulate these cells, making creation of RBC sensors straightforward. A patient who needed biosensors implanted would undergo a process similar to donating blood, in which blood is removed through an intravenous tube. Cells could then be isolated, converted into sensors, and returned to the bloodstream in about an hour. The RBC sensors would last in the body as long as a normal RBC does, about 120 days.2

Implementation
The key to using RBCs as sensors lies in properly encapsulating the sensing chemistry within the cells. My project uses fluorescein-based fluorescent dyes as the sensing molecules. When excited with a laser, the dye emits light in direct proportion to the pH of the surrounding environment. Because RBC membranes are permeable to hydrogen ions, which determine pH, dye encapsulated in an RBC can respond to changes in pH outside the cell. By shining light of a certain wavelength on cells containing the dye and measuring the intensity of the light emitted, one can obtain an accurate reading of the environmental pH (Figure 1).

Figure 1: (A) Emission spectra obtained from RBC sensors loaded with dye in different pH environments, demonstrating the response of the sensor to changes in pH. (B) Maximum intensities of the spectra from Figure 1A plotted against the pH of the environment; the change in sensor intensity in response to a change in pH can be recorded and used to calibrate the sensor.

Biosensor Engineering

Various ways exist to load RBCs with fluorescent dye, some more efficient at creating stable and homogeneous sensors than others. Most methods use an osmotic approach, in which the concentration of ions in the extracellular environment is decreased to cause the RBCs to take up water and swell (Figure 2). The cells are swollen to a point just before bursting, when small pores form in the membrane. Through these pores fluorescent dye can diffuse into the cell. The ion concentration is then adjusted to return the cells to their original size, sealing the dye inside. Figure 3 shows biosensors created using an osmotic technique called hypotonic preswelling. This method again uses pores to load dye into the cells, but it better preserves cell characteristics such as membrane stability and hemoglobin content than other methods do. Preserving these characteristics makes the cells more likely to have a normal life span and to function as normal RBCs in the body. These techniques have demonstrated the viability of RBC-encapsulated sensors. A challenge remains, however, in making sensor creation more efficient. Figure 3 shows that although many RBC sensors are present in a typical sample, only a few respond with fluorescence. Thus, a more efficient method of encapsulating dye is needed. I am extending my research to develop a new loading method that uses cell-penetrating peptides (short protein fragments that can directly penetrate a cell membrane), which could be more efficient at creating sensors than osmotic-based methods.3

Figure 2: Osmotic approach to loading RBCs with dye. An RBC is swollen until lysis pores form, through which hemoglobin and dye can diffuse. Once dye is inside the cell, the cell returns to its original volume, encapsulating the dye.

For medical use, a pH-sensitive dye is perhaps not the most applicable, but the concept could easily be applied to a fluorescent glucose-sensitive dye for use in diabetic patients. Such sensors could be implanted in the bloodstream to measure blood glucose levels. The user would simply use a calibrated device to shine light on an area of the patient's body where the skin is thin, such as the inside of the wrist, and obtain a reading based on the light that the device collects. This reading could then be displayed on the device, stored for tracking, or sent to a remote location for analysis. This remote-sensing ability would save time and money by allowing blood tests to be performed quickly.
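The calibration idea in Figure 1B, fitting maximum emission intensity against known pH values and then inverting the fit to read pH from a new measurement, can be sketched roughly as follows. The intensity numbers here are hypothetical placeholders, not data from this study, and a real calibration might need a nonlinear fit.

```python
# Sketch of a pH calibration: fit intensity vs. known pH by ordinary
# least squares, then invert the line to estimate pH from a reading.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + b; returns (m, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - m * mx
    return m, b

# Hypothetical calibration points: intensity rising with pH (arbitrary units).
ph_values = [6.8, 7.0, 7.2, 7.4, 7.6]
intensities = [2100, 2900, 3700, 4500, 5300]

m, b = fit_line(ph_values, intensities)

def read_ph(intensity):
    """Invert the calibration line to estimate pH from a measured intensity."""
    return (intensity - b) / m

print(round(read_ph(4100), 2))  # an intensity between calibration points
```

In practice each batch of sensors would be calibrated this way before use, since dye loading varies from sample to sample.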
Impact

The initial results using RBC sensors made from fluorescein are promising, but many advances must be made before these sensors are ready for clinical use. These RBC sensors have great potential to circumvent many problems associated with existing implantable biosensors. They would be completely biocompatible, provoking no immune response within the body and preserving the sensor's ability to function. The sensing method would be minimally invasive, allowing patients to avoid the pain and complications associated with repeated blood draws. Once implemented, RBC sensors could greatly improve the quality of health care by eliminating unwanted side effects of some treatments and making daily health care more accessible. Incorporated into point-of-care testing platforms (systems that bring diagnosis and medical care to the patient instead of the doctor's office), an RBC sensor system could remotely deliver a real-time assessment of a patient's health to a physician and track data over time, greatly improving the speed and quality of diagnosis.
Acknowledgments

I thank Dr. Kenith Meissner and doctoral candidate Sarah Ritter in the Department of Biomedical Engineering for their mentorship and guidance. I also thank Dr. Jean-Philippe Pellois and doctoral candidate Kristina Najjar in the Department of Biochemistry for help with cell-penetrating peptides.

References
1. Ritter SC, Milanick MA, Meissner KE. Encapsulation of FITC to monitor extracellular pH: A step towards the development of red blood cells as circulating blood analyte biosensors. Biomedical Optics Express 2011;2:2012–2021. doi:10.1364/BOE.2.002012.
2. Hamidi M, Tajerzadeh H. Carrier erythrocytes: An overview. Drug Delivery 2003;10:9–20.
3. Kwon Y, Chung H, Moon C, et al. l-Asparaginase encapsulated intact erythrocytes for treatment of acute lymphoblastic leukemia (ALL). Journal of Controlled Release 2009;139:182–189. doi:10.1016/j.jconrel.2009.06.027.
Figure 3: Two views of a single sample of RBC sensors created using the hypotonic preswelling method imaged with (A) phase microscopy and (B) fluorescence microscopy.
Synthetic Jet Fuels Produced from Natural Gas
By Moiz Bohra and Asma Sadia
As the price of crude oil, and concern over its environmental impact, rises, the search for a synthetic jet fuel similar to conventional jet fuel but derived from natural gas instead of crude oil has intensified. Qatar holds the world's third largest natural gas reserve, making the project particularly interesting to Qatar Airways. By examining the physical properties of various blends of aromatic and paraffinic compounds, researchers hope to reliably predict the properties of any blend of synthetic jet fuel.

Introduction

On January 9, 2013, a Qatar Airways flight flew from Doha to London powered by a 50–50 mixture of conventional oil-based jet fuel and a synthetic fuel derived from natural gas, called synthetic paraffinic kerosene (SPK). This flight had a twofold significance. First, SPK produces lower levels of sulfur oxides, nitrogen oxides, and particulates upon combustion than conventional jet fuels (E. E. Elmalik, B. Raza, S. Warrag, E. Alborzi, and N. O. Elbashir, submitted for publication), and Qatar Airways seeks these desirable traits in its bid to improve air quality around busy airports. Second, Qatar's abundant and cheap natural gas resources1 (the third largest in the world) enable it to produce economically viable value-added products from raw natural gas. The Shell Oil Company operates the world's largest gas-to-liquids plant in Qatar and produces SPK, among other products. With the price of crude oil rising, gas-derived fuels are increasingly recognized as a better alternative. Though nonrenewable, they burn cleaner and more efficiently, and they are safer to produce. More important, gas-derived fuels are compatible with the current liquid-fuel infrastructure, so no modifications to aircraft engine designs are needed. The immediate future thus
lies in synthetic fuels that can be produced chemically from sources such as natural gas, which is cheap, abundant in countries such as Qatar, and convertible into various value-added products. But Qatar Airways could not increase the ratio of synthetic jet fuel beyond 50%, a long way from the ambitious goal of fully replacing conventional jet fuel. Synthetic jet fuels lack certain chemical compounds, called aromatics, that are inherently present in conventional jet fuels. The characteristic smell of gasoline comes from its aromatic content. The fuel system within an airplane consists of pipes with joints sealed by rubber O-rings. Aromatics cause these rings to swell, sealing the joints and preventing fuel leaks (Elmalik et al., submitted). Aromatics must therefore be added to synthetic jet fuels to mimic the swelling effects of conventional jet fuel, while still limiting them as much as possible, because they are responsible for particulate emissions.

Research Question

Our broad research question: what is the ideal composition for a synthetic jet fuel that can replace conventional jet fuel? Qatar Airways took the initiative to begin answering this question. The company created a research consortium involving Shell Oil Company, Rolls–Royce,
Figure 1: Understanding isomers: normal hexane, isohexane, cyclohexane, and aromatic benzene. Structural differences lead to differences in physical properties.
the University of Sheffield (United Kingdom), DLR (the German Aerospace Center), and Texas A&M University at Qatar. Researchers at Texas A&M Qatar are involved in the search for optimum synthetic jet fuels that could revolutionize commercial aviation. Our work as undergraduate researchers consists of running characterization experiments on fuel samples, performing statistical analysis on the results, and building a graphic model that highlights the optimum fuel blends.

Experimental Activities

We carried out the experimental activities for this research at Texas A&M Qatar in the Fuel Characterization Laboratory, an advanced facility designed to support research in fuel processing and fuel characterization. The American Society for Testing and Materials (ASTM) specifies limits on the physical properties (e.g., freezing point, energy content, flash point, and density) of a potential jet fuel, and the tests we perform are required for jet fuel certification. Synthetic jet fuels are made up of different types of hydrocarbons (specifically, paraffins). We are studying the effects of isomers, that is, normal paraffins, isoparaffins, cycloparaffins, and aromatics, as the building blocks of synthetic jet fuel. These isomers have different molecular structures, as displayed in Figure 1, and these structural differences lead to differences in physical behavior. We prepare blends containing different ratios of these paraffinic and aromatic blocks. The composition of a fuel blend determines its physical properties, such as flash point, freezing point, heat content, density, and viscosity, all of which we measure. The first phase of our project involved preparing 32 fuel blends by combining the normal, iso-, and cycloparaffins (three building blocks in synthetic jet fuel) and studying the effect of hydrocarbon constituents on the described physical properties. The next phase involved preparing 21 new blends with the addition of monoaromatics and studying the effect of aromatics on the physical properties of the fuels. We used statistical analysis tools to build a visualization model of the results.

Figure 2: Generating data points from neural network output, then finding the region of optimum properties for three-component fuels. The plot overlays experimental and generated data points against the number of components matching ASTM limits; the maroon region shows the optimum blend region.

Statistical Analysis and Visualization

We used Matlab to analyze the characterization data obtained from the lab. The analysis uses the experimental data points to build a model that can predict the properties of any blend of jet fuel from its composition (Elmalik et al., submitted). Although linear interpolation between just a few data points can predict properties such as density (which vary linearly with composition), interpolation failed for properties that showed more nonlinear behavior (e.g., freezing point). The alternative was Matlab's neural network toolbox, which creates a mathematical function linking the input data (composition of fuel blends) to the target data (physical properties of blends). Thus, the neural network used 35 data points to generate 1,000 new
data points (blend compositions and their associated properties). These new data points served as the basis for a visualization model that could predict the physical properties of a hypothetical blend without the need for new experimental tests. We used Matlab to plot the composition–property trends that the neural network produced. We created an optimized plot that overlaid freezing point, flash point, heat content, density, and viscosity to show the optimum region where blend compositions met the ASTM standards we were testing (Figure 2). Adding aromatics brought another dimension to the plots. Our research colleagues build 3-D visualization models and highlight regions of special interest (the optimum blend region; Figure 3). These plots can be dissected to focus on regions that meet ASTM standards, allowing us to find and prepare optimized synthetic jet fuel blends that could one day replace oil-based jet fuels.
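The simplest case of the composition-to-property modeling described above is a property that mixes linearly, such as density. The sketch below illustrates that linear-mixing rule only; the component densities are illustrative placeholders, not measured values from the study, and nonlinear properties such as freezing point would require a fitted model like the neural network the text describes.

```python
# Sketch of a linear-mixing prediction for blend density, the kind of
# property the text notes varies linearly with composition.
# Component densities (kg/m^3) are hypothetical placeholders.

component_density = {
    "n-paraffin": 680.0,
    "isoparaffin": 660.0,
    "cycloparaffin": 780.0,
}

def blend_density(fractions):
    """Predict blend density as a fraction-weighted average.

    `fractions` maps component name -> volume fraction (must sum to 1).
    """
    total = sum(fractions.values())
    assert abs(total - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(component_density[c] * f for c, f in fractions.items())

# Example blend: 50% normal paraffin, 30% isoparaffin, 20% cycloparaffin.
blend = {"n-paraffin": 0.5, "isoparaffin": 0.3, "cycloparaffin": 0.2}
print(blend_density(blend))  # 680*0.5 + 660*0.3 + 780*0.2 = 694.0
```

Sweeping such fractions over a grid and checking each prediction against ASTM limits is, in spirit, how an optimum-blend region like the maroon area in Figure 2 can be mapped out.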
Future Work

Our current research interest is to understand the role of aromatics in synthetic jet fuel. We are conducting a new experimental campaign to obtain data for 65 new blends, and we will use the gathered data to refine our 3-D model. We would also like to explore the role of carbon chain length, the effects of various other paraffinic building blocks, and the challenges associated with blending conventional jet fuel with synthetic fuels. These crucial questions need to be answered before replacing oil-derived fuels with synthetic fuels is feasible in commercial aviation.

Acknowledgments

We thank Dr. Nimir Elbashir, associate professor of chemical engineering, Texas A&M University at Qatar. We also thank Elfatih Elmalik and Rehan Hussain, research associates at Texas A&M University at Qatar.

Reference
1. Hulbert M. Qatar plays a strategic LNG game. Middle East Magazine 2012;437:38–39.

Further Reading
Marien, Michael. World energy outlook 2012. World Future Review 2012;4.4:90–95.
Figure 3: Three-dimensional model for density versus composition (work in progress).
WARNING: The following story contains explicit language that may be offensive to some audiences. Reader discretion is advised.
By Stephen O’Shea
Chasing the Sun
The narrative of the combat veteran of the Iraq and Afghanistan wars over the past decade has been largely obscured. This is due not only to a lack of media attention and a nationwide numbness to the prolonged conflict but also to a lack of creative publication. Plenty of reporters have published journalistic accounts of events or experiences, and several veterans have published firsthand memoirs of their tours, but a literary publication representing the conflict for the general public is lacking. Drawing on the hundreds of sources we have gathered (from news articles to journals and blog posts), as well as our interviews with dozens of combat veterans, I have developed short stories that represent the experiences and themes of the soldiers who have fought for our nation in its most recent wars. These stories, though independent, depict a broad scope of combat and experience through an interconnected web of scenes and characters in the structure of a short story cycle (in the manner of “Dubliners,” by James Joyce, or “In Our Time,” by Ernest Hemingway). They also aim to build on the evolving style of war narratives following Tim O’Brien and Anthony Swofford. Through this project, I hope to reach the general public and spread awareness of the experiences of our soldiers overseas, to tell their story in a style and medium that will encourage people to listen. This particular story illustrates the tension of daily patrols under the threat of improvised explosive devices (IEDs). I chose to portray this subject through the experiences of a marine in a quick-response team, modeling the story on one of our interviews. The changes I made include the characters, setting, and series of events. The elements I kept consistent with the combat veteran’s telling were the details of the aftermath of the injury and the emotions and motivations of the average ground-level grunt.
The interviews themselves have been incredibly helpful for the production of combat-related stories. Because I have never experienced the terrain, the trauma, the memory of such events, or the emotions involved, drawing on the details of various interviewees allows me to portray a level of verisimilitude otherwise unavailable to an inexperienced author. For instance, in this story I combined several accounts from different interviews to encompass a broader portrayal of the IED threat and the tensions of patrol. In doing so, I can reach an audience beyond combat veterans themselves while still telling their story (or a combination of their stories) in one narrative. With the vast array of stories and narratives available from the thousands of veterans, this technique has helped me narrow the scope of my project to the trends and general themes of my overall research.
Jesse grinds his foot in the sand, holding perimeter around a convoy in full gear. He digs a hole in the dirt, pushing sand out and watching it pour back. Then nudges the earth again.
“Whatcha think, Jess?” asks Pensley. He leans against their armored Bradley, scanning the horizon. “Reckon we’ll make chow?” Jesse shrugs, his M-16 rising in the shade. “They oughta be on their last circle now. Probly pull up dry, but who knows.” The sniper had nicked a grunt of the stranded convoy long ago and would be well hidden now. Vehicles were repaired from the IED blast and night was coming. “If we’re released soon, food might be hot.” Pensley purrs. “Gawd, it’s been a while. Another MRE piece’a plastic shit and I’ll be sharting Skittles.” “Taste the rainbow,” Jesse adds. They hadn’t slept at base, in a bed, for six days. Life with a Quick Reaction Force was subtle suffocation—living out of a Bradley, sleeping in the desert cramped within vehicles, shoveling down “Meals Ready to Eat.” Jesse spots Lieutenant Davis approaching. “What’re you staring at, Private?” their Commander taunts. “Yer pretty ass, I reckon,” laughs Pensley. Davis flashes a smile. “You brush those teeth with baking soda, sir?” says Jesse. “Okay,” Davis laughs. “And get off the damned Bradley, Pense. You make a fine-ass target, standin’ there.” Pensley thrusts forward with a dramatic groan, standing in mock attention. “At ease, Specialist,” Davis grunts. “Listen, outer patrol is pulling in soon. We’ll escort this convoy to the nearest checkpoint and head for base.” “Thank God,” says Pensley. Davis lifts a handset to his ear. “That’s the call. Get ready to roll—we leave when the outer squad pulls in.” He rounds the Bradley to prepare the platoon, and Jesse suppresses his excitement.
“Whatcha think about that?” Pensley asks. “Hot meal on a hot night. Sounds better than pussy.”
“Now isn’t that sad,” Jesse mocks. He calculates their drive—two hours to base and the sun already hovering over the western hills. “Let’s load up.” Pensley climbs in as driver and Jesse mounts the turret. He watches the remaining marines pile in with Davis behind. The lieutenant hoists himself beside Jesse, standing at the turret, and the two watch a growing cloud of dust to the south. A train of beige Humvees stirs dry gravel. Slanted rays of afternoon light pierce the dusty haze, glinting off the metal hoods. “That’s it,” Davis shouts, slamming a hand on the Bradley’s roof. “Move out!” Their vehicle roars to a start, its tracked wheels stuttering, then surging forward. Jesse feels warm air combing the short hair on his neck. He shifts his shoulders to let dry wind leak under his helmet, tingling his suffocated scalp.
“You think we’ll make base?” Jesse shouts. Davis shrugs, pulling out a box of Marlboros. “I’m plannin’ on it,” he says. Davis offers a cigarette to Jesse and the two ignite. Jesse feels the acidic warmth burn down his larynx, stimulating his entire body. The smoke is epinephrine on a dull day, antidote for exhaustion. “I figure we get in right after dusk and no one’ll give too much shit for us drivin’ late.” Jesse feels a flutter in his gut like gratitude. It converts to a lurching growl. “Ah, shit,” Davis spits, throwing out his cigarette. Jesse follows with his optics and spots the source of Davis’s distress. A route clearance team is scanning for IEDs, spraying dust as they inch painfully forward. “They’d have to be an hour behind,” Jesse curses. Davis sighs beside him. “Yeah, well they’re not gonna pack up for us.”
They watch the convoy align. Jesse’s platoon anchors, leading their four QRF vehicles. Soon sand from forward Humvees sprays upward, lashing his face. Jesse pulls on his goggles and settles his gaze before them. The sun glows white through a veil of dust, hovering like a hot, bright moon.
The world blends to beige haze and the steady roar of engines drowns out thought. Jesse kicks into an automatic, brain-numbing vigilance. He surveys with the turret optics, breaking the terrain into grids and searching, fighting the voice of his stomach. The sun touches a row of hills to the west when their squad pulls through the checkpoint. They drop off the damaged Humvee with its convoy and continue their pursuit of the sun. Their vehicle now leads. With his turret, Jesse scans the clear horizon looking for dead animals, tires, trash, broken concrete—anything different. They move faster alone, the sun fading behind a jagged, blinding horizon. Maybe an hour left to base if they push through dusk. Jesse turns to Davis and catches him in a stoic trance, staring at the crescent sun. His face is dirty from the many days of patrol. The shadowed lines of his eye darken beneath angled light.
The Bradley slows to a halt and they hear Pensley from below. The curses float up in a creative stream . . . “Fuckin’ son of a goddamned mother-fuckin’ pickle-dick prick shit-on-my-face!” Jesse fights a grin one moment but groans the next. Their meal is at stake, and the whole platoon knows. Men grumble audibly behind him.
“Wait,” says Davis, squinting down the highway. “Check and see if that’s a bypass road. Up to the left.” Jesse glimpses through the optics and nods. “Call command,” says Davis. “See if we can’t get ’em to pause after the fork. We’ll get around ’em.” Jesse dials Control and receives the go-ahead moments later. The men all holler and clap to the news as they circumvent the bomb squad. The Bradley skids back onto the highway with Pensley’s eager maneuverings. “Thank the sweet Lord,” Jesse cries in victory. “We’d’ve been out all night!” Davis grins in relief and their Bradley rolls smoothly in the darkening twilight. “Let’s just hope that . . .”
“Jesse.” Davis leans over Jesse, shaking his shoulders violently, bleeding on him from his cut face, yelling over empty noise in the silence. Jesse realizes he’s outside the vehicle, lying on the ground. There’s light from a fire, flickering through a film of dust. A ringing, throbbing in his ear. Jesse lifts his head.
Gunshots and a whistle blast of falling mortar.
“Jess!” a muted voice, hands on his shoulder. “Jesse, you all right?” Jesse tries to brush them off, to stand. The words reach him like drifting waves.
“One! Two! Heave!”
Screams filter through the empty noise. Steam fills the air. Everything is on fire in the air, but when he looks it’s too dark—just flickering silhouettes racing in the sunlike fire. He feels wet on his face, tastes the thick sweat like dirt and oil wetting his lips. “IED,” Davis shouts, but Jesse can’t hear. There is ringing. And yells and gunfire and the silent throbbing in his ear. “Screaming,” he mutters, tries to stand. “Stop the screaming . . .” Gunshots pepper the air, tracers from M-16s. Someone yells to pull out the driver, pull out the driver. Jesse stands. “Don’t!” someone yells. “Stop him!” Hands are pulling him back and he resists, charging the flames.
“My gun,” Jesse mumbles, trying to stand. His muscles tremble with adrenaline, unsteady. Hands keep him down. A group of soldiers are rocking the driver’s seat of the Bradley.
Jesse knows that Pensley is screaming. He catches a glimpse of metal pinning the kid’s leg. The engine burst through the hood and landed in his lap. Jesse sees the mangled front of their Bradley, twisted and broken like silver glass. Pensley is screaming. “You’re gonna be okay, Jess,” says Davis. Fingers clog Jesse’s nose as the flashlight asks him to follow with his eyes. It moves slowly or not at all.
“My gun!” “Jesse,” Davis shouts. “Jesse, stand down!” “Someone get my fucking gun!” Jesse sits, with an orblike sun moving across his gaze. “Broke your nose all right,” a voice tells him. Jesse feels the damp blood draining down his lip and chin. “Probably a concussion, too. Just stay seated, you hear? Evacs’ll be here soon.”
The screams—and everything is steaming. Gunshots fire at falling mortar but the blasts creep forward, breaking dunes of dust in their thunderous leaps. It is dark except for the occasional spark of mortar whistling in crescendo, shuddering against the cool earth.
Air presses down on him. Jesse is lifted and guided to the Medevac, dazed and stumbling. Other men sit and lie with him, but he looks out and up into the sky, glowing dark and starry like the sun had exploded into a thousand fragments of white light. Cool desert air rushes through the rising chopper, and Jesse thinks of summer camping in Texas. Hum of the chopper buzzes into a comfortable warmth and he pictures his father, his brother, hiking in Big Bend. The dusty earth dry. He wonders if they’ll climb the South Rim tomorrow or see the Window at sunset. Hearing the crackle of Dad’s campfire set, he breathes the rugged desert air, resting beneath a hollowed sky.
Research for "Chasing the Sun" through the Summer Scholars Program was proudly supported by the College of Liberal Arts, Texas A&M University, and the Melbern G. Glasscock Center for Humanities Research.
Cap-and-Trade and Global Compromise By Phillip Warren and Mariah Lord

Various attempts at implementing measures to limit climate change, such as Kyoto and Copenhagen, have been ultimately unsuccessful. Nevertheless, there are lessons to be learned for future proposals, including the importance of including all nations and of a binding agreement among all signatory nations. The authors present a proposal, based in part on a cap-and-trade policy for carbon emissions, to reduce worldwide carbon emissions to pre-2000 levels by 2060.

Introduction

Climate change represents one of the greatest concerns of this generation. This article examines the climate issues facing the world’s leaders, notes and explains the collective failures of cooperative institutions, and presents possible solutions, both technical and policy oriented. Given the track record of climate-conscious policies and the current political atmosphere, a universal compromise similar to our resulting proposal is unlikely. This work does, however, present a policy proposal with cuts to global carbon emissions that would limit climate change to manageable levels (2.8–3.2 °C).

Background

Kyoto Protocol

International actions to limit the effects of climate change have spanned decades, but the 1997 Kyoto Protocol represented the first major cooperative solution. The Kyoto Protocol contractually bound member countries to cut carbon emissions by 5.2% compared with 1990 levels. The protocol divides the world into Annex I and Annex II countries, with Annex II countries having no obligation to cut emissions.1 Annex I countries promised to aid developing countries financially and technologically. The United States never ratified the Kyoto Protocol because of these stipulations, arguing primarily that Annex II countries should also curb emissions.1 The lack of U.S. participation remains a primary failure of the Kyoto Protocol.
The United States originally agreed to the proposal, but then-President Clinton indicated at the time that the United States would not participate unless China and India did as well. In 2012, countries agreed to the Doha Amendment to the Kyoto Protocol, which binds Annex I nations
to a reduction of greenhouse gases of 18% compared with 1990 levels by 2020.
Copenhagen Accord

The 2009 Copenhagen Accord represents “a multilateral political agreement between the United States, China, India, Brazil and South Africa.”2p966 The nonbinding agreement intends to limit warming to between 1.5 and 2 °C, but estimates indicate that the commitments would limit warming only to just under 4 °C by 2100.2 Although the accord attempts to outline goals in terms of degrees of warming, climatologists at the U.S. Modeling Consortium Climate Interactive indicate that the goals do not match the commitments.2 In contrast to the Kyoto Protocol, the Copenhagen Accord remains nonbinding for participants.

Lessons Learned from Both

The proposal in this work attempts to address the concerns and failures of previous agreements, allowing for economic development of poorer nations while including those nations in the cap-and-trade system. To further the goal of eradicating poverty, this proposal focuses on per capita emissions, which are vital for human development, instead of on total emissions for a country. Any new proposal must counteract the lack of participation that weakened the Kyoto Protocol while maintaining scientifically feasible goals, something the Copenhagen Accord fails to do. The guiding principles of the outlined agreement are as follows: flexibility for individual countries in their policy and decision making; inclusion, to bring all countries, regardless of development, into the fold of a binding agreement; poverty eradication (equality), to afford developing countries economic growth with the aid of developed nations; and cap and trade on a per capita basis.

Cap and Trade: Current Examples

Cap-and-trade policies refer to fixed caps on something, here carbon dioxide, that decline over time. Cap-and-trade policies are widely considered a market-based system because they allow trading of carbon dioxide allowances, which encourages firms to limit their emissions so they can sell their allowances. According to the European Union, which established the largest carbon-trading scheme in 2005, “there is a ‘cap’, or limit, on the total amount of certain greenhouse gases that can be emitted by the factories, power plants and other installations in the system. Within this cap, companies receive emission allowances which they can sell to or buy from one another as needed” (http://ec.europa.eu/clima/policies/ets/index_en.htm). Although the EU houses the largest comprehensive cap-and-trade system to date, other countries, such as China, South Korea, Australia, and New Zealand, have developed or are developing similar systems. The United States has experience with cap-and-trade programs, albeit not associated with carbon dioxide. In 1980, the Environmental Protection Agency established an acid rain cap-and-trade program to deal with rising sulfur and nitrogen oxide concerns, with astounding success. Figure 1 highlights some facts associated with the EPA’s acid rain program.
A national carbon cap-and-trade system has yet to gain significant traction in the United States, but certain states have established regional agreements3:
• The Regional Greenhouse Gas Initiative covers the power sector of 10 northeastern states and has been active since 2009.
• The Western Climate Initiative covers seven western states and has been active since 2012.
• The Chicago Climate Exchange is a voluntary market established in 2003 that holds binding contracts with 500 universities, businesses, and cities.

Case Study Success: EPA’s Acid Rain Program
• Program active since 1980
• Sulfur and nitrogen oxide emissions reduced by 55–60%
• 580–1,800 lives saved from respiratory complications
• Every dollar spent on administration reaps $40 of health/environmental benefits
Figure 1: Facts about the U.S. Environmental Protection Agency’s Acid Rain Program. Source: U.S. Environmental Protection Agency (http://www.epa.gov/captrade/).
Cap and Trade/Carbon Taxation

A carbon tax attempts to put all renewable and low-emission forms of energy on a level playing field. By removing the need for subsidies, a carbon tax avoids forcing the government to pick alternative-energy winners and losers. According to economist James Griffin, “a carbon tax establishes an observable price that society is willing to pay for CO2 abatement and creates a more level playing field for new technologies.”4p156 Griffin also denounces national cap-and-trade systems because “a carbon tax would be much more transparent than tradable emissions allowances and potentially less subject to manipulation.”4p7 Some economists recommend a hybrid system, combining aspects of a cap-and-trade and a carbon-taxation scheme. A price collar sets a ceiling and floor between which carbon prices can vary in a cap-and-trade system.5 Selling emissions permits to industrial sites instead of giving them out represents another hybrid approach, generating revenue for governments. Similar to a taxation system, this keeps the market-based trading aspects of a cap-and-trade program.

Because approaches to mitigating carbon emissions are controversial among economists, this proposal will not attempt to endorse any specific energy policy. Because an international tax system is not possible, a cap-and-trade scheme remains the focus of this proposal. In the spirit of flexibility, each country would adopt its own carbon dioxide mitigation policy. As the largest current cap-and-trade system, the EU Emissions Trading Scheme presents a viable model for evaluating this type of international system. Because the EU deals with a group of countries, its system could serve as a template, allowing the United States to adopt certain policies and modify others. Two major criticisms of the EU scheme exist6:
1. Overallocation of emissions permits has led the price of these permits to plummet during trading.
2. Countries not bound by reduction goals have accepted the heavy industry and resulting pollution.
When the EU asks industrial sites for their business-as-usual emissions, these sites can land large profits by overestimating their needs and selling the excess allowances to other firms. This approach causes overallocation of emissions permits. Changes in the economy, notably an economic downturn, exacerbated the overallocation problem by inhibiting overall energy consumption. Establishing baselines and requiring transparency from governments and businesses remain the best methods to avoid overallocation. For an international system, baseline per capita emissions calculations, inclusion of all nations to eliminate leakage, and transparency would represent the primary focuses during establishment.
The ambitious proposal described below can be met, in part, with technical solutions. Several promising solutions include the following:
• Nuclear electricity generation7
◦ Nuclear energy generation typically involves controlled fission of uranium, which generates heat.
◦ Nuclear energy generation accounts for 20% of the U.S. energy portfolio.
◦ Nuclear energy is not currently carbon neutral because of the energy required to harvest nuclear fuel (usually uranium).
• Carbon capture and storage8
◦ Carbon capture and storage describes a process of extracting carbon dioxide from fossil fuels during combustion and injecting it into underground geological formations.
◦ According to the U.S. Environmental Protection Agency, 95% of the largest stationary sources of U.S. CO2 emissions are within 50 miles of a candidate geological site.
◦ Groundwater contamination remains an environmental concern.
• Hydrogen9
◦ Pure hydrogen does not exist in nature; it appears in compounds with other elements (such as water or hydrocarbons).
◦ Hydrogen can be harvested by electrolysis (separating water into hydrogen and oxygen) and by gasification of fossil fuels.
◦ Remaining technical issues include the energy necessary to harvest hydrogen, as well as to store and transport it.

Proposal

We propose an integrated cap-and-trade system that involves a tiered system of countries, including all countries regardless of development. In 2007, the Intergovernmental Panel on Climate Change published Mitigation of Climate Change, reporting on the current state of climate change. This report analyzes both short- and long-term options for mitigation. Table 1 shows several different stabilization scenarios. The report recommends a stabilization level of 445–490 parts per million of atmospheric carbon dioxide, stabilizing warming at 2 °C, estimating that this would reduce global gross domestic product by at least 3%.10 Considering current global economic conditions and the measures necessary to reach 445–490 ppm, we have determined that 535–590 ppm represents a more realistic target. This revised goal would limit the temperature increase to 2.8–3.2 °C and reduce global GDP by between 0.2 and 2.5%.10

Table 1: IPCC climate stabilization scenarios10 (the 535–590 ppm row reflects data for the proposal outlined in this article)

Stabilization level (ppm CO2-eq) | Global mean temp increase above preindustrial at equilibrium (°C) | Year CO2 emissions peak | Reduction in year 2050 CO2 emissions compared with 2000 (%) | Year CO2 emissions return to year 2000 level
445–490 | 2.0–2.4 | 2000–2015 | −85 to −50 | 2000–2030
490–535 | 2.4–2.8 | 2000–2020 | −60 to −30 | 2000–2050
535–590 | 2.8–3.2 | 2010–2030 | −30 to 5 | 2020–2060
590–710 | 3.2–4.0 | 2020–2060 | 10 to 60 | 2020–>2100
710–855 | 4.0–4.9 | 2050–2080 | 25 to 85 | >2090
855–1130 | 4.9–6.1 | 2060–2090 | 90 to 140 | >2100

Proposal Focus

This proposal measures progress in terms of metric tons of carbon emissions per person, as opposed to capping a country’s emissions solely on total amount; doing so would not account for population changes. By accounting for population changes
and estimates, we hope to include social considerations, such as poverty, as an integral part of the goals. This proposal focuses on the rightmost column of Table 1. According to the U.S. Census Bureau, in 2000, global carbon emissions totaled 23,738.368 million metric tons, or 3.88 metric tons per person (http://www.census.gov/population/international/data/worldpop/table_population.php). By 2060, if we assume a global population of 9 billion, annual per capita emissions would need to fall to between 2.5 and 3.0 metric tons to keep global carbon emissions at year-2000 levels. According to the Intergovernmental Panel on Climate Change, this proposal would result in the planet’s warming by 2.8–3.2 °C. Mitigating climate change must not severely limit the advancement of developing countries, which would slow poverty eradication. Recognizing the vast differences between countries, we have divided the countries on the basis of the Human Development Index, a measure that compares factors such as life expectancy, literacy, education, and standards of living for countries around the world. Countries are thus divided into one of four tiers of human development: (1) very high, (2) high, (3) medium, and (4) low (Figure 2).

Stipulations and Information
Figure 2: Human Development Index Map, showing HDI values (2001). Legend: very high, high, medium, low, no data. Source: Human Development Reports, United Nations Development Programme, Indices & Data. http://hdr.undp.org/en/statistics.
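The per capita arithmetic behind the 2060 target can be checked with a few lines of Python. The figures come from the article; the variable names are our own illustration:

```python
# Year-2000 global CO2 emissions, from the U.S. Census Bureau figure in the text.
total_2000_tons = 23_738.368e6        # 23,738.368 million metric tons

# Implied year-2000 population, from the stated 3.88 metric tons per person.
population_2000 = total_2000_tons / 3.88   # roughly 6.1 billion people

# The proposal assumes a global population of 9 billion in 2060.
population_2060 = 9e9

# Per capita ceiling that holds global emissions at the year-2000 total.
target_per_capita = total_2000_tons / population_2060
print(round(target_per_capita, 2))    # about 2.64 metric tons per person
```

The result, roughly 2.64 metric tons per person, falls inside the 2.5–3.0 band the proposal cites.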
Keep the following in mind when considering the proposal:
• The proposal focuses on goals of flexibility, inclusion, and equality.
• Countries meet every year to determine per capita limits for the following year, primarily on the basis of proposals submitted by each country.
• Emissions trading is done in large quantities and distributed per capita, noting that per capita limits are the prevailing goals.
• Market prices determine the price of carbon permits.
• An independent United Nations committee would supervise this agreement and determine sanctions. The committee would also place countries into their respective tiers, according to the Human Development Index.
• Each country would be responsible for its own 50-year plan, according to its baseline per capita emissions level.
• The proposal sets the world on an equal emissions playing field by 2060.
Tier System

Tier 1 is allowed 5 years to increase emissions if necessary and implement technical and policy solutions. Tier 2 is allowed 10 years, tier 3 is allowed 15 years, and tier 4 is allowed 20 years. Beyond that stipulation, each country creates its own emissions reduction plan, subject to UN approval. Table 2 indicates a sample reduction plan for each tier.

Conclusion

From the European Union Emissions Trading Scheme we note the importance of inclusion to eliminate leakage of carbon dioxide, as well as effective baselines and overall transparency of the agreement. Countries could also adopt or incentivize certain types of technical solutions, including nuclear energy generation, carbon capture and storage, and hydrogen-based transportation. This ambitious policy proposal examines the drastic cuts necessary to reduce emissions and stem the tide of global climate change. The Kyoto Protocol and Copenhagen Accord teach us that inclusion of all countries and binding agreements must remain cornerstones of any future endeavor. Under this proposal, each country would be responsible for creating its own policy for addressing climate change, creating flexibility. Some possibilities include national cap-and-trade systems and carbon taxes, or some combination of the two.
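The sample reduction plans in Table 2 follow a simple compounding rule: each phase applies a percentage change to the per capita level left by the previous phase. A minimal sketch in Python (the helper name and phase list are our own illustration; chained values agree with the printed table to within about 0.1 ton of rounding):

```python
def reduction_plan(start_level, phases):
    """Chain percentage changes across the phases of a per capita emissions plan.

    start_level: baseline emissions (metric tons per person).
    phases: list of (duration_years, percent_change) tuples.
    Returns (duration, begin, end) rows, levels rounded to one decimal place.
    """
    rows, level = [], start_level
    for years, pct in phases:
        end = level * (1 + pct / 100)
        rows.append((years, round(level, 1), round(end, 1)))
        level = end
    return rows

# Tier 1 (United States): a 5-year increase phase, then four reduction phases,
# covering the full 50-year horizon; percentages taken from Table 2.
us_plan = reduction_plan(17.7, [(5, 5), (15, -25), (10, -30), (10, -30), (10, -60)])
print(us_plan[-1])   # final phase ends near the 2.7 t/person convergence level
```

The same helper reproduces the other tiers by swapping in their baselines and phase lists, for example `reduction_plan(4.6, [(10, 75), (15, -25), (15, -30), (10, -37.5)])` for Tier 2 (China).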
Table 2: Theoretical sample of reduction plans (emissions in metric tons per person)

Phase duration (yrs) | Emissions, beginning of phase | Emissions, end of phase | % change

Tier 1—United States
5 | 17.7 | 18.6 | 5
15 | 18.6 | 14.0 | −25
10 | 14.0 | 9.8 | −30
10 | 9.8 | 6.9 | −30
10 | 6.9 | 2.7 | −60

Tier 2—China
10 | 4.6 | 8.1 | 75
15 | 8.1 | 6.1 | −25
15 | 6.1 | 4.3 | −30
10 | 4.3 | 2.7 | −37.5

Tier 3—India
15 | 1.2 | 8.1 | 575
15 | 8.1 | 6.1 | −25
10 | 6.1 | 4.6 | −25
10 | 4.6 | 2.7 | −40

Tier 4—Democratic Republic of the Congo
20 | 0.04 | 8 | 2000
10 | 8 | 6 | −25
10 | 6 | 4.5 | −25
10 | 4.5 | 2.7 | −40

References
1. Tollefson J. Showdown nears for climate change deal. Nature 2011;479:454–455. doi:10.1038/479454a.
2. Tollefson J. World looks ahead post-Copenhagen. Nature 2009;462:966–967.
3. Kim HS, Koo WW. Factors affecting the carbon allowance market in the US. Energy Policy 2010;38:1879–1884.
4. Griffin JM. A Smart Energy Policy: An Economist’s Rx for Balancing Cheap, Clean, and Secure Energy. New Haven, Conn.: Yale University Press, 2009.
5. Cleetus R. Finding common ground in the debate between carbon tax and cap-and-trade policies. Bulletin of the Atomic Scientists 2011;67:19–27.
6. Stephenson J. Lessons Learned from the European Union’s Emissions Trading Scheme and the Kyoto Protocol’s Clean Development Mechanism. Washington, D.C.: U.S. Government Accountability Office, 2008.
7. Snedden R. Nuclear Energy. Chicago: Heinemann Library, 2002.
8. U.S. Environmental Protection Agency Office of Water. EPA Proposes New Requirements for Geologic Sequestration of Carbon Dioxide. Report EPA 816-F-08032. Washington, D.C.: EPA, July 2008. http://www.epa.gov/safewater/uic/pdfs/fs_uic_co2_proposedrule.pdf.
9. Ogden J, Rubin ES. The outlook for hydrogen cars. Resources Magazine, March 2009. http://www.rff.org/Publications/WPC/Pages/03_09_09_Outlook_for_Hydrogen_Cars.aspx.
10. Intergovernmental Panel on Climate Change. Summary for policymakers. In Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by B. Metz, O.R. Davidson, P.R. Bosch, R. Dave, and L.A. Meyer. Cambridge, UK: Cambridge University Press, 2007.
Mother Nature and the Coming Storm
My painting represents Mother Nature, suffering from the gray “storm” happening around her. I chose this subject because I have always been interested in nature and how the creatures on Earth are affecting it every day. Furthermore, like many others, I find much inspiration in the curves and grace of the female body, so I decided to portray the central theme of Mother Nature by creating an elongated version of a female silhouette. I think this elongation creates a feeling of otherworldliness, of something larger and more magical simply disguised as human. My depiction of Mother Nature continues in a common image: the tree. Although it is a symbol often used to represent nature and its fruits, my interpretation is slightly darker. The tree represents everything affecting Mother Nature: animals, humans, and our activities. All these things are, in a way, draining the life from nature. At its base, my tree is golden and full of vitality, but as it expands, its branches become twisted and darker, with life being lost, until its branches begin to get lost in the surrounding storm. The subject of lost life is darker than the inspiration for most of my paintings. In fact, I initially intended the tree to be lush and full of leaves and for the surroundings to be bright white. However, the more I thought about it, the more I began to gravitate toward a darker perspective, one more focused on black and white as opposed to color. The use of black and white, however, is not new to my paintings. The contrast of black and white fascinates me, somewhat because of an immune disorder called vitiligo. This condition causes parts of my skin to become depigmented, creating patches of milky white skin, which contrast against the normal brown tone of the rest of my body.
Color and contrast are not the only things that fascinate me, however. Artists such as Dan Dos Santos and Brian Viveros are a great influence on the style of my works. Both create traditional-media paintings: they use oils and paintbrushes as opposed to the computer software methods beginning to take the art world by storm. Dos Santos creates pictures of fantasy worlds and beings. His characters and their actions need not be naturally shaped or styled; he draws them all from his imagination. He inspires me to let my own imagination run free and to not be restrained by convention. Viveros’s works are different: he creates strong paintings of young, raven-haired women, each with a different expression symbolizing the strength they possess. Viveros’s paintings remind me that a woman need not be depicted as a symbol of frailty but can show strength and still remain beautiful. Using traditional media allows me to engage my emotions when I paint, which is why I prefer it to any other method. Still, emotional as the creation of each work might be, I created many sketches to prepare for this painting. Sheets full of female figures in different shapes, sizes, and proportions dominated my sketchbook during the past few weeks, and although it was a long process, it allowed me to arrive at my finished product. The figure I chose was the one that had the subtlest changes to the natural proportions of a woman. The elongation of the figure I chose is slight enough to not be very noticeable on first sight: the viewer is instantly aware that something is different but has a hard time describing what exactly is altered. This feeling of something being a little bit “off” is something that I love to see in artwork, because it keeps the viewer engaged, looking, thinking, and wondering.
“Two if by Sea”:
Modern Archaeological Research into the Arrival of the First Americans By Thomas Colvin The common theory of how people first came to the Americas is the Bering land bridge hypothesis. However, recent evidence has cast doubt on this theory, and many archaeologists now support a coastal migration theory instead. This research examines the merits of both theories.
Introduction The old phrase that this title borrows, made popular by Paul Revere’s ride, has a much older application to American history than scholars are used to seeing. Archaeologists believed for decades that the first Americans walked into North America from Alaska approximately 13,000 years ago through an ice-free corridor. The archaeological evidence seemed to support this belief, and the question of when and how people came to the Americas appeared settled. However, this theory could not account for the discovery in southern Chile of the oldest well-established archaeological site in the Americas, Monte Verde, and “the door [was] kicked fairly wide open to anything,” as the lead archaeologist at Monte Verde stated.1p1467
Drawing on many sources, this report presents the merits of the coastal migration theory in the context of recent research and shows why most archaeologists favor it as an accurate representation of when and how the first people entered the Americas (Figure 1). Ice-Free Corridor/“Clovis First” Main Premises Recent biological and linguistic evidence has established that the first Americans originated in Siberia in northwestern Asia.2,3 Until the past decade, the ice-free corridor/“Clovis first” theory was the best-known and
Figure 1: Left: proposed ice-free corridor migration. Right: proposed coastal migration. Note the differences in the glacial expanses in Canada. Both images courtesy of James E. Dixon (2000).
widely accepted explanation for when and how these Siberians entered the Americas.3 During the 20th century, archaeologists agreed that the Clovis culture, dated to approximately 11,500 years ago and found across North America, represented the oldest American occupation. The lack of valid pre-Clovis sites encouraged this theory. The ice-free corridor theory consists of four main premises3:
1. People entered the Americas approximately 13,000 years ago.
2. They did so by following large game from south-central Siberia into Alaska across the Bering land bridge.
3. These hunters then moved through an ice-free corridor that opened up approximately 13,000 years ago between the ice sheets that covered northern North America.
4. Once in North America, these hunters developed the Clovis tradition and rapidly spread over the human-free Americas.

Primary Flaws

Over the past decade, this theory has lost most of its sway in the archaeological community because archaeologists have discovered and validated many pre-Clovis sites in North and South America.4,5 The ice-free corridor theory offers no explanation for these sites. The establishment of pre-Clovis human occupation of the Americas is the primary reason why most no longer consider an overland migration to have been the route that the first Americans used.3 Some archaeologists who still subscribe to the ice-free corridor theory have tried to modify it, suggesting that people migrated through the ice-free corridor before it closed approximately 24,000 years ago.6 However, this hypothesis lacks support from the current research.

What Is Coastal Migration?

Main Premises

The theory of coastal migration states that, between 20,000 and 14,000 years ago, people migrated from the Bering land bridge in the north to Chile in the south.3 Because no ice-free corridor existed between 24,000 and 13,000 years ago, these people must have migrated down the Pacific coastline of the Americas. Although the ice-free corridor was iced over at this time, pockets of deglaciated land, known as refugia, existed along the southern Alaskan coastline as early as 17,000 years ago.3,7 Migrants probably used these refugia, such as the Port Eliza Cave region discussed later, as both a refuge during bad weather and a resupplying station.8 From such refugia, the migrants could more easily travel south of the ice sheets, settle along the coastline, and penetrate into the interior. Modern archaeological, geological, and biological excavations and experiments support these conclusions.

The coastal migration theory was first proposed in 1979.7 However, most archaeologists did not give coastal migration serious consideration until relatively recently. The wide acceptance of the Clovis-first model and the lack of valid pre-Clovis sites made considering alternative routes of entry seem unnecessary. Archaeologists dismissed the idea of a coastal migration for various reasons:
• The belief that glaciers covered the northern coastline
• The lack of evidence of sufficient watercraft among humans at this early date
• The notion that the first Americans, like the Clovis people, had to be big-game hunters who had not adapted to a coastal environment or a marine diet

During this time, evidence for coastal migration was scarce, stemming from the location of the migration. If the first Americans did migrate down the Pacific coast, the remains of most of their settlements would have been long submerged underwater. As the Pleistocene epoch (1.8 million–10,000 years ago) ended, the ice sheets melted, and the sea level rose by tens of feet. However, new archaeological finds in North and South America are revealing evidence of human occupations that predate the Clovis people.4,5,8,9 Other scientific fields, such as geology, ecology, and biology, support these archaeological finds.2,3,7 Together, the evidence indicates that—
• The northern coastline had deglaciated and developed a habitable environment at an early time;
• The first Americans had developed watercraft sufficient for traveling down the coastline; and
• They had adapted to a coastal lifestyle.
Because these recent discoveries all lend credence to the plausibility of a migration down the Pacific coast from the Bering land bridge to Chile, most archaeologists now support the coastal migration theory.

The Case for Coastal Migration

News from the Islands

Sanak Island and Port Eliza Cave

Some key places that archaeologists use for evidence of a
coastal migration are offshore islands. Sea levels were much lower 20,000–13,000 years ago, so these islands were larger and closer to the coastline during the time of migration. Archaeologists therefore look here primarily for evidence that the environment could support human life, as is the case with Sanak Island and Port Eliza Cave, sites located off the Pacific coasts of Alaska and Canada, respectively.

At Sanak Island, archaeologists, geologists, and palynologists (scientists who study pollens) teamed up to discover when glaciers first disappeared from the island and when the island developed an environment that could support human habitation. The scientists studied sediment core samples from three lakes on the island,7 concluding that Sanak Island deglaciated about 17,000 years ago and had developed a habitable environment by 16,300 years ago.7 Because Sanak Island lies along the proposed coastal migration route, these findings establish that people could have been migrating down the Pacific coast as much as two millennia before people inhabited Monte Verde, discussed later.

Archaeologists and geologists have also studied Port Eliza Cave on Vancouver Island, with similar results.8 In the archaeologists' opinion, their results "confirm the viability of the coastal migration hypothesis for this portion of the route."8(p1383)

Santarosae

The Channel Islands off the coast of California offer evidence of human settlement concurrent with Clovis.10 These islands formed from what was once a single island, known as Santarosae, whose low-lying shorelines were submerged by rising sea levels. From Santarosae sites (Daisy Cave and Arlington Springs), archaeologists have recovered enough evidence to show that people first occupied Santarosae between 13,000 and 11,500 years ago, the same time that Clovis culture formed on the mainland.10 The discoveries made in the Channel Islands reveal crucial information regarding the coastal migration theory. First, Santarosae was located 9–10 km off the California coast during the late Pleistocene, meaning that the early colonizers had watercraft sufficient to travel at least that far out to sea.10 Second, assemblages of animal remains show that the colonizers had adapted to a marine diet, a fact that contrasts sharply with the ice-free corridor theory.10

Kelp Highway Hypothesis

Recently, ecologists and archaeologists began reconstructing the ecology and geography of the late Pleistocene Pacific coastline. Overall, the reconstruction yields an environment more favorable for a migration than the inland ice-free corridor, with both marine and terrestrial resources, fresh water, and few geographical obstructions after 16,000–15,000 years ago.6 The most important feature of the ecological reconstruction, however, is a band of kelp fields that stretches almost the entire way around the Pacific coastline from Japan to the southern tip of South America.6 This "kelp highway" offers habitat to shellfish, fish, marine mammals, seabirds, and seaweed, all of which early coastal peoples used in sites such as the Channel Islands.6 Current archaeological data seem to confirm the hypothesis that coastal migrants valued the kelp highway, because "along the Pacific Coast of North America, some of the earliest archaeological sites are found in island or mainland coast settings adjacent to productive kelp forests."6(p171) Although ecologists and archaeologists cannot prove that the first Americans followed a coastal migration route into the Americas, the recovered data increase the plausibility of the coastal migration theory.6
Explorations | Fall 2013
Biological studies have revealed when the first Americans originated and when they entered the Americas.3 In 2004, Schurr described the results of experiments performed on mitochondrial DNA (mtDNA) strands of Native Americans past and present. mtDNA can be extracted from bone or blood, and scientists can use it to trace an individual's or a population's lineage back millennia.3 A more recent study analyzed some of the rarer mtDNA strands found in modern Native American populations.2 Both studies produced a window of 15,000–20,000 years ago for the first migration, lending support to a pre-Clovis coastal migration into the Americas. Because the ice-free corridor was not open during these early dates, the coastal migration theory appears to be the only explanation.3

Monte Verde: The Keystone of Coastal Migration

Monte Verde, located at the southern tip of South America, is one of the few pre-Clovis sites to have secure and validated dates, and it has become the most important site for proponents of the coastal migration theory. A peat bog that covered the site preserved many organic artifacts not usually found in archaeological excavations.9 Archaeologists have recovered stone, bone, and wood artifacts from Monte Verde, important when establishing the culture of the site.9 However, for dating Monte Verde, clusters of preserved algae revealed much more. The algae are important for two reasons. First, Monte Verde is located 90 km from the late Pleistocene shoreline. The presence of the algae at Monte Verde indicates either that the residents trekked regularly to the coast and back or that they had established trade networks with other groups of
people living along the coast.9 Second, unlike stone, bone, and wood, algae have a short, datable life span. Therefore, archaeologists can date their use at Monte Verde more accurately; the results indicate an age of approximately 14,000 years, meaning that people must have occupied Monte Verde at least that long ago.9 Data gathered from dating the stone, bone, and wood artifacts give even earlier dates, between 14,200 and 14,600 years ago. Although these dates do not completely match, they have important implications for how scholars approach the question of how and when the first Americans arrived, as do the existence and features of Monte Verde as a whole.9 Most important, the dates of Monte Verde are well before people are believed to have been able to migrate through the ice-free corridor, indicating that the first Americans must have come by some other means, probably a coastal migration.

Conclusions

On the basis of the evidence presented, I agree with most modern archaeologists that the coastal migration theory most accurately describes how and when the first Americans arrived. Evidence from many scientific fields points to the Pacific coast as the location of the first human migration into the Americas. However, the coastal migration theory is still a theory, and one major find could change all that we know, or think we know, about the first Americans.

References
1. Holden C. Were Spaniards among the first Americans? Science 1999;286:1467–1468.
2. Bodner M, Perego UA, Huber G, et al. Rapid coastal spread of first Americans: Novel insights from South America's Southern Cone mitochondrial genomes. Genome Research 2012;22:811–820. doi:10.1101/gr.131722.111.
3. Schurr TG. The peopling of the New World: Perspectives from molecular anthropology. Annual Review of Anthropology 2004;33:551–583.
4. Begley S, Miller S. The first Americans. Newsweek, Sept. 1, 1991.
5. Miotti LL. Patagonia: A paradox for building images of the first Americans during the Pleistocene/Holocene transition. Quaternary International 2003;109–110:147–173. doi:10.1016/S1040-6182(02)00210-0.
6. Erlandson JM, Graham MH, Bourque BJ, et al. The kelp highway hypothesis: Marine ecology, the coastal migration theory, and the peopling of the Americas. Journal of Island and Coastal Archaeology 2007;2:161–174.
7. Misarti N, Finney BP, Jordan JW, et al. Early retreat of the Alaska Peninsula glacier complex and the implications for coastal migrations of first Americans. Quaternary Science Reviews 2012;48:1–6. doi:10.1016/j.quascirev.2012.05.014.
8. Ward BC, Wilson MC, Nagorsen DW, et al. Port Eliza Cave: North American west coast interstadial environment and implications for human migrations. Quaternary Science Reviews 2003;22:1383–1388. doi:10.1016/S0277-3791(03)00092-1.
9. Dillehay TD, Ramírez C, Pino M, et al. Monte Verde: Seaweed, food, medicine, and the peopling of South America. Science 2008;320:784–786.
10. Erlandson JM, Rick TC, Braje TJ, et al. Paleoindian seafaring, maritime technologies, and coastal foraging on California's Channel Islands. Science 2011;331:1181–1185.
Honors and Undergraduate Research provides high-impact educational experiences and challenges motivated students in all academic disciplines to graduate from an enriched, demanding curriculum. The programs administered by the office bring together outstanding students and faculty to build a community of knowledge-producers, life-long learners, nationally recognized scholars, and world citizens. Through Honors and Undergraduate Research, students have access to honors courses, co-curricular enrichment activities, and research programs that can be customized to enhance their personal, professional, and intellectual development.

Honors and Undergraduate Research
4233 TAMU
College Station, TX 77843-4233
Tel. 979.845.1957
Fax 979.845.0300
Honors and Undergraduate Research challenges all motivated and high-achieving Texas A&M students to explore their world, expand their horizons, and excel academically. While some services of the office are exclusive to Honors Students, advisors are available to talk with any student who is interested in sampling the academic challenge of an Honors course, committing to an undergraduate research project, applying to the Honors Fellows Program, or engaging in the process of self-discovery entailed in preparation for national fellowships such as the Rhodes, Goldwater, or Truman Scholarships.
Honors and Undergraduate Research oversees the following programs and services:
• Honors Student Advising
• University Scholars Program
• University Studies - Honors Degree
• National Fellowship Advising
• Undergraduate Research Scholars Program
• Research Experience for Undergraduates
• Grant and Proposal Assistance
• Explorations, the Texas A&M Undergraduate Research Journal

http://hur.tamu.edu
Honors and Undergraduate Research joins the university community in making Texas A&M a welcoming environment for all individuals. We are committed to helping our students understand the cultures that set us apart and appreciate the values that bring us together. hur.tamu.edu
Mapping the Subsurface with Marbles and Wrapping Paper

The task was simple: create a laboratory exercise for middle school students centered on seismology, the study of seismic waves. The point of creating this exercise: to improve public education on the science behind the petroleum industry.
By Andrew DeCheck
Seismology is an important tool for discovering possible new drilling sites for oil and natural gas and for more accurately predicting the likelihood of natural disasters. Recent advances in technology allow computers to generate vivid 3-D models of these areas. An experiment involving marbles and wrapping paper demonstrates seismology to a group of middle school students, making it an effective outreach and teaching tool.

Geologists and petroleum engineers use seismology in complex techniques to map the subsurface: geologic structures underground that we cannot observe directly. Petroleum engineers use the mapped data to indicate the best areas to drill for oil and gas.1 Geologists also can use maps of the subsurface to locate desirable hard-rock minerals2 and predict earthquakes and other natural phenomena.3 In the most basic sense, mapping the subsurface involves pumping seismic waves into the ground and recording how long the sound takes to hit a target zone in the subsurface and return to the surface. Because the speed of sound depends on the medium in which it travels, the different travel times and the strength at which the waves return to the surface tell us about the composition of the subsurface.4 In the past, explosives were used to generate seismic waves, but today we use a machine called a vibroseis truck. This truck has a large pad on its bottom that is lowered to the ground and vibrated to produce seismic waves.4 The tools used to record the waves, called geophones, are arranged in various spatial patterns on the surface to maximize the number of sound waves recorded.4

However, this process is not always so simple. Complications arise with highly heterogeneous media,5 which can cause incoherent noise that drowns out the echoes we need to accurately image the subsurface.5 To compensate, geologists filter the data before imaging by using layer annihilators, which suppress the noise and thus improve the signal-to-noise ratio (SNR).5 These techniques are necessary to accurately image thousands of feet below the surface. The technology is now so advanced that we can create vivid 3-D maps of the subsurface (Figure 1).

The most important use for these techniques is to find oil and natural gas. Originally, people found oil by drilling at easy-to-see indicators of petroleum in the subsurface,4 such as places where oil seeps out of the ground. A large mound called an anticline (Figure 2) can also indicate oil underground. Over the past century, almost all these formations have been discovered. Today, we need seismic data to find the subtler productive areas to drill.
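The travel-time idea at the heart of this mapping can be sketched in a few lines. The velocity and echo time below are assumed round numbers for illustration, not survey data:

```python
# Reflection seismology's basic conversion: an echo's two-way travel time and
# the speed of sound in the rock give the depth of the reflecting layer.
def reflector_depth_m(two_way_time_s: float, velocity_m_per_s: float) -> float:
    # The wave travels down and back, so divide the round trip by two.
    return velocity_m_per_s * two_way_time_s / 2.0

# A 0.5 s echo through rock at an assumed 3000 m/s puts the reflector 750 m down.
print(reflector_depth_m(0.5, 3000.0))  # 750.0
```

In a real survey the velocity itself varies with the medium, which is exactly why the recorded travel times reveal the composition of the subsurface.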
Figure 1: Map of the subsurface.
A current area under exploration is West Africa, where geologists have created more than 30,000 square miles of 3-D subsurface maps within the past 10 years.6 Companies began searching for oil in this area in the 1950s, but current techniques have greatly increased the quality of their data. Many other areas of exploration are offshore. Instead of using the truck approach, we string lines of geophones behind boats and use a large air gun to send a seismic pulse to the ocean floor.7 The same filtering methods used to clean up seismic data on land are used in shallow water, but these do not work in deep water. Within the past 20 years, we developed a technique for deep water called surface-related multiple elimination (SRME). SRME works in complex cases where simple filtering methods do not.7 The development of SRME vastly improved detailed image quality in the Gulf of Mexico and elsewhere.
Another important use of seismology is to analyze and predict earthquakes and other natural phenomena.8 The familiar logarithmic Richter scale quantifies the intensity of subsurface waves leading to earthquakes; 1 represents the weakest waves and 10 the most severe.9 The method of recording earthquake-causing waves is the same as that for seismic mapping, except that the earth provides the vibrations for us. To actually predict earthquake locations, geologists must analyze the subsurface as they do in the search for oil. With a firm understanding of plate tectonic interactions and fault zones, geologists can use 3-D subsurface maps to warn people of imminent disasters. This field of study usually requires time-sensitive 3-D maps, sometimes called 4-D maps because time represents the extra (fourth) dimension.8 Such maps are harder to produce and often are not updated frequently enough to predict natural disasters such as earthquakes, tsunamis, and volcanic eruptions within a reasonable time frame and to a desirable probability. This obstacle leads to billions of dollars wasted during false-alarm evacuations and hundreds of thousands of deaths when a warning is issued too late.9 Without our current seismological techniques, we would have to waste billions of dollars drilling dry holes in the search for oil and gas.4 Now that we have such clear subsurface images, dry holes have become anomalies.
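Because the scale is logarithmic, each whole-number step corresponds to a tenfold change in recorded wave amplitude. A minimal sketch of that standard relationship (not tied to any data in this article):

```python
def amplitude_ratio(magnitude_a: float, magnitude_b: float) -> float:
    # On a logarithmic magnitude scale, the recorded wave amplitude grows
    # by a factor of 10 for each whole-number increase in magnitude.
    return 10.0 ** (magnitude_b - magnitude_a)

# A magnitude-7 event shakes a seismograph 100 times harder than a magnitude-5 one.
print(amplitude_ratio(5, 7))  # 100.0
```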
Geologists are still trying to perfect techniques for finding hard-rock minerals and hope soon to locate rare earth minerals in America to compete with China.10 You need only examine the 2010 earthquake in Haiti, the 2010 Eyjafjallajökull eruption in Iceland, or the 2004 tsunami in the Sumatra–Andaman Islands to realize the importance of perfecting seismological techniques for predicting natural disasters.9 Fortunately, because the seismological methods used in all these fields are related, advancement in one field probably means advancement in all the others.

Figure 2: An anticline.

I wanted to show an actual seismic acquisition to the middle school students, but quality 3-D seismic data typically cost $60,000 per square mile. Moreover, a typical seismic job covers 30 square miles, which would cost almost $2 million. Needing a cost-effective analogy, I developed the following concept: replace the vibroseis trucks with marbles and the sound paths with wrapping paper tubes. I would build a tall, opaque box with 16 holes in the top, arranged in a 4 × 4 square. From outside the box, the holes would all appear to lead to the same depth. However, I would cut each tube to a different length. Therefore, the inside of the box would remain a mystery. Just as we cannot see the subsurface when drilling for oil, we cannot see the inside of the box. However, with a few marbles we can find out everything we need to know.

Figure 3: Data from measuring each tube length.

The person performing the experiment drops marbles into each hole, one at a time. He or she uses a stopwatch, accurate to the millisecond, to precisely record the time from drop to clink. To minimize error, he or she drops five marbles into each hole and averages each set of times. By using a simple constant-acceleration formula from physics, depth = 0.5gt², he or she can convert drop times to depths.
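The box works like a miniature survey grid: each hole's drop time converts to a tube depth, and the 16 depths map the hidden interior the same way seismic travel times map the subsurface. A rough sketch, with invented drop times (not measurements from the actual box):

```python
G = 9.81  # gravitational acceleration, m/s^2

# Hypothetical averaged drop times (s), one per hole in the 4 x 4 grid.
drop_times = [
    [0.29, 0.35, 0.40, 0.32],
    [0.33, 0.45, 0.44, 0.36],
    [0.31, 0.43, 0.41, 0.30],
    [0.28, 0.34, 0.37, 0.27],
]

# Free fall converts each time to a tube depth: depth = 0.5 * g * t^2.
depth_grid = [[0.5 * G * t ** 2 for t in row] for row in drop_times]

for row in depth_grid:
    print(" ".join(f"{d:.2f}" for d in row))
```

The resulting grid of depths is exactly the kind of data a spreadsheet can turn into a 3-D surface of the box's interior.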
How do we know that the person has a quick enough reaction time to measure such short time discrepancies? I had a feeling the experiment would work accurately enough because of a game I played in 2008 called "Guitar Hero." High-definition TVs were beginning to sell in abundance around that time, and a slight lag occurred between audio and video on these new TVs. It didn't matter for watching movies or television. It didn't even matter for most video games. But for "Guitar Hero" it was crucial: to play the game, the audio had to precisely match the video. Thus, the game developers included an option to properly calibrate the lag. If this calibration was off by as much as 5 milliseconds, playing the game became very difficult. If we assume that humans are accurate to within 5 milliseconds, the theoretical maximum measured depth error is only 1% for a tube length of 1 meter. Using Microsoft Excel, I converted the depths into a 3-D surface representing the inside of the box. I then compared the calculated depths with the depths of each tube measured when I built the box. A few engineering students performed the experiment and arrived at a 4.8% error in time, corresponding
Figure 4: Engineer-measured depth showing a measured-depth average error of only 9.42%.

Figure 5: A similar surface from a middle school student, with a depth error of 12.5%.
to a 9.4% error in depth. I performed the experiment and found a 3.9% error in time and a 7.6% error in depth. After performing hundreds of labs as an engineering student, I have never consistently gotten data this accurate. The time-lapse video at http://www.youtube.com/watch?v=NdiR60uTVuY shows how to build the box and perform the experiment.
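The roughly 2:1 relationship between timing error and depth error falls out of the drop equation: because depth grows as t², a small relative error in time roughly doubles in the depth estimate. A sketch with made-up stopwatch readings (not the students' actual data):

```python
G = 9.81  # m/s^2

def depth_from_time(t: float) -> float:
    # depth = 0.5 * g * t^2, the constant-acceleration formula from the lab
    return 0.5 * G * t ** 2

# Five hypothetical readings (s) for a tube measured at 1.00 m when the box was built.
times = [0.449, 0.455, 0.446, 0.452, 0.450]
t_avg = sum(times) / len(times)

actual_depth = 1.00
true_time = (2 * actual_depth / G) ** 0.5  # the drop time a perfect stopwatch would record

time_error = abs(t_avg - true_time) / true_time * 100
depth_error = abs(depth_from_time(t_avg) - actual_depth) / actual_depth * 100

# The depth error comes out roughly twice the timing error.
print(f"time error {time_error:.2f}%, depth error {depth_error:.2f}%")
```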
I was ready to test the experiment on middle school students, the original target audience. They seemed excited. I introduced something new and got to show them something out of the ordinary. I told them that I am a petroleum engineering student at Texas A&M. I then gave a presentation about seismology and taught them about its significance to the petroleum industry. They seemed interested and asked many questions. When I asked for volunteers for my experiment, almost all of them raised their hands.

I was a little nervous because I really wanted the experiment to work as well as it did with my college-aged peers. The lab manual I gave the students outlined the entire procedure and highlighted important details, such as making sure to remove the marble after each drop. They were good at timing the drops and did well plugging the times into the drop equation. They did so well that they managed a 6.5% error in time, corresponding to a 12.5% error in depth (see Figures 3–5). The students executed the lab successfully, and the entire class was excited to learn about the impact of seismology on the oil and gas industry.

References
1. Paul PK. A methodology for incorporating geomechanically based fault damage zone models into reservoir simulation. Ph.D. diss., Stanford University, Stanford, Calif., December 2007.
2. Greenwood A, Urosevic M, Pevzner R. Feasibility of borehole reflection seismology for hard rock mineral exploration. SEG Technical Program Expanded Abstracts 2010, pp. 1794–1797. doi:10.1190/1.3513190.
3. Kerr RA. New work reinforces megaquake's harsh lessons in geoscience. Science 2011;332:911. doi:10.1126/science.332.6032.911.
4. Hyne NJ. Nontechnical Guide to Petroleum Geology, Exploration, Drilling, and Production. 2nd ed. Tulsa, Okla.: PennWell Books, 2001.
5. Gonzalez del Cueto F. Filtering random layering effects for imaging and velocity estimation. Ph.D. diss. 3362228, Rice University, Houston, 2008.
6. Greenhalgh J. Merged 3D surveys offshore West Africa identify new petroleum potential. World Oil 2011;232 (November 2011). http://www.worldoil.com/November-2011-Merged-3D-surveys-offshore-West-Africa-identify-new-petroleum-potential.html.
7. Liner C. What's new in exploration. World Oil June 2010.
8. Jia T. Advanced analysis of complex seismic waveforms to characterize the subsurface Earth structure. Ph.D. diss., Columbia University, New York, 2011. http://academiccommons.columbia.edu/download/fedora_content/download/ac:137452/CONTENT/Jia_columbia_0054D_10300.pdf.
9. Rosen S. The socioeconomic effects of earthquakes, volcanoes, and tsunamis. Master's thesis, Cooper Union for the Advancement of Science and Art, New York, May 2011.
10. Jiao X. Unsupervised target detection and classification for hyperspectral imagery. Ph.D. diss., University of Maryland, Baltimore County, Md., 2010.
Aristotle's Poetics as a Framework for Engineering Design

By Justin Montgomery

Currently, there are essentially two methods used in engineering design. One, known as human-centered design, focuses on artistic style and emotional appeal to the user. The other, traditional engineering design, focuses on the technical and functional workings of the design. Each approach would benefit from incorporating elements of the other. Here, both methods are examined in the light of the ancient Greek philosopher Aristotle's Poetics, which laid a foundation for understanding design.
Introduction

Human-centered design (HCD) is a design philosophy that has been key to the success of companies such as Apple that appeal to consumers on an emotional level. HCD helps designers incorporate users' values, emotions, and needs into products. This approach differs from more conventional modern engineering design methods, which are founded on a systematic approach in which designers follow an explicit process to define an artifact or system to meet a need. Traditional methods consider the user in defining requirements at early stages of design but do not go so far as to consider the user holistically.
HCD methods alone are not an adequate replacement for engineering design methods. Methods for HCD tend to focus on the human-centered parts of the design process without establishing clearly how they fit into a more comprehensive design framework. Consequently, they have limitations in addressing technical and nonhuman aspects of engineering design.

Engineering designers and the users of engineered products alike would benefit from a holistic approach to design that has the strengths of both established engineering design methods and HCD methods. However, one probably could not achieve such a goal by arbitrarily combining existing methods. The aim of this research was to clarify and strengthen the connections between engineering design and HCD by viewing each as a specialized realization of a general method for designing that was described in one of the earliest texts on design: Aristotle's Poetics.1 This method will be referred to as poiesis, the original Greek name that Aristotle gave it.

In the fourth century BCE, Aristotle wrote Poetics, in which he deconstructed tragedy and epic poetry to determine how they were and should be created.1 Although Poetics may at first seem irrelevant to the design of technical artifacts, a closer examination of the text, the original meanings of its words, and the context in which it was written reveals that the design principles Aristotle established apply beyond poetry. The word poetics derives from the Greek word poiesis, which means "making things" or the "science of production." (To avoid confusion with the text itself, I will use poiesis here to refer to the principles and theoretical framework for design laid out in the Poetics.)

The concept of art that poiesis addresses comes from the word techne, which describes the transaction between an intelligent being and the intelligible world and is related to our word technology. Tragedy and epic poetry served as Aristotle's example media of poiesis, and other authors have interpreted poiesis as applicable to other creative outlets, such as visual art and music.2 On this basis, this work from Aristotle could also be relevant to engineering design.

Since the times of Greek antiquity, and especially with the advent of the industrial revolution, technology and art split from techne into distinct areas. Technology and engineering aligned more closely with scientific knowledge. However, engineering design should ideally contain an awareness of both the technical and the empathic, humanistic aspects of an artifact. The Poetics has been recognized as having significant direct and indirect influence on our current ideas about the design of useful objects.3 The context in which Aristotle wrote the Poetics makes it highly relevant to the goal of uniting traditional engineering design methods and HCD methods.

What Is Poiesis?

Poiesis is not the final designed product but rather the art of creating it. Aristotle identified poiesis as being fundamentally concerned with mimesis, or imitation of action. This concept refers to the rationalist approach humans use when they imitate nature's creative forces. Poiesis can be identified by four constituents that define the context in which it takes place: matter, agent, goal, and form. The fundamental nature of poiesis is the same for all types of design and creation, but matter, agent, goal, and form will vary depending on the context to which they are applied (such as tragedy, music, or a consumer product). In all contexts, design should deliver to the audience pleasure, or emotional value, of an appropriate type for the function and context.

Aristotle recognized six elements of how poiesis takes place: plot, character, thought, diction, melody, and spectacle. The four constituents previously mentioned (matter, agent, goal, and form) are what poiesis is (its context), whereas these latter elements are the stages of how it happens. These elements progress from the most fundamental aspects of a design to the more superficial, less important ones. At the same time, they mirror the process by which a designer must think about the construction of the product. Table 1 describes these elements in the context of poetry and engineering design.

Table 1: Elements of poiesis and how they apply to engineering design

Plot (story of poem, beginning and end of it): purpose and context of design (contextual needs, functional model, solution-neutral problem statement)
Character (qualities that the poet instills in characters and poem): how to realize the purpose (concept generation, concept evaluation)
Thought (discursive thought about the poem and themes present in it): analysis of design (failure modes, material/manufacturing selection, cost)
Diction (words used for delivery of the poem to the audience): style, user value/emotional benefit
Melody (songs and artistic value in the poem): aesthetics and artistic value of design
Spectacle (nonsubtle evoking of emotion through something superficial, such as special effects): superficial adornment, or "skin," of design

Recognizing the connections between poiesis and the engineering design process has advantages. First, poiesis offers a framework that integrates aspects of HCD into the overall design process. Like HCD, poiesis is fundamentally concerned with the value that artifacts bring to the user. The basic idea of poiesis is the creation of something that delivers specific emotions to an audience. Also, recognizing that poiesis is applicable to engineering design will help to establish engineering design as just one realization of an overarching discipline of design. By linking engineering design to a general discipline of design, further design research can proceed as a more coordinated effort between disciplines such as architecture and management. Poiesis also offers an interesting theoretical framework for design, identifying the different elements, or stages, of design with their relative importance. It describes how these elements flow and the mental state required of designers at each. A critical analysis of current design methods in relation to poiesis may indicate weaknesses and draw attention to potential areas of improvement in engineering design methods and research.4

Analysis

We broke up these elements into 24 principles to describe the overall method of poiesis. Doing so allowed a semantic comparison of these principles of poiesis with current design methods.

To determine the similarity between poiesis and current design methods, we compared five representative methods with the 24 principles from poiesis. We sought to see which principles are used in each method. We did not necessarily expect any method to exhibit all the principles but were interested in which principles were present in the methods. We chose three more traditional and widely cited engineering design methods: Pahl and Beitz,5 Ulrich and Eppinger,6 and Cross.7 We also selected two HCD methods: Jordan,8 which is widely cited, and Boatwright and Cagan,9 a more recent text. We expected that HCD methods would contain principles of poiesis that dealt with delivering emotional value to an audience.
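The binary presence/absence comparison can be pictured as a small set computation. The principle labels and presence sets below are invented placeholders for illustration, not the study's actual coding:

```python
# 24 generic labels standing in for the principles derived from poiesis
principles = {f"P{i}" for i in range(1, 25)}

# Hypothetical presence sets for two surveyed methods (illustrative only)
methods = {
    "traditional method": {"P1", "P2", "P3", "P5", "P8", "P9", "P11", "P13"},
    "HCD method": {"P14", "P15", "P16", "P18", "P20", "P21", "P22", "P23", "P24"},
}

# Binary scoring: count how many of the 24 principles each method exhibits
for name, present in methods.items():
    overlap = present & principles
    print(f"{name}: {len(overlap)} of {len(principles)} principles")
```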
Conclusions

Comparing traditional and HCD methods in the context of poiesis revealed interesting trends. The 24 principles found in poiesis were largely accounted for but present to various degrees across the five selected methods, indicating that the ideas of poiesis are present in current engineering design methods. Because poiesis is a general approach to design, its principles are likewise generalized and apply to various subdisciplines of design such as architecture, poetry, and engineering design. No one design method examined agreed completely with the ideas of poiesis. Each method examined had between eight and 17 principles, of the 24 surveyed, in common
with poiesis. The traditional methods tended to have more similarity at the earlier stages, whereas the HCD methods had more similarity at the later stages. Instead of arbitrarily combining HCD and traditional methods, one can use poiesis as a framework to fit the methods into the overall design process. Furthermore, one can construct a hybrid method, based on degree of similarity, that better aligns with poiesis by drawing from several different methods.

The results of the semantic analysis also yield a better understanding of the limitations and issues of traditional and HCD methods and how they relate. The HCD methods have emerged more recently owing to the inability of traditional methods to handle certain elements of a design, such as the artistic value, or style, and how emotional values are communicated to the user. These are the later stages in poiesis. The HCD methods have not, however, been readily adopted as superseding design methods because they do not incorporate the stages of the design process that deal with the artifact's technical and functional side, the earlier stages of poiesis. They also do not readily indicate how they fit with traditional methods. This is a significant observation because some have claimed that HCD methods should have supremacy over traditional design approaches, with HCD as the starting point and not a later stage of refinement.10 Although Aristotle suggests starting with an understanding of the user, delivery of emotional benefits is not addressed until later in poiesis, once the design has been more established. This assertion of supremacy may have hindered the adoption of HCD because it does not account for the reality of how design must proceed. HCD methods generally start with this understanding of the user and then skip to later stages of communication of emotions, without addressing how this point was reached.
The two HCD methods examined are not more important or fundamental according to Aristotle, because they are weak in the most important and fundamental early stages of design. They are useful in addressing the later, more user-centric stages, which in a competitive design environment can give products an important edge. However, focusing on the HCD methods to the neglect of methods that address earlier stages is detrimental, according to poiesis. To receive the benefits of both approaches, designers should draw from both types of methods where appropriate. Determining how to combine traditional and HCD methods can be difficult, and doing so haphazardly risks omitting important considerations in the design process. Our research shows how these two types of methods can be combined more systematically, using poiesis as a foundational design framework.
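The binary presence/absence comparison used in this analysis can be sketched in a few lines of code. Everything below is a hypothetical illustration: the principle numbering and the two method profiles are invented placeholders, not the five methods actually surveyed; only the scoring logic mirrors the approach of checking which of the 24 principles each method contains and how a hybrid might improve coverage.

```python
# Hedged sketch of a binary presence/absence comparison.
# Principle IDs 1..24 stand in for the 24 principles of poiesis;
# the two method profiles below are invented placeholders.

POIESIS_PRINCIPLES = set(range(1, 25))

method_profiles = {
    "traditional_A": {1, 2, 3, 4, 5, 6, 7, 9, 11, 12},  # hypothetical
    "hcd_B": {1, 14, 15, 16, 17, 18, 19, 20, 22},       # hypothetical
}

def overlap_with_poiesis(principles_present):
    """How many of the 24 poiesis principles a method contains."""
    return len(principles_present & POIESIS_PRINCIPLES)

def hybrid_profile(profiles):
    """Union of several methods' principles: coverage of a naive hybrid."""
    combined = set()
    for principles in profiles:
        combined |= principles
    return combined

coverage = {name: overlap_with_poiesis(p) for name, p in method_profiles.items()}
hybrid_coverage = overlap_with_poiesis(hybrid_profile(method_profiles.values()))
```

With real profiles, a stage-aware weighting (earlier elements mattering more, following Aristotle's ordering) could replace the flat union when assembling a hybrid method.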
Identifying these elements of poiesis as relevant to engineering design should encourage further incorporation and coordination of research in other design disciplines (such as architecture) with engineering design. For instance, the analysis we carried out to identify principles of poiesis in methods could also be carried out with design methods from other disciplines. This would indicate which elements have strong methods and techniques that can be brought over to engineering design to strengthen the practice.

Future work also should evaluate the effectiveness of various methods at realizing a particular principle from poiesis. We took a binary approach to examining the presence of different principles: we determined whether a principle was present or absent from a design method but did not attempt to measure or evaluate how well a method achieves a certain principle. Also, although poiesis instructs about the relative importance of the elements, within each element the principles are viewed as equal; we assumed that the principles within the same element had equal importance. This makes it difficult to determine the criticality of leaving out a principle. Future research could evaluate the effectiveness of methods at different principles and the criticality of individual principles.

We have taken an important first step toward understanding how poiesis can be related to engineering design and how it can be a useful basis for a comprehensive method that incorporates traditional and HCD methods. Comparing other engineering design methods to poiesis in the same manner would be worthwhile and could lead to more and better options for hybrid methods. One of the most important steps would be to more clearly define a hybrid method, such as the one proposed in our research, and validate its effectiveness.

References
1. Aristotle. Aristotle’s Poetics. New York: Hill and Wang, 1961.
2. Halliwell S. “Aristotle’s aesthetics 1: Art and its pleasure,” in Aristotle’s Poetics. Chicago: University of Chicago Press, 1998.
3. Buchanan R, Margolin V. Discovering Design: Explorations in Design Studies. Chicago: University of Chicago Press, 1995.
4. Buchanan R. “Wicked Problems in Design Thinking.” Design Issues 1992;8:5–21.
5. Pahl G, Beitz W. Engineering Design. London: Springer, 1984.
6. Ulrich KT, Eppinger SD. Product Design and Development. New York: McGraw-Hill, 1995.
7. Cross N. Engineering Design Methods: Strategies for Product Design. 3rd ed. Chichester, UK: Wiley, 2000.
8. Jordan PW. Designing Pleasurable Products. London: Taylor and Francis, 2000.
9. Boatwright P, Cagan J. Built to Love. San Francisco: Berrett-Koehler, 2010.
10. Norman D. “Emotion & design: Attractive things work better.” Interactions 2002;9:36–42.

Fall 2013 | Explorations
Smart Materials for Aneurysm Treatment By Jason Szafron
An intracranial saccular aneurysm (ISA) is a balloon-shaped protrusion that forms in the blood vessels of the head, a condition affecting roughly 3 million–6 million people across the United States.1 The force created by blood flow moving past weak places in vessel walls causes the wall to bow outward from the interior of the vessel, and eventually a rounded bulge forms in the side of the vessel wall. As time passes and the aneurysm grows, abnormal blood flow inside the aneurysm can lead to sudden rupture. With the rupture of an ISA comes the risk of hemorrhagic stroke, and though the potential for such an event is small, up to half of people so affected die, with survivors often physically impaired. Exact causes of an ISA are unclear; however, research has identified several associated risk factors, including hypertension, cigarette smoking, and being female.2

Background
The most common, least invasive way to treat aneurysms is to induce blood clot formation that prevents the aneurysm from rupturing. This is accomplished by using specific foams that retain a shape memory after being heated and deformed to a new geometry. Specific formulations of these foams are tested for how reliably they change into a particular shape, as the foam is a deformable and malleable medium.
Treatment for ISAs typically involves sealing them off from the rest of the circulatory system, which prevents blood flow into the aneurysm body and reduces likelihood of rupture. In the past, the only option for treatment was brain surgery involving removal of part of the skull to clip the neck of the aneurysm; however, the many risks associated with surgery of this magnitude make this treatment course dangerously unfavorable. Recently, a less invasive endovascular approach has been used as a substitute. During an endovascular aneurysm procedure, a medical device is transported to the location of the aneurysm via a catheter inside the patient’s blood vessels. This procedure requires only a small incision into the leg to access a vein or artery, which offers
a highway to any area of the body. The most commonly used U.S. Food and Drug Administration–approved endovascular treatment of aneurysms involves inserting metallic coils into the aneurysm body to slow blood flow within the aneurysm.3 This stagnation of blood flow tends to engage the body’s natural clotting factors, causing a blood clot to form within the aneurysm. Though a negative stigma is attached to blood clots, these form only at the metallic coils within the aneurysm. Thus, the aneurysm body is filled with a blood clot, isolating it from the rest of the vessel and preventing future complications. These metallic coils are called embolic coils because they induce clot formation. Despite the advantages of this treatment, such coils have several limitations, including the compaction of coils over time and the inability to fill large aneurysms with coils. Both issues lead to residual flow into the aneurysm due to formation of smaller clots. Also, the tendency of the embolic coils to protrude from the aneurysm into the attached blood vessel could cause clot formation that blocks vital blood flow to the brain.

Other Options

To address these shortcomings, my lab group is developing a new endovascular treatment for aneurysms. Our approach involves inserting shape memory polymer (SMP) foams into the aneurysm body instead of embolic coils.4 SMP foams are constructed by chemically bonding various types of polymers (plastics) together to obtain a set of properties suitable for an application. Different polymer chains can be combined in different quantities to optimize desired properties. The SMP foams are termed “shape memory” because they
Figure 1: Expansion of a crimped SMP foam cylinder as it is placed into a water bath heated above its transition temperature. (Used with permission of John Wiley and Sons, J. Polym. Sci. B Polym. Phys. 2012;50:724–737.)
can be heated above a certain transition temperature (Tg), deformed to a new shape, and then cooled below their Tg while retaining the new shape. Subsequent heating above Tg returns the material to its original shape, as if the material had memorized that shape. By this procedure, a sample of SMP foam can be crimped to a diameter small enough to be delivered by catheter to the site of an aneurysm and then be heated above its Tg to expand and fill the aneurysm. Figure 1 shows a crimped SMP foam expanding.

To avoid underfilling in large aneurysms, where embolic coils often fail, SMP foam samples can be oversized up to 1.5 times the diameter of the aneurysm to be filled while still outputting less than 10% of the force needed to cause the aneurysm to burst.5 Together, these factors indicate that the SMP foam treatment may eliminate several prominent issues in embolic coil delivery. Figure 2 shows the procedure for embolic coil treatment and the suggested procedure for the SMP foam treatment.

Figure 2: (A) Catheter being moved into neck of aneurysm. (B) Deployment of SMP foam coils through catheter into aneurysm body. Stage 1 shows deployment of device into aneurysm body through an intra-arterial catheter. Stage 2 illustrates expansion of foam to fill aneurysm, followed by retraction of delivery wire and catheter from patient’s blood vessels. Stage 3 depicts desired long-term outcome, in which aneurysm is isolated from the rest of the vasculature with minimal chance of rupture, and the damaged vessel wall across the neck of the aneurysm has begun healing. (C) Final filled aneurysm after all coils have been deployed. (Used with permission of Johnson Matthey Plc., Platinum Metals Rev. 2011;55:98–107.)

The real test of an alternative SMP foam treatment method is how well the SMP foams induce clot formation in the aneurysm. Our experiment sought to determine a key set of material properties that will allow us to evaluate how much the shape of our SMP foams will reduce blood flow within the treated aneurysm. Examining the decrease of fluid flow will allow us to predict the rate and amount of clot formation that will occur in the aneurysm. More clot formation at a faster rate is usually desirable because healing can occur more rapidly once an aneurysm is isolated from its adjoining blood vessel.

Methods

SMP foams and mock embolic coils can be modeled as materials that contain small pores. Such porous media appear in nearly all aspects of life: cotton clothes, the filter in a coffee maker, the sponge used to wash dishes. The material properties that determine fluid flow through porous media are permeability and form factor, which appear as coefficients in an equation known as the Forchheimer–Hazen–Dupuit–Darcy (FHDD) equation. Using the FHDD equation, we related these coefficients to the pressure drop per unit length across a material, also known as the pressure gradient, and to the velocity of flow, quantities readily measurable when subjecting porous media to fluid flow in a closed system.

We created a closed-flow loop by setting up interconnected tubes with a motor and a pump that provided steady fluid motion. A fluid reservoir offered us a location to manually record, using a graduated cylinder and a stopwatch, the volume passing through the system in a given time, from which we could calculate the velocity of the fluid for use in the FHDD equation. We converted from flow passing through the system to velocity by dividing the flow volume per unit time by the cross-sectional area through which the flow was passing.

To compare, we determined the same two material properties for both SMP foams and copper coils imitating the geometry of embolic coils. We used these mock embolic coils as a standard to decide whether the material properties of the SMP foam could be considered favorable enough to serve as a suitable treatment. To switch out the sample being measured and allow measurement of the pressure gradient, we constructed plastic chambers for the SMP foams and mock embolic coils. Inlets for connection of pressure transducers (pressure measurement devices that output a voltage corresponding to pressure) were located on each chamber to calculate the pressure gradient. We took pressure transducer voltage readings by using a data acquisition system and then fed these data into a Matlab program for further manipulation. We calculated the pressure gradient in the Matlab program by subtracting the pressure reading downstream of the sample from the pressure reading upstream of the sample and then dividing by the distance between the two pressure transducers. We tested each sample at nine different flow rates, calculating the values for velocity and pressure gradient at each flow rate. Then, we implemented a least-squares fit (a method for fitting a curve to a set of data points) to the data of pressure gradient versus velocity to calculate permeability and form factor values.

Because we could control the pore size of the foams, the SMP foam samples that we used for testing had different pore sizes, which lead to different material properties; a relationship exists between material geometry and the factors we were trying to calculate. Identifying the material properties of different-sized pores allows future optimization of the foam geometry to offer the most favorable fluid dynamic conditions for any new application of SMP foam technology. Therefore, we used a large-pore and a small-pore sample. Also, because different-sized aneurysms can be packed with embolic coils to various degrees depending on the volume of the aneurysm and ease of access, we tested the mock embolic coils at four different packing densities to give a range of effective data. This approach offered us insight into how densely the coils would need to be packed to have a certain effect on fluid flow in a treated aneurysm.

Results and Conclusions

For our SMP foams to reduce more flow and cause more blood clot formation, the permeability and form factor values for the SMP foams needed to be, respectively, lower and higher than those of the mock embolic coils. Our testing revealed that the SMP foams had permeability values 10 times lower than the mock embolic coils for all clinically relevant cases. Form factors were 1,000 times higher for the SMP foams than for the mock embolic coils in all cases. These findings suggest that, compared with embolic coils, the SMP foam geometry produces a better environment for forming a blood clot. Furthermore, this environment could offer substantially better filling of an aneurysm body, giving an SMP foam device the edge in successful treatment.

The larger-pore foams had higher permeability values and lower form factors than foams with smaller pores. From these data, we concluded that the smaller-pore foams have material properties that would produce the most flow stagnation in aneurysm treatment. Therefore, smaller-pore foams will probably induce the most clot formation and provide the best filling of an aneurysm. The permeability values of the mock embolic coils also decreased as packing density increased, and their form factors increased as packing density increased. This finding suggests that in large aneurysms, where high packing densities cannot be achieved, the amount of residual flow is high, and with high residual flow, less clot formation is facilitated. A lack of clot formation could then mean increased future risk of aneurysm rupture, indicating that in such cases, SMP foam treatment may succeed where embolic coils would fail.

Applications

Though the implications of these data for using SMP foams as an aneurysm treatment option are important, the scope of our project can be considered on a wider basis. By using the data obtained for permeability and form factor for different-pore-sized SMP foams, we will be able to customize the material properties of an SMP foam for a specific application. For example, in applications where blood flow needs to be reestablished in a blocked artery, the flow reduction properties of our aneurysm treatment foam are undesirable. Using SMP foam on a device meant to treat such a condition would then mean creating foam with a particularly high permeability and low form factor, which, from the results of this study, could be accomplished with a very-large-pore foam. The potential uses of these alterable material properties are nearly endless. In fact, the ability to modify the material properties of SMP foams, in conjunction with the ability to deliver them noninvasively by catheter, would make them powerful tools for inclusion in many medical devices.

Acknowledgments

I thank Andrea Muschenborn for mentorship. I also thank Dr. Duncan Maitland for assistance with editing and general support. The National Institutes of Health/National Institute of Biomedical Imaging and Bioengineering funded this work (grant no. R01EB000462).

References
1. Burns JD, Huston J, Layton KF, et al. Intracranial aneurysm enlargement on serial magnetic resonance angiography: frequency and risk factors. Stroke 2009;40:406–411.
2. Juvela S, Poussa K, Porras M. Factors affecting formation and growth of intracranial aneurysms: A long-term follow-up study. Stroke 2001;32:485–491.
3. Brilstra EH, Rinkel GJ, van der Graaf Y, et al. Treatment of intracranial aneurysms by embolization with coils. Stroke 1999;30:470–476.
4. Maitland DJ, Small W 4th, Ortega JM, et al. Prototype laser-activated shape memory polymer foam device for embolic treatment of aneurysms. Journal of Biomedical Optics 2007;12(3):030504. doi:10.1117/1.2743983.
5. Hwang W, Volk BL, Akberali F, et al. Estimation of aneurysm wall stresses created by treatment with a shape memory polymer foam device. Biomechanics and Modeling in Mechanobiology 2012;11:715–729.
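The least-squares step described in the Methods can be sketched in code. A common form of the FHDD relation (an assumption here; the article does not print the equation) is dP/L = (mu/K)v + rho·C·v², with permeability K and form factor C. The snippet below is an illustrative sketch, not the lab's actual Matlab routine; the fluid properties are assumed water-like, and the "measured" data are synthetic, generated from known K and C so the fit can be checked.

```python
# Hedged sketch: recover permeability K and form factor C from
# pressure-gradient vs. velocity data via least squares, assuming
# the FHDD form  dP_per_L = (mu/K)*v + rho*C*v**2.
# Fluid properties and data are invented for illustration.

mu = 1.0e-3   # dynamic viscosity, Pa*s (water-like; assumed)
rho = 1.0e3   # density, kg/m^3 (assumed)

def fit_fhdd(velocities, gradients):
    """Least-squares fit of dP/L = a*v + b*v^2 via the normal equations,
    then K = mu/a and C = b/rho."""
    s_v2 = sum(v**2 for v in velocities)
    s_v3 = sum(v**3 for v in velocities)
    s_v4 = sum(v**4 for v in velocities)
    s_gv = sum(g * v for g, v in zip(gradients, velocities))
    s_gv2 = sum(g * v**2 for g, v in zip(gradients, velocities))
    # Solve the 2x2 system [s_v2 s_v3; s_v3 s_v4] [a; b] = [s_gv; s_gv2].
    det = s_v2 * s_v4 - s_v3**2
    a = (s_gv * s_v4 - s_gv2 * s_v3) / det
    b = (s_v2 * s_gv2 - s_v3 * s_gv) / det
    return mu / a, b / rho  # (permeability K, form factor C)

# Synthetic data from known K and C, mimicking nine flow rates.
K_true, C_true = 2.0e-9, 5.0e4
vs = [0.01 * i for i in range(1, 10)]  # nine velocities, m/s
gs = [(mu / K_true) * v + rho * C_true * v**2 for v in vs]

K_fit, C_fit = fit_fhdd(vs, gs)  # should recover K_true and C_true
```

Lower fitted K and higher fitted C correspond to the greater flow resistance the article reports for the SMP foams relative to the mock coils.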
Genetic Factors Associated with Coat Color and Health in White Tigers
By Sara Carney
White tigers are a dwindling form of natural diversity in tigers. Nevertheless, an incorrect stigma holds that white tigers must be inbred to be bred at all, which would decrease overall fitness. This research examines the genes that cause white pigmentation in the tiger, as opposed to the normal orange, and the effect that the white tiger has on the diversity and fitness of the species.

Introduction

For most of us, the closest encounter we have with tigers is in a zoo or sanctuary. These educational centers allow us to develop a deep respect for these powerful cats, yet this gives us only a snapshot of the complexity and diversity of tigers. One such example of tiger diversity is the white tiger. White tigers have become increasingly popular within captive breeding programs. Despite this popularity, controversy has surrounded the breeding of white tigers. Critics argue against inbreeding tigers to achieve the white coat color, which can lead to health issues such as cleft palates, crossed eyes, and neurological defects. Furthermore, some believe that the white tiger has no place in conservation efforts and should no longer be bred. The Association of Zoos and Aquariums has implemented this opinion in its species survival strategy for tigers, meaning that any institution seeking accreditation from the association must pledge not to breed white tigers. Though this case against white tigers may appear to be in their best interest, it ignores the fact that inbreeding is not necessary to breed white tigers and that their white coat represents a natural and valuable component of diversity within the species. White tigers were reportedly present among the wild populations of India, many of which were exhaustively hunted.
The first white tiger was brought into captivity in 1951 in Rewa, India (in present-day Madhya Pradesh), and was named Mohan.1 Although Mohan was first bred to an orange tiger in hope of perpetuating
the white coat, no offspring were white. Mohan was then bred to one of his daughters from the first mating, which produced both orange and white tigers.1 These early captive breeding attempts used inbreeding to preserve the white coat color. It was quickly discovered that the trait is inherited in a Mendelian recessive fashion—that is, each parent must carry a copy of the allele for offspring to be able to inherit it; a cub of two carriers then has a 25% chance of inheriting the trait. To prevent inbreeding depression (the decrease in fitness due to inbreeding), white tigers have since been bred to unrelated orange tigers. Though many people feel that white tigers do not belong as part of tiger conservation, others recognize the value of preserving tiger diversity. The white color originated in the wild and may be a valuable characteristic within the tiger population in the future. From a conservation point of view, sustaining all natural variants of a species is important. Today only about 3,000 tigers are left in the wild—far fewer than their historic population estimate of 100,000.2 Three of the original eight subspecies have gone extinct.3 Furthermore, the tiger’s habitat is becoming increasingly fragmented,2 making it difficult for wild tigers to maintain healthy levels of genetic diversity. Consequently, rare characteristics such as the white coat are easily lost. This is the result of genetic drift, in which gene frequencies
Photo by Amanda Flores
change within a population over time because of a variety of factors. To conserve the entire species, it is crucial to preserve all the genetic diversity that we can before it is lost forever. This research project seeks to determine what genes cause the white coat phenotype in tigers, explore the possible genetic contribution to health disorders, and determine white tigers’ level of genetic difference from their orange counterparts. With this information, a management plan can be implemented that selects for healthy, genetically sound tigers. By knowing what gene(s) cause the white coat, breeders could test tigers for the presence of the allele. Doing so would make it easier for breeders to breed tigers responsibly rather than resorting to inbreeding to preserve the trait.

Candidate Genes for White Pigmentation

The white tiger’s coat color results from a mutation affecting the melanocytes, the pigment-producing cells, that impairs the production of pheomelanin, the pigment that normally leads to the orange coloration more commonly seen in tigers.4 This mutation
causes the normally orange segments of the coat to lack pigment and appear white. However, a different pigment, known as eumelanin, causes the dark stripes.4 Because the mutation impairs pheomelanin production but does not affect eumelanin production, the dark stripes remain.
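The Mendelian recessive inheritance described earlier (each parent must carry the allele, and a cub of two carriers has a 25% chance of a white coat) can be illustrated with a small Punnett-square sketch. This is a generic genetics illustration, not part of the study's analysis; the allele labels are hypothetical.

```python
# Punnett-square sketch of recessive inheritance: W = orange (dominant),
# w = white (recessive). Two carrier (Ww) parents are crossed; a cub is
# white only if it inherits w from both parents.

from itertools import product

def offspring_genotypes(parent1, parent2):
    """All equally likely allele combinations from two parents."""
    return [a1 + a2 for a1, a2 in product(parent1, parent2)]

cubs = offspring_genotypes("Ww", "Ww")        # WW, Ww, wW, ww
white_fraction = cubs.count("ww") / len(cubs)  # 1 in 4
```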
When selecting candidate genes for this project, we reviewed cases in which mutations were present and similar to those found in the white tiger. On the basis of this research, we determined that the agouti signaling protein (ASIP) and melanocortin 1 receptor (MC1R) genes were the most likely candidates. These two genes work antagonistically in producing melanin. Loss of function of MC1R or gain of function of ASIP could reduce pigment.4 Kermode bears, or “spirit bears,” Ursus americanus kermodei, are a type of American black bear with a recessive mutation at MC1R that makes them appear white.5 Like the tiger, these bears are not albinos. Mutations of MC1R occur in several feline species as well. Melanism, increased dark pigmentation of the skin and fur, is fairly common in jaguars and jaguarundis and is caused by a mutation in MC1R.6 ASIP also affects coat color in mammals. Black horses have a recessive deletion within ASIP that causes them to be melanistic.7

Genetic Diversity
The main concern associated with inbreeding is loss of genetic diversity and the combination of recessive deleterious alleles. When genetic diversity is lost and deleterious alleles become more common, they can easily combine in the same individual, leading to a disease phenotype (observable trait). However, the presence of other alleles in the population can mask the harmful alleles. One way to quantify the level of genetic diversity within a population is to calculate heterozygosity—the proportion of heterozygotes, individuals carrying two different alleles, within the population. To make this calculation, we examined segments of DNA, known as microsatellites, that vary between individuals. Microsatellites occur within the studied segments of DNA and consist of di-, tri-, or tetranucleotide repeats, for example, ATATATATAT. By looking at the alleles in microsatellites, we can estimate whether an individual is homozygous or heterozygous at that area. When multiple individuals are analyzed at many microsatellites, we can then estimate the overall heterozygosity of the individual and group.

Determining genetic diversity begins in the same way as sequencing genes: primers must be designed, and then DNA is amplified with PCR and finally genotyped with the ABI 3730 DNA analyzer. These data are interpreted with the GeneMapper software. The primers used contain a fluorescently labeled dye, allowing us to view the amplified microsatellite in the software and to estimate its length in base pairs. We then viewed each sample individually and determined it to be either heterozygous, possessing two different alleles, or homozygous, possessing two copies of the same allele. We determined the heterozygosity at each DNA segment of interest, or locus, for both white and orange tigers by using the allele frequencies within the studied samples.

Results and Discussion

We sequenced the candidate genes in both white and orange tigers and compared the resulting sequences to determine whether the genes are responsible for the white coat color. To sequence these genes, we first designed primers. These primers are essential for the polymerase chain reaction (PCR), the method we used to make many copies of (amplify) our target DNA segment, so that we may more easily study it. In this reaction, primers aid in the synthesis of new DNA strands by selecting the appropriate segment for copying. Once PCR was complete, we cleaned out extra products formed during the reaction, leaving only the amplified DNA. We sequenced this final product by using the ABI 3730 DNA analyzer, and then we interpreted the results with Sequencher 4.7 software.

In our study of MC1R, we first compared the tiger to the domestic cat, Felis catus, finding 11 nucleotide substitutions between these two species. We also included a sample from a leopard, Panthera pardus, to compare with the tiger. The leopard had a six-nucleotide insertion (Figure 1) within MC1R that was not present in the tigers, as well as two nucleotide substitutions occurring before this insertion. These findings reflect the divergence of these species. The leopard and tiger are both members of the genus Panthera, meaning that these two species are more closely related to each other than to the cat, in the genus Felis.

Figure 1: Sequences from the leopard (top) and tiger (bottom), highlighting the six-nucleotide insertion in the leopard.

We found no substantial difference between white tigers and orange tigers within the candidate genes. A causal mutation has not yet been found in the portions of ASIP examined or in MC1R. Because the mutation has yet to be discovered, we will continue our research by finishing the sequencing of ASIP as well as other candidate genes.

We analyzed 12 microsatellites to determine heterozygosity (Table 1). The microsatellites showed a heterozygosity of 0.761 in white tigers and 0.772 in orange tigers. Thus far we have observed that the heterozygosity exhibited by white tigers is slightly less than that of their orange counterparts; however, the difference is not statistically significant. This finding suggests that the white tigers in this study come from lineages that have been outbred to orange tigers to increase genetic diversity.

Conclusion

Thus far we have determined that the segments we have analyzed of our candidate genes, ASIP and MC1R, are not responsible for the white tiger’s coat. However, we still need to sequence one additional exon in ASIP, and we will continue to seek the origin of this trait within this gene. If we cannot find a causal allele, we will begin assessing additional candidate genes. Through our microsatellite data we have determined that heterozygosity among white tigers is comparable to that of orange tigers. It appears that after the initial breeding of white tigers, they were outbred to other tiger lineages, increasing their diversity. To better understand these findings, we will continue examining genetic diversity at other genomic regions and include more individuals. Increasing our sample size will allow better inferences about the tiger population as a whole and the history of the white coat color. Once we have adequate data, we will be able to help establish a genetically based breeding program that maintains the genetic diversity of tigers. Through our research, we intend to promote the health of tigers as individuals and as a species.

Acknowledgments

I thank the zoos, sanctuaries, and private owners who generously provided the samples that made this project possible. This project was funded by grants from T.I.G.E.R.S. and the Texas A&M Summer Program for Undergraduate Research (principal investigator, Dr. Jan E. Janecka). I also thank members of the Chowdhary lab for their assistance. I thank my friend Emilee Larkin, the first student associated with this project, for sparking my interest in the subject. Most important, I thank my adviser, Dr. Jan Janecka, for giving me the physical and mental tools necessary to complete this project and for making my first attempt at research enjoyable and enriching.
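The heterozygosity calculation described in this article can be sketched as follows. The genotypes below are invented placeholders; the formulas (observed heterozygosity as the fraction of heterozygous individuals; expected heterozygosity as 1 − Σp² over allele frequencies p; fixation index F = 1 − Ho/He, as in the Table 1 footnote) are standard population-genetics definitions.

```python
# Sketch of per-locus heterozygosity from microsatellite genotypes.
# Each genotype is a pair of allele lengths (in base pairs) at one locus;
# the values are hypothetical.

from collections import Counter

genotypes = [(120, 124), (120, 120), (124, 128), (120, 124)]

def observed_heterozygosity(genotypes):
    """Fraction of individuals carrying two different alleles."""
    return sum(a != b for a, b in genotypes) / len(genotypes)

def expected_heterozygosity(genotypes):
    """He = 1 - sum(p_i^2) over allele frequencies p_i."""
    alleles = [a for pair in genotypes for a in pair]
    n = len(alleles)
    return 1.0 - sum((count / n) ** 2 for count in Counter(alleles).values())

ho = observed_heterozygosity(genotypes)
he = expected_heterozygosity(genotypes)
f = 1.0 - ho / he  # fixation index, as in the Table 1 footnote
```

Averaging such per-locus values over many microsatellites and individuals gives group-level estimates like the 0.761 and 0.772 reported in the article.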
Table 1: Panel of 12 microsatellites, showing number of alleles and heterozygosity at each locus for standard tigers (n = 11) and royal white tigers (n = 8). Na, mean number of alleles; Ho, observed heterozygosity; He, expected heterozygosity; F, fixation index: (He – Ho)/He = 1 – (Ho/He). SE 0.289, 0.058, 0.071, 0.229.
References
1. Thornton IW, Yeung KK, Sankhala KS. The genetics of the white tigers of Rewa. Journal of Zoology 1967;152:127–135.
2. Morell V. Can the wild tiger survive? Science 2007;317:1312–1314. doi:10.1126/science.317.5843.1312.
3. Luo SJ, Kim JH, Johnson WE, et al. Phylogeography and genetic ancestry of tigers (Panthera tigris). PLoS Biology 2004;2(12):2275–2293.
4. Barsh GS. The genetics of pigmentation: from fancy genes to complex traits. Trends in Genetics 1996;12:299–305.
5. Ritland K, Newton C, Marshall HD. Inheritance and population structure of the white-phased “Kermode” black bear. Current Biology 2001;11:1468–1472.
6. Eizirik E, Yuhki N, Johnson WE, et al. Molecular genetics and evolution of melanism in the cat family. Current Biology 2003;13:448–453.
7. Rieder S, Taourit S, Mariat D, et al. Mutations in the agouti (ASIP), the extension (MC1R), and the brown (TYRP1) loci and their association to coat color phenotype in horses (Equus caballus). Mammalian Genome 2001;12:450–455.
Only Human By Peter Wong
Photo by Annabelle Aymond
Some people cook. Some play sports. Did you know that some people even do math for fun? Anyway, my name is Peter Wong, and I do music. Growing up, I was immersed in music through my parents’ passion for Pink Floyd, Iz, and oppressively cheerful ’80s music. My mom made me practice guitar and later piano. Naturally, it was like pulling teeth—an unpleasant experience for all concerned. Though I certainly didn’t appreciate it at the time, I eventually learned to play and consequently to appreciate music on a deeper level. In such a musical environment, it was only a matter of time until I started to dabble in composition. I would be playing a piece, and a missed note would lend it an entirely different atmosphere. I would play around with it until I made up some new riff or just a derivative of the original melody. With my background in classical music, much of what I wrote could best be described as classical. However, I later began to explore new genres such as electronic, new age, and rock. As my musical experience broadened, I started composing more and more. Eventually, I began to compile riffs into songs and to fill in the gaps between them. Finally, writing music prompted me to take it apart further. Every song I heard, I analyzed for timing, harmony, and atmosphere. Listening mostly to instrumental pieces, I observed the interplay between different musical voices and melodies, punctuated and accented by timing. I tried to see each piece as a lesson and became much more aware of music in general.
One of the most striking things I noticed was how deeply music can influence our emotions. Have you ever watched a movie without music? Played a video game without its soundtrack? After a while, our interest wanes, and it becomes difficult to empathize with the characters. If music can complement thoughts and emotions so well, perhaps it can also inspire them. With that in mind, I started to compose pieces around an idea, image, or scene.

I wrote “Only Human,” a piano composition, after examining Carl Sagan’s poignant philosophies about the frailty and potential of humanity. Despite all our shortcomings, we can do amazing things. How profound, that the species that can hardly manage to feed itself can launch tons of equipment into space regularly—even put a person on the moon. It gives me hope to know that even though we may not all be heroes, we can still accomplish great things collectively or even as individuals. World hunger, world peace, environmental sustainability: we may have a way to go, but eventually, humanity will get it right.

I tried to capture this sentiment in “Only Human,” using an array of tempos and tones. The piece begins with a pensive, somber tone and melody, progressing into a faster-moving beat with a more hopeful harmony. The crescendo lingers between the upbeat and solemn keys, eventually settling back to a slower, introspective derivative of the first section.
Explorations | Fall 2013
We’re not there yet, but we may be . . .
Or, listen by visiting: soundcloud.com/explorationstexasam/only-human
Photo by Annabelle Aymond
Q & A with President Loftin: Why the Bow Tie?
Editors Matt and Madeline sit down with President Loftin to discuss the story behind his most notorious accessory.
Photo by Matthew Jai
So Dr. Loftin, how many bow ties do you own? Well, I haven’t counted lately; it’s over 300.
How did you begin wearing the bow tie? Well, this was the late ’70s. I was part of another university then, a junior member of the faculty, not tenured at the time, and our standard dress code for the males in the department was to wear slacks and an open-collared shirt, and no one looked different from anybody else. One day, the president sent a memo that directed the male faculty to begin wearing ties when they were teaching classes from that point on. My reaction was, “I’m not going to be told what to do,” especially about my personal attire like that, so I was going to show him by just wearing a bow tie. I fortunately had a colleague, who just passed away last year—I miss her a lot—Jean Umland, a very wonderful lady, and she was married to a guy who wore bow ties, and so her husband gave me my first bow tie. I still have it: it’s the ugliest tie I have in my collection. A very ugly tie—it’s a hideous tie, really, but it’s one that I treasure because I know where it came from, and it was my very first one. And I began wearing bow ties from that point on, and, again, as I told you, it became a part of me and my persona, and a way in which I could ensure people remembered who I was, connect it to my name, and really help me market myself as an entrepreneurial faculty member who was seeking funding for his research and trying to advance his career and that of his students and colleagues. So it worked out well.
So it was really indicative of your presence? Exactly, I mean it was pretty unusual, and that was part of the marketing. Showing up at different places, you’re able to meet with individuals who can make decisions about government funding, for example, for your research, so it was a pretty important thing to be able to be recognized by them, to be able to walk up to them and greet them again, and have them say, “Oh yeah, good to see you again, Bowen,” and be able to start a conversation again. It’s not like it has a major impact; it’s probably a small, subtle thing, but you want every advantage you can get. And now I autograph quite a few, and one of the things that’s kind of big right now is an autographed bow tie in a silent auction. This is not a bow tie thing, but I think the record is right here: I have my own bobblehead. There were only four of these made, OK. One of these was auctioned off, and it raised about $650 for scholarships. That’s a pretty good amount of money for a bobblehead!
Back to bow ties: how do you feel the bow tie has affected your personal and professional success overall? Well, that’s hard to say how that one works. It is who I am; I mean, I think all people develop some degree of style during their lifetimes . . . and it may be very subtle or it may not be very subtle, depending on who you are. In the sciences, it’s common to look pretty casual. I mean, I know lots of Nobel laureates, in physics and chemistry, for example, who would rarely wear a tie—they’ve achieved a lot, they received a lot of recognition, and they don’t need any more, basically. So they want to be comfortable. Consequently, you’ll find a lot of people, even at fairly professional venues, conferences, things like that, dressing fairly casually. And I’ve always been a guy who’s worn a professional suit and tie during that part of my life, now every day as well. So I wear a suit and tie 7 days a week: there’s no down time in my job.
One of the current commentators on Aggie Sports, Bill Liucci, has an even smaller one [bobblehead]! He really hates that small one.
Photo by Matthew Jai
Walking a Fine Line
Photography is a domineering passion of mine. It takes over my soul when I least expect it. I am a biomedical engineer, but my mind also requires a creative outlet. Amid all the integrals, differential equations, and steady-state diffusion, I can still escape to an alternate reality that I like to explore in my free time.

Expressing myself in words is difficult sometimes, be they spoken or written. Words are too controlling. They say one thing when you want them to say another. However, photography is different. There is no right or wrong way to look at a photo. That is why this style of expression fascinates me so much. Different people have different backgrounds and experiences that they can pull from. We draw from our own lives whenever we look at something new. We think of everything that has happened to us, grasping for some area of similarity to which we can compare it.

Photography just recently stole my attention, even though it has been lingering in my life for a long time, waiting for the right moment. I like to think that photography has always been in my genes. My grandfather has been a photographer ever since I can remember and long before I was born. I always enjoyed looking at his photos on my grandparents’ old slide projector at their house. My fondest memories of him are of him with a camera in his hand.

I took this compilation of photographs in Rome and Venice, Italy, as well as Paris, France. Last summer, I studied abroad in Barcelona, Spain, for one month, and I traveled to Paris for one weekend and Italy for one week. This trip to Barcelona rejuvenated my passion for
photography and inspired me to share my photographs with others, such as through this journal. I altered the pieces with black-and-white coloring and sepia. The effect these alterations have on the photos is quite astonishing in my opinion. Don’t get me wrong, the images and scenery were beautiful beforehand, but once the coloring was changed, something within the photo changed as well. They became more real.

The title of my favorite piece in the collection is “Walking a Fine Line.” It was photographed in Venice, Italy. I was strolling through the narrow streets of Venice with my friend after having just bought gelato, and we suddenly emerged into an opening with a beautiful view of the water. Directly ahead of us was a man so full of life that I knew I had to capture this moment. You can say that the man was literally walking a fine line by unsteadily and shakily placing one foot in front of the other on that narrow rope. However, the picture leaves so much unexplained and in need of interpretation. The man could be doing anything else at that moment. He could have been the person who sold the gelato; he could have been the man who steered the gondola that transported me to that place—and yet he isn’t. He is a man, on a tightrope, because that is his life. It is his calling. He does not question what he was tasked to do; he just does it. It captures the strength that society needs, to carry on and continue through life when nothing else is constant, when everything seems to be falling apart. This moment can teach the world to keep moving forward.
Sara Muldoon
Fungus Among Us: Hitting a Moving Target
By Lauren Puckett

Treatment of fungal infections is a critical problem for medicine today. Fungi are difficult to treat because, like those of humans, their cells are eukaryotic, and fungi can rapidly develop genetic mechanisms that render current treatments ineffective. By studying spore formation in various species of fungi, we hope to better understand fungal reproduction, allowing future treatments to target spore formation in wide varieties of fungi as a novel therapeutic approach.

INTRODUCTION
Fungi are a misunderstood group of organisms. On one hand, they give us foods such as beer, cheese, and soy sauce. They also produce antibiotics such as penicillin. On the other hand, some fungi treat us as their food, causing infections such as ringworm, athlete’s foot, and meningitis, a hard-to-treat and life-threatening illness.

Few treatments are available for fungal diseases. Currently, the species or family of fungi causing a specific infection does not matter; doctors use the same few agents to treat all fungal infections. The issue is that fungi are composed of eukaryotic cells, the same type of cells that make up animals. Therefore, drugs used to treat and destroy fungi also commonly affect the host organism. Thus, researchers must exploit the slight differences between animals and fungi to find an effective treatment. One difference is a cholesterol analog called ergosterol, which is present in the fungal cell membrane but not in the membranes of animal cells.1 However, treatments targeting ergosterol can have side effects on the host: animal cells also use a cholesterol derivative in their membranes, which the drug can mistakenly target. Because these treatments can damage the patient as well as the fungus, fungal infections are a serious and potentially fatal problem in people with compromised immune systems, such as AIDS patients.
Fungal diseases are also a major problem in agriculture, where billions of dollars are spent around the world every year to combat emerging fungal pathogens in plants, animals, and humans.2
Another issue with treating fungal diseases is the ability of fungi to quickly evolve resistance and immunity to common medications. Rapid reproduction allows the fungus to quickly overcome adverse conditions. Because fungi can reproduce relatively easily, any new changes to their genetic code, beneficial or not, are passed on to offspring. This potential for rapid genetic change does not bode well for current medicinal techniques.

Developing a more effective, specialized treatment for fungal diseases requires two things:
1. Determine whether all targeted pathogenic fungi produce spores the same way, making them susceptible to the new treatment
2. Identify an agent that targets that mechanism of spore formation, thereby blocking the fungi from reproducing and spreading
But some hope remains. For genetic analysis of any species, understanding how that organism reproduces is important. Doing so promotes our understanding—and thus control—of the flow of genetic material. Control of reproduction is vital because it means that resistant species cannot pass this trait on to their offspring, allowing drug and treatment methods to remain viable longer. Limiting fungal reproduction would also solve the problem of treatments that harm humans as well as fungi. Because human cells do not make spores, a treatment that targets spore formation would have no detrimental effect on the patient. Fungi in particular rely almost exclusively on spore production to reproduce, so studying how to slow or inhibit spore formation is important. Controlling spore formation makes the resistance or pathogenicity of the species irrelevant because the fungus has no way of maintaining its own life or producing viable offspring that share its dangerous traits.2
Fungi form spores through eight distinct methods.3,4 Historically, scientists believed that a method of spore formation is common throughout a taxonomic order, starting from a common ancestor.5 For this to be true, all species within a given taxonomic order should share the same strategy and would therefore be susceptible to a disease treatment that prevents spores from forming through a given mechanism. Figure 1 shows the expected inheritance scenario according to this assertion.

To explore this theory, I examined four species of fungi to determine their methods of spore formation:
1. Alternaria brassicicola
2. Common Curvularia spp.
3. Common Cladosporium spp.
4. Thielaviopsis basicola

Fungi can create sexual and asexual spores. Asexual spores are called conidia and were the focus of my work. I analyzed conidium formation by using microscopic time-lapse imaging to observe both growth and formation. I then classified the observed method of forming conidia as one of the eight existing methods and compared it with the formation method ascribed to the taxonomic family of the fungus.
RESULTS AND DISCUSSION
The time-lapse sequence in Figure 2 demonstrates development of fully mature conidia of T. basicola.
Figure 1: Ancestry tree 1. Presumed evolutionary lineage of spore formation in separate families of fungi.

Of the four species of fungi examined in this study, T. basicola formed conidia in a way that did not agree with its taxonomic order, thereby contradicting the assumption that all members of one taxonomic family form spores the same way. T. basicola, a soil pathogen that infects more than 100 species of plants across 33 different families, produced two types of conidia.6 T. basicola is a member of the order Microascales and produces both aleuriospores and endospores.5 Because T. basicola uses an additional method of conidiation, we can conclude that production of conidia has arisen more than once in the evolution of T. basicola—and presumably within the organism’s order, Microascales.7 This conclusion contravenes the previous assertion that all members of a family share the same method of conidiation. Figure 3 shows our results.

IMPACT
Ideally, starting from a common ancestor, all fungi within a related taxonomic order should share the same method of conidiation (Figure 1). If this assertion were true, medications and treatments could be developed for each of the eight methods of spore formation, allowing treatment of infection with an entire family of fungi. However, data collected from T. basicola prove that such is not the case. Many fungi have both asexual and sexual methods of reproduction. T. basicola uses two asexual production methods to generate spores. The method T. basicola uses is unique in another way: one of its asexual reproduction methods is not present in any other members of the taxonomic family. Where or how did this new method come to be? Two likely causes of this second method of spore formation are environmental stresses and inheritance.

Although serious sounding, environmental stresses do not have to be so dire. The unfavorable environmental conditions just have to be different enough from the organism’s natural environment that they hinder or alter normal growth and development. Fungi exist in a constant state of balance, and their energy intake must be high enough to sustain a certain level of growth and spore production. When an unfavorable condition limits a fungus’s ability to obtain and produce energy, it will try to survive by adapting. Thus, a second form of spore formation could be a response to environmental stress present during its growth. A similar situation occurs with allergies in humans. A person born with wheat allergies who lives in a country where wheat is not a part of the main diet shows no symptoms. However, once the person is exposed to wheat in another country or situation, the allergy symptoms are expressed.

The second likely cause is inheritance: if the method was inherited from a common ancestor, all fungi within the order that T. basicola belongs to can use this second form of spore formation. If this were true, the logic behind why it has not been observed in the other members of the order would be that they have not needed this second mechanism (Figure 4). Other options include expression of a gene acquired from a different fungus.

Because T. basicola has shown a second method of spore formation, other fungi could too. If so, a mycosis treatment focusing on one mechanism of reproduction could be less effective. Fungi are a constantly changing population, as indicated by the new, resistant strains of fungi that have emerged. New drugs or treatments developed to target spore formation should be made so that they address multiple or all known methods of spore formation. This approach ensures that even if a fungus acquires a new method of conidiation, the treatment method would remain viable.

My results also have important implications for our understanding of fungal evolution. One of the four species studied has diverged from the other members of the taxonomic order
Figure 2: Conidium formation in T. basicola. Selected frames of the formation of conidia (see also Movie 1 in Supporting Information). At 00:00:30, two conidia have already formed. The new spore begins to emerge at 00:12:30. Asterisk indicates the initiation of conidium formation. By 00:30:00 the spore has completely emerged, and a new conidium begins to emerge at 00:37:30. A third conidium is visible at the bottom of the frame, beginning formation at 00:21:30 and reaching completion at 00:49:30.
Figure 3: Ancestry tree 2. Observed evolutionary lineage of spore formation on the basis of experimental data from T. basicola.
Figure 4: Ancestry tree 3. New evolutionary lineage of spore formation in separate families of fungi, assuming inheritance through a common ancestor.
by producing a second type of spore. Therefore, many other species probably show a similar affinity for evolving new methods of spore formation.

THE FUTURE
The research reported here has only begun to explore a diverse, complex group of organisms. With so many unidentified and unclassified species, the need for more research is crucial to gain an understanding of this unique kingdom of life. The asexual reproduction through spore formation observed in this study could be due either to inheritance under proper environmental conditions or to a mechanism new to the taxonomic order, acquired from a separate group of fungi. These hypotheses could also support each other; because of environmental stresses, certain methods of spore formation may be more favorable, leading to the fungus acquiring and expressing a new method from another fungus to cope with the stress. However, if the occurrence of this second method has arisen genetically, all fungi within the same taxonomic order could call on that method of spore formation should the need arise. If the second hypothesis were true, the need to understand the common ancestor would become as important as the need to understand the modern-day descendants. Research into ancestral fungi and common traits could also allow researchers to gain a better understanding of the types of fungal traits most likely to develop resistance to medication, thereby accelerating the search for an agent that can combat them.

Fungal infections and their treatment are some of the largest problems in medicine today. Treatments for these infections are limited because of two factors:
1. Toxic effects are associated with these treatments, which attack components of fungi that are similar to those found in the human body, also damaging the host patient.
2. Fungi have shown genetic adeptness and quickly develop resistance to these treatments.

However, by learning more about fungi and their reproductive methods, we come closer to developing an agent to cure all fungal infections.

ACKNOWLEDGEMENTS
I thank my research adviser, Dr. Brian Shaw, for his expertise and assistance.

REFERENCES
1. Ellis D. Amphotericin B: Spectrum and resistance. Journal of Antimicrobial Chemotherapy 2002;49(suppl 1):7–10.
2. Ma LJ, van der Does HC, Borkovich KA, et al. Comparative genomics reveals mobile pathogenicity chromosomes in Fusarium. Nature 2010;464:367–373. doi:10.1038/nature08850.
3. Kendrick B. The Fifth Kingdom, pp. 7–54. 2nd ed. Newburyport, Mass.: Mycologue Publications, 1992.
4. Hughes SJ. Conidiophores, conidia, and classification. Canadian Journal of Botany 1953;31:577–659.
5. Seifert KA, Gams W. The taxonomy of anamorphic fungi, pp. 307–347. In The Mycota, vol. 7A. Heidelberg, Germany: Springer Berlin Heidelberg, 2001. doi:10.1007/978-3-662-10376-0_14.
6. Shew HD, Lucas GB, eds. Compendium of Tobacco Diseases. St. Paul, Minn.: APS Press, 1991.
7. Paulin-Mahady AE, Harrington TC, McNew D. Phylogenetic and taxonomic evaluation of Chalara, Chalaropsis, and Thielaviopsis anamorphs associated with Ceratocystis. Mycologia 2002;94:62–72.
Using Mutant Mice to Understand Seizures
By Vivek Karun

BACKGROUND
Epilepsies, better known as seizure disorders, are one of the most common classes of nervous system disorders, along with migraines and strokes. They affect millions of people of all ages worldwide. They are so prevalent, in fact, that one-third of adults either know someone with seizures or have seizures themselves.1 Seizures can have many symptoms, from the recognizable muscle spasms and loss of consciousness to less apparent symptoms, including blank stares and lip smacking. Seizures occur with little to no warning and can last anywhere from a few seconds or minutes to around an hour.

Unfortunately, reliable cures for any type of seizure have yet to be developed. The lack of a cure is due primarily to the fact that, despite its high prevalence and major advances in treatment, epilepsy is among the least-understood disorders (http://www.epilepsy.com/epilepsy/main_epilepsy). Thus, a clear understanding on the cellular and molecular level is crucial for developing effective treatments to cure, rather than simply alleviate, seizures. Our study’s results uncover one aspect of the biological problem behind these disorders that may suggest a future remedy for seizures.
Epilepsy is not a completely characterized disorder. There is some evidence that abnormal structure of a particular type of nerve cell, known as the Purkinje cell, may impede the function of that cell and, at least in part, cause epilepsy. By examining mice carrying mutations of varying severity, the researchers found an association between mutation severity and the size and complexity of the nerve cells in each group of mice.
The human nervous system consists of billions of neurons, or nerve cells, which communicate with each other electrochemically. The typical neuron has a central cell body, called the soma. Two types of projections emanate from the soma: dendrites and axons. These branched projections, similar to the branches of a tree, reach out from the soma to other neurons (Figure 1A). Dendrites receive signals from other neurons by forming junctions, called synapses, with axons from other neurons. Axons send signals from the soma by synapsing with the dendrites of other neurons. When a synapse is formed, a dendrite from one neuron and an axon from another do not physically connect but rather leave a very small space (20–40 nm) between them.1 During signal transmission between neurons, the axon sends a signal from its neuron by releasing chemicals known as neurotransmitters into the synapse. Neurotransmitters pass through the gap, or synaptic cleft, and bind to receptors at the dendrite of the receiving neuron. The binding of neurotransmitters to the
Figure 1: (A) Basic structure of neurons (signal flow: leftmost dendrites → soma (cell body) → axon (and axon branches) → dendrites → soma → rightmost axon). (B) Close-up view of the synapse between an axon and a dendrite (signal flow: axon → dendrite).
receptors produces an electrical impulse in the receiving neuron that the soma receives and interprets (Figure 1B). The release of neurotransmitters is facilitated by the entry of calcium, through calcium-specific channels, into the axon of the neuron sending the signal. Calcium uptake by neurons is therefore crucial for signal transmission as well as neuronal growth.2

Seizures occur when neurons in the brain fire signals abnormally, causing a malfunction of the signaling system, which may briefly cause the unconsciousness or muscle contractions that occur during seizures. One of the most common forms of epilepsy is the absence seizure, common in children and characterized by blank stares; behavioral arrest, or “freezing”; and sometimes facial muscle spasms. These absence seizures typically last only a few seconds but can occur many times in a day.3

One potential cause of absence seizures in humans comes from mutations in calcium-specific channels that impair calcium entry into neurons.4 These mutations hinder the neuron’s ability to use calcium, which can substantially impair neuronal development and signaling. Some studies have also revealed damage to the cerebellum, the region of the brain that coordinates muscle movement, suggesting that these mutations are prevalent throughout this brain region. Therefore, mice with similar genetic mutations in calcium channels within their neurons could serve as animal models to help us better understand the mechanisms of cerebellar disorders, such as epilepsy.5

Two types of mice, known as leaner and tottering mice, have spontaneous but distinct mutations that adversely affect neuronal calcium channels. This defect causes absence seizures that resemble those in humans. Both mutations reduce calcium entry into neurons, but to different degrees; leaner mice carry the more severe mutation. Although both leaner and tottering mice exhibit absence seizures as well as overall loss of muscle coordination, known as cerebellar ataxia, the leaner mice exhibit more severe symptoms.4 Compared with normal mice, leaner mice undergo substantial loss of a type of neuron known as the Purkinje cell in the cerebellum, but tottering mice do not.6,7 Although loss of Purkinje cells alone in leaner mice could explain some clinical symptoms associated with these mutations, the fact that tottering mice do not lose Purkinje cells while still exhibiting symptoms suggests that Purkinje cell loss may not cause seizures. Another possibility is that the existing Purkinje cells in leaner and tottering mice do not function correctly, leading to seizures.

We set out to determine whether Purkinje cells could contribute to seizures by asking whether the Purkinje cells in leaner and tottering mice are normal or damaged. Because cell structure is highly linked to cell function, examining the structure of Purkinje cells in these mice makes sense: abnormal structure could indicate that these neurons are functioning abnormally as well. We investigated the structural characteristics of Purkinje cells with the hypothesis that Purkinje cells in the cerebella of leaner and tottering mice would show reduced structural complexity compared with those of normal mice, with leaner mice showing a more severe reduction.

MATERIALS AND METHODS

Figure 2: A Purkinje cell (stained black) as seen under the microscope. The soma and prominent treelike branching of dendrites are visible; only a small part of the axon is visible here.
We tested six normal mice, six leaner mice, and six tottering mice. Each group included both male and female adults. We anesthetized and decapitated mice and then prepared their brains for study (isolating them, storing them in a solution, and embedding them in wax blocks). We cut each brain into thin sections to allow observation of neurons within the cerebellum. Because neurons are indistinguishable from the rest of the brain tissue during microscopy, we applied drops of a dilute solution of ammonium hydroxide to each section to stain the neurons an intense black color. Purkinje cells have a characteristic complex treelike branching of dendrites more prominent than in other types of neurons (Figure 2), making Purkinje cells distinct from other types of neurons. The Golgi–Cox method randomly stains approximately 1% of all neurons.8 This phenomenon works to our advantage because with only a few stained neurons, we can clearly see the branching patterns of each neuron.8

To evaluate structural complexity and development of each Purkinje cell, we measured total dendritic length, somatic area, and complexity of dendritic branching. We determined the latter by counting the dendritic branches that intersected defined concentric circles spaced 25 µm apart starting from the center of the soma (Sholl analysis).9 In this analysis, the more intersections that exist between dendritic branches and the concentric circles radiating from the soma, the more complex the neuron’s dendritic branching. For each mouse, we analyzed approximately 10 neurons and took the mean of the measurements.

RESULTS

After measuring Purkinje cells in the cerebella of all three mouse genotypes, we found clear differences in some parameter values between genotypes. Because the male and female values for each parameter were essentially equal, we averaged data from male and female mice within a genotype into one value for that entire genotype. When we compared total dendritic length, Purkinje cells of tottering mice showed no statistically significant difference in length compared with those of normal mice, but Purkinje cells of leaner mice showed a statistically significant decrease in dendritic length compared with those of normal mice (Figure 3A). Purkinje cells of neither tottering nor leaner
Figure 3: (A) Mean total dendritic length for the three mouse genotypes. Asterisks indicate statistically significant difference between two values marked by bars on top. (B) Mean somatic area for the three mouse genotypes. No asterisks means that no statistically significant difference exists between values for any two of the three types. (C) Mean number of intersections of dendritic branches with defined concentric circles at different distances from the soma. Asterisks indicate statistically significant difference between values for leaner and normal (p < 0.05). One-way analysis of variance and Tukey post-hoc analysis were used for all data points.
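The Sholl analysis described in Materials and Methods amounts to a simple counting procedure: for each concentric circle around the soma, count the dendritic branches that cross it. As a rough illustration (not the authors' actual software), the sketch below treats a traced dendrite as a list of 2-D line segments; the function name `sholl_counts` and the toy coordinates are invented for the example.

```python
import math

def sholl_counts(segments, soma, radii):
    """Simplified Sholl analysis: for each concentric circle centered on
    the soma, count the dendritic segments that cross it."""
    def dist(p):
        # Distance of point p from the soma center.
        return math.hypot(p[0] - soma[0], p[1] - soma[1])

    counts = {}
    for r in radii:
        # A segment crosses the circle of radius r when its two endpoints
        # lie on opposite sides of that radius.
        counts[r] = sum(1 for a, b in segments
                        if (dist(a) - r) * (dist(b) - r) < 0)
    return counts

# Toy dendrite traced as line segments (coordinates in µm, invented):
segments = [((0, 0), (30, 0)),    # trunk leaving the soma
            ((30, 0), (60, 0)),   # continuation of the trunk
            ((30, 0), (30, 45))]  # one side branch
counts = sholl_counts(segments, soma=(0, 0), radii=[25, 50])
# counts == {25: 1, 50: 2}: one crossing at 25 µm, two at 50 µm
```

In the study itself the circles were spaced 25 µm apart out to 345 µm, and more intersections at a given radius indicate more complex branching at that distance from the soma.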
Explorations | Fall 2013
mice exhibited statistically significant differences in somatic area compared with those of normal mice (Figure 3B). However, whereas tottering mice had Sholl analysis values similar to those of normal mice, leaner mice displayed a statistically significant decrease in the number of intersections of dendritic branches with concentric circles compared with normal mice for a specific range of circles, namely those between 65 and 165 µm from the soma (Figure 3C). CONCLUSIONS We carried out this structural study of cerebellar Purkinje cells to test our hypothesis that leaner and tottering mice would both show reduced overall neuronal size and complexity compared with normal mice, and that leaner mice would be more severely affected than tottering mice. Our observations partially support, but also challenge, this hypothesis. The hypothesis was partially supported because our data revealed that Purkinje cells in the leaner mouse cerebellum are markedly less developed and less complex than those in the normal mouse cerebellum, as indicated by the statistically significant reduction in total dendritic length and the fewer intersections of dendritic branches in the Sholl analysis. However, Purkinje cells of tottering mice exhibited complexity and size unexpectedly similar to those of normal mice, which challenges our hypothesis about tottering mice; any slight differences between tottering and normal values were not statistically significant. Another unforeseen finding was the similarity of somatic area values among the Purkinje cells of all three types of mice, which points more definitively to the structure of the dendrites as the fundamental difference between normal mice and leaner mice. The observations in this study offer new information related to fundamental problems underlying epilepsies
by indicating that structural differences in cerebellar neurons exist in mutant mice that exhibit seizures. This study thus advances our understanding of the source of not only absence seizures but also, perhaps, epilepsies in general. Because structure is linked to function, the absence seizures and cerebellar ataxia displayed by leaner mice may be related to the impaired neuronal development and dendritic complexity in the cerebellum. However, tottering mice did not show statistically significant differences in Purkinje cell structure even though they also exhibit absence seizures and cerebellar ataxia. This finding suggests that a degree of neuronal functional loss may be present that does not manifest in obvious changes in neuronal structure. This study also indicates a possible clinical target for more effective therapy: if we could develop medicines that promote neuronal growth to help abnormal Purkinje cells develop more normally, we might observe clinically significant improvements in patients with seizures. In light of our findings, the most important question for further study is this: what actually caused the deficit in neuronal development in the leaner mice? Continued research that tackles this and related questions is vital in combating seizure disorders and will help unravel the mysteries behind these widespread disorders.

ACKNOWLEDGEMENTS

I thank my research adviser, Dr. Louise Abbott, not only for the opportunity to conduct this project but also for invaluable mentorship and expertise. I also thank Dr. Fikru Nigussie for encouragement and patience, for teaching me the procedures and techniques for this project, and for assisting in collecting and processing data. Finally, I thank Tanvir Ahmed for his help with cutting brain tissue.

REFERENCES
1. Rozental R, Giaume C, Spray D. Gap junctions in the nervous system. Brain Research: Brain Research Reviews 2000;32:11–15.
2. Moosmang S, Kleppisch T, Wegener J, et al. Analysis of calcium channels by conditional mutagenesis. Handbook of Experimental Pharmacology 2007;178:469–490.
3. Porter R. The absence epilepsies. Epilepsia 1993;34(Suppl. 3):S42–S48.
4. Lau F, Frank T, Nahm S, Stoica G, Abbott L. Postnatal apoptosis in cerebellar granule cells of homozygous leaner (tgla/tgla) mice. Neurotoxicity Research 2004;6:267–280.
5. Heckroth J, Abbott L. Purkinje cell loss from alternating sagittal zones in the cerebellum of leaner mutant mice. Brain Research 1994;658:93–104.
6. Dove L, Abbott L, Griffith W. Whole-cell and single-channel analysis of P-type calcium currents in cerebellar Purkinje cells of leaner mutant mice. Journal of Neuroscience 1998;18:7687–7699.
7. Wakamori M, Yamazaki K, Matsunodaira H, et al. Single tottering mutations responsible for the neuropathic phenotype of the P-type calcium channel. Journal of Biological Chemistry 1998;273:34857–34867.
8. Pasternak J, Woolsey T. On the “selectivity” of the Golgi–Cox method. Journal of Comparative Neurology 1975;160:307–312.
9. Sholl DA. Dendritic organization in the neurons of the visual and motor cortices of the cat. Journal of Anatomy 1953;87:387–406.
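As a rough illustration of the Sholl procedure described in the Methods, the sketch below counts how often dendrite segments cross concentric circles spaced 25 µm apart. The traced segments, coordinates, and radii are invented for illustration; the study performed this analysis on stained microscopy images rather than coordinate data, and real dendrites are three-dimensional.

```python
import math

def sholl_counts(segments, soma, radii):
    """Count dendritic intersections with concentric circles (Sholl analysis).

    A straight segment crosses the circle of radius r when its two endpoints
    lie at distances from the soma that straddle r.
    """
    counts = {r: 0 for r in radii}
    for (x1, y1), (x2, y2) in segments:
        d1 = math.dist((x1, y1), soma)
        d2 = math.dist((x2, y2), soma)
        lo, hi = min(d1, d2), max(d1, d2)
        for r in counts:
            if lo < r <= hi:
                counts[r] += 1
    return counts

# Toy traced dendrite: two segments radiating from a soma at the origin (µm).
segments = [((0, 0), (0, 60)), ((0, 60), (40, 60))]
radii = range(25, 126, 25)   # circles every 25 µm, as in the study
print(sholl_counts(segments, (0, 0), radii))
```

More crossings at a given radius indicate more complex branching at that distance from the soma, which is exactly the quantity compared across genotypes in Figure 3C.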
Fall 2013 | Explorations
The Pop-Op Morphing Wall: A Fusion of Engineering and Art By William Whitten Op art was a twentieth-century art movement designed to imply motion in art through optical illusion. Using a specific kind of alloy that can be “programmed” into particular formations, in combination with heat generated by electrical resistance, one can not only imply motion but also realize actual motion in art, as in “The Pop-Op Morphing Wall.” Figure 1: The Pop-Op kinetic art installation.
As a result of an innovative collaboration between the Texas A&M departments of aerospace engineering and environmental design, a new tribute stands to the fusion of engineering and art. The Pop-Op is a 16-by-8-foot art installation that features 36 intricately arranged morphing panels (Figure 1). These composite panels, which architecture students designed and fabricated, silently bend and twist to create an organic wavelike motion across the sculpture. This surprising effect is created by advanced engineering materials that can deform the panels in a controlled manner. The Pop-Op’s unique integration of technology promotes the idea of morphing structures and presents intriguing possibilities for creating structures not limited to one spatial form. The morphing technology integrated in the Pop-Op has applications beyond a purely architectural or artistic function. Shape memory alloy (SMA) actuators are lightweight, high energy, and robust—features that make them particularly well suited for aerospace applications.1 Many applications for morphing structures have already been implemented, including variable-geometry chevrons on the front end of an aircraft’s thrust reverser sleeve that reduce engine noise, reconfigurable helicopter rotor blades that change shape in flight in response to changing conditions, and an F-15 shape-changing inlet that realizes substantial gains in flight performance.1 Research at Texas A&M is focusing on using SMAs to create programmable self-folding active materials—structures that can morph and bend into any desired shape.2 Such surfaces would allow robots to change shape to navigate complex terrain, let furniture take on different configurations depending on its desired function, or permit vehicles to have variable geometry to accommodate varying aerodynamic conditions.
INTRODUCTION Darren Hartl, a Texas A&M research assistant professor of aerospace engineering, and Gabriel Esquivel, a TAMU assistant professor of architecture, collaborated to create the Pop-Op. Hartl researches a novel class of materials known as smart materials, which can change properties or behavior in response to applied stimuli. Esquivel specializes in advanced digital design and fabricating architectural components. The two professors envisioned a project that would combine the expertise of both research groups. The team was inspired by the concept of op art, a 20th-century art movement that used optical illusion to create the impression of motion. The Pop-Op design also pays tribute to Frank Malina, a 1934 graduate of Texas A&M’s mechanical engineering program. In addition to cofounding and directing the Jet Propulsion Laboratory, Malina pioneered kinetic art, which uses movement as its key design principle.3 By creating a sculpture that not only implies motion but also exhibits real motion, the Pop-Op presents a new take on the role of mobility in architecture and art. By combining the architects’ design expertise with the engineers’ experience with advanced material behavior, the team hoped to create an installation that was performative (capable of morphing in response to functional or aesthetic needs) and effective (suggesting an idea—here, the exploration of motion). The design was also modeled after the ideals of intelligent architecture—structures with integrated sensors and computers that allow the work to take on exciting functional roles by responding to users or reacting to changing conditions.4
BACKGROUND To complement the implied motion with real motion, the Pop-Op used SMA wires to provide locomotion. SMAs are a specialized type of active material. These materials are “programmed” through a special process to “remember” a particular shape. At room temperature, the SMA material acts like a normal metal that can be bent or twisted as desired (similar to a paper clip). However, when an SMA component is heated, the material will bend itself back into its remembered shape,5 a process known as SMA actuation. Through careful programming, an SMA can achieve many types of motion. Heating SMA wires is commonly carried out through resistive heating, which is accomplished by applying voltage to a material. SMA wires contain a small but finite resistance to the flow of electricity. When electricity flows against this resistance, heat is released—much like the frictional heat produced from rubbing your hands together. The SMA wires used in the Pop-Op can morph in only one direction. Another force (known as the restoring force) must act on the SMA if it is to return to its original position after actuation. In the Pop-Op, SMA wires pull on the surface material to make it deform. Although the material is flexible, it is also stiff enough to resist these deformations. When the SMA wire is no longer receiving heat and is cooling down, the force it exerts on the surface material decreases. As this pulling force gradually diminishes, the surface material bends back into its original position. CONSTRUCTION Design The Pop-Op’s visual design was in itself a technical achievement. The final design was crafted through a parametric design process, which begins with a base image that emphasizes important design elements. For the Pop-
Op, these included typical op-art effects: a wide color palette, complex geometry, and illusions of motion. The base image was processed through filters tuned to produce the final design with the Grasshopper parametric design software. The team projected the 2-D digital design onto a 3-D surface to complete the process. This multicomponent 3-D surface consists of a large background element and many panels. The design calls for two primary types of panels: “static” panels and morphing “flowers.” The static panels have flaps cut into their surface (Figure 2). SMA wires configured in the shape of springs are attached to the back of the 28 flaps. When the SMA spring is actuated through heating, the flap is pulled away from the viewer. Upon cooling, the flap will bend back and once again become flush with the rest of the static panel. As seen in Figure 2, the aptly named flowers are cut into floral patterns. SMA wires are attached along the surface of each of the nine flowers so that the flowers bend toward the viewer when the SMA wires are actuated. After the heating cycle, the flowers will return to their original position. The immobile background element covers most of the sculpture and complements the design of panels. Material Fabrication Fabricating the structure presented several unusual challenges: • Each morphing structure must have enough flexibility to bend during the SMA wire contraction yet also be stiff enough to return to its original position at the end of the actuation cycle. • The structure as a whole must be lightweight so that it can be structurally supported. • The fabrication material must be nearly transparent so that the colorful design can be seen through layers of material.
Figure 2: Typical fabricated flower (left) and flap (right).
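The resistive heating described in the Background follows Joule’s law: the power dissipated in a wire is P = V²/R. A minimal sketch with assumed values (the article does not report the wires’ actual resistance or drive voltage, so these numbers are purely illustrative):

```python
# Illustrative values only: the article gives the ~4 s actuation time but
# not the wires' resistance or drive voltage, which are assumed here.
voltage = 5.0        # volts applied across one SMA wire (assumed)
resistance = 10.0    # ohms of wire resistance (assumed)
on_time = 4.0        # seconds of applied voltage per cycle (from the article)

power = voltage ** 2 / resistance      # Joule's law: P = V^2 / R, in watts
energy = power * on_time               # joules delivered per actuation cycle
print(f"{power:.1f} W for {on_time:.0f} s -> {energy:.0f} J per cycle")
```

This is the same effect, scaled down, as the frictional heat from rubbing your hands together: electrical energy dissipated against the wire’s resistance becomes the heat that triggers SMA actuation.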
To meet these three constraints, the team tested several materials, ultimately choosing a fiber-reinforced composite material to constitute the bulk of the design. A fiber-reinforced composite consists of fibers and matrix. Much like the cloth fibers in a woven garment, the fibers give the material structural stability and stiffness. The matrix, a lightweight substance that “glues” the fibers together, connects the fibers but contributes little to the strength of the material. Together, the fibers and matrix create light, strong composite materials with various interesting properties. On the Pop-Op, the team used a fiber known as C-glass in conjunction with an epoxy resin matrix. The chosen C-glass fibers and matrix are nearly transparent.

To create a substantial structure, layers of C-glass are laid on top of one another, bound with the epoxy resin matrix. The resin is applied as a liquid but hardens after a few hours. Depending on the component, up to four layers of C-glass were used. The digital design was printed on a large-format printer. This large printout was inserted behind the final layer of C-glass and resin. Because the composite material is nearly transparent, the colorful design is clearly visible.

The Pop-Op consists of many separate pieces. Multiple pieces were fabricated on one section of C-glass by using the process previously outlined. A large-format computer numerical control (CNC) machine was used to cut the complex geometry. A CNC machine uses a computer file to automatically cut a material into virtually any shape. Using a file based on the digital design, the CNC machine precisely cut each component out of the bulk fiberglass material.

Control System

Because the physical motion of the sculpture must complement the Pop-Op’s static visual design, the team developed a flexible electronic control system (Figure 3) to precisely control motion of the panels by selectively applying voltage to each morphing structure at discrete intervals. In general, voltage is applied to a particular SMA wire for approximately 4 seconds—enough time for the SMA wire to fully heat up and deform the panel as far as possible.

The “brain” of the control system is a product known as the Arduino. Arduino is an open-source hardware and software electronics prototyping platform. The Arduino has been successfully used on similar shape-memory–based art installations in recent years.6 The entire control system, which includes the Arduino and related circuitry, runs autonomously throughout the day and turns off automatically after business hours. The motion of the wall can be changed at any time by modifying and uploading a program of instructions through a USB connection. An onboard LCD monitor displays the status of the system and which panels are currently actuating.

Figure 3: The Pop-Op control system installed on the back wall.

RESULTS

The Pop-Op is on permanent display inside the H.R. Bright building on the Texas A&M campus. As one of the first attempts at integrating SMA technology and automation into a composite architectural surface, the final product highlights both the strengths and weaknesses of this combination.

The choice of the right composite material was crucial to the success of the project and is an important outcome. Without the right blend of flexibility and stiffness, the panels could not sustain any kind of cyclical motion. However, some panels have shown signs of permanent deformation over the current life of the Pop-Op (7 months as of this writing). This degradation limits the life span of the Pop-Op and should be addressed in future applications.

The placement and configuration of the SMA wires also yielded important results for future work. The morphing panels exhibit a subtle, nearly organic motion. Although useful in an artistic sense, having motion with greater speed and magnitude might be desirable for some applications. Modifying the SMA placement, alloy type, and voltage along with the composite material type could result in these changes. Also, the original placement of the SMA wires on the flower structures put a large amount of stress on the wires. These wires broke after several weeks of operation (the wires were replaced and reconfigured to prevent recurrence of this issue). Future work in this area may feature SMA wires embedded in the surface composite material to avoid these material failures.

DISCUSSION AND CONCLUSION

Despite a few flaws, the Pop-Op was a successful proof of concept for the idea of morphing architectural surfaces. The subtle, controllable motion of the panels achieved both the performative and effective design goals of the project. Thanks to a generous grant from the Academy for the Visual and Performing Arts at Texas A&M University, a new shape-memory–based art installation is under construction. Learning from the strengths and weaknesses of the Pop-Op, we hope that this new work will spark even
greater innovations in morphing surfaces. In particular, this iteration will feature electronic sensors to create an installation that responds to its viewers. The Pop-Op’s significance, however, extends beyond Texas A&M University. The ability to create surfaces that can sustain various shapes allows modern designers and architects to increase the functionality and aesthetics of their work. Also, the Pop-Op’s integration of an automated control system continues the push for autonomy in architectural design. Finally, the innovative collaboration between engineering and architecture shows that such interdisciplinary efforts can produce results that push the boundaries of both art and technology. I hope that the Pop-Op will inspire continued efforts toward producing shape-memory–based surfaces and structures that can morph in response to environmental stimuli for improved performance. ACKNOWLEDGEMENTS I acknowledge the work of all students and faculty involved in the design and fabrication of the Pop-Op. The installation was truly a product of teamwork, with many students and faculty playing pivotal roles. I thank Dr. Hartl for the opportunity to be involved in the project and for his invaluable support and advice over the last few semesters. Finally, I thank Prof. Esquivel for important details and feedback about the Pop-Op fabrication process.
REFERENCES 1. Calkins FT, Mabe JH. Shape memory alloy based morphing aerostructures. Journal of Mechanical Design 2010;132:111012. doi:10.1115/1.4001119. 2. Hernandez E, Hu S, Kung H, et al. Towards building smart self-folding structures. Shape Modeling International (SMI) Conference 2013;37:730–742. 3. Lapelletrie F. Life of Frank Joseph Malina. Proceedings of Point-Line-Universe. A retrospective exhibition of Frank Joseph Malina, Enter3, Prague, November 2007, pp. 16–28, 2007. 4. Sherbini K, Krawczyk R. Overview of intelligent architecture. In 1st ASCAAD International Conference, e-Design in Architecture, Dhahran, Saudi Arabia. December 2004, 2004. 5. Hartl D, Lagoudas DC. Aerospace applications of shape memory alloys. Proceedings of the Institution of Mechanical Engineers, Part G. Journal of Aerospace Engineering 2007;221:535–552. 6. Behnaz F. Alloplastic architecture. Master’s thesis, University of Southern California, Los Angeles, 2012.
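The control scheme described above (each panel energized for roughly 4 seconds, with the system idle outside business hours) can be sketched as a scheduling function. The panel names and business hours below are assumptions for illustration; the actual controller is an Arduino program driving SMA circuits, not Python.

```python
# Hypothetical sketch of the Pop-Op's actuation schedule. Panel names and
# on-hours are invented; the 4 s actuation time comes from the article.
ACTUATION_TIME = 4.0            # seconds of applied voltage per panel
BUSINESS_HOURS = range(8, 18)   # assumed hours during which the wall runs

def plan_cycle(panels, hour):
    """Return (panel, start_time) pairs for one sequential actuation cycle,
    or an empty plan when the wall is off outside business hours."""
    if hour not in BUSINESS_HOURS:
        return []
    return [(p, i * ACTUATION_TIME) for i, p in enumerate(panels)]

plan = plan_cycle(["flap-1", "flap-2", "flower-1"], hour=10)
print(plan)                               # each panel energized 4 s in turn
print(plan_cycle(["flap-1"], hour=22))    # off after business hours -> []
```

Sequencing the panels one after another is what produces the organic wavelike motion across the wall, while the business-hours check mirrors the system’s autonomous shutoff.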
About the Board
Left to Right: Bobbie Roth, Aaron Griffin, William Linz, Callie Cheatham, Matthew McMahon, Annabelle Aymond, Madeline Matthews, MaryBeth Benda
Annabelle is a senior Telecommunication and Media Studies major and Japanese Language minor from Houston, Texas. She loves graphic design and photography but also has a passion for media law and web design. Annabelle hopes to continue designing after graduation and attend graduate school for media law.
Callie is a junior Chemistry major from Lake Jackson, Texas. She is a proud member of Squadron 20 of the Texas A&M Corps of Cadets and intends to commission into the Air Force, delaying active duty to attend medical school with the ultimate goal of becoming a flight surgeon.
Samir is a senior Chemical Engineering major with a distinction in Engineering Honors. He has been a summer intern for the Chevron Corporation and the Anadarko Petroleum Company, developing applications that helped improve global oil exploration techniques. As an Undergraduate Research Scholar, Samir helped develop the muon trigger for an experiment at the Large Hadron Collider with Dr. Alexei Safonov.
MaryBeth is a junior Biomedical Sciences and Entomology double major from Arlington, Texas. She loves animals of all types and plans to attend veterinary school after graduation to become a wildlife veterinarian. MaryBeth hopes to someday perform research related to wildlife diseases and conservation.
Aaron is a sophomore Biochemistry, Genetics, and Mathematics major from Missouri City, Texas. He spends his time performing biochemical research in the Gohil Lab studying mitochondrial disease. After graduation, Aaron plans to attend medical school through an MD/PhD program and pursue a career as a physician-scientist.
William is a sophomore Mathematics major and German minor from Temple, Texas. He is currently pursuing research in Combinatorics. Upon graduation, William would like to attend graduate school in Mathematics. He also appreciates the experiences Explorations has provided in the research community.
Madeline is a senior Psychology major double minoring in Economics and Neuroscience from Boerne, Texas. She loves Texas A&M University and believes that every student should participate in the research process, whether by consuming or producing it. She is currently involved in a neuroscience laboratory on campus. After graduation, she intends to apply to law school.
Bobbie is a senior Environmental Geoscience major from Houston, Texas. Her main focus is water quality and water management. She hopes to gain work experience and further her education by balancing a career and graduate school shortly after graduating.
Matt is a junior Geology major from Chana, Illinois. He plans to focus on environmental or materials science research in his future career on an international scale. His most rewarding experiences while at A&M have been the research projects he has conducted in marine biology and mineralogy. Such experiences have gotten him involved with the Academy for Future International Leaders and the L.T. Jordan Institute on campus.
Hilary Porter is a senior International Arts & Culture and Anthropology major with a minor in Italian Studies from Fort Worth, Texas. She spent this past summer studying in Rome, Italy, where she was able to immerse herself in a different culture and study some of the world’s greatest works and monuments. She plans to attend graduate school next year to further her career goal of becoming a museum curator.
To Join Explorations Interested in joining the Explorations board to assist in the creation of Volume 6? If you are a freshman or sophomore, you are welcome to apply for consideration as an editorial board member; this first-year member program teaches editors the basics of creating a yearly journal such as Explorations. Exemplary members of the editorial board will be invited to join the executive board after their first year, or members can remain a part of the editorial board if they wish. If you are an upperclassman, you are welcome to apply for an executive board position; be aware, however, that first priority in filling executive board positions is given to editorial board members. Applications for the editorial board, executive board, and layout and design team are made available at the beginning of each fall semester. For more information, visit explorations.tamu.edu. For up-to-date activities and deadlines, check us out on Facebook and Twitter!
About the Authors Rosa Bañuelos
Rosa Bañuelos is a senior Biomedical Science major from Waller, Texas. Bañuelos plans on attending veterinary school and hopes to someday work at a zoo. For Bañuelos, art is a byproduct of seeing and experiencing the world. She paints to turn her experiences into something everyone else is able to see.
Sara Carney is a senior Biomedical Science and Wildlife and Fisheries double major from La Porte, Texas. Following her graduation, Carney plans on continuing her education here at Texas A&M in the Science and Technology Journalism Master’s program. Her passion for wildlife is shown through her research on white tigers, and she hopes to share her passion for wildlife conservation with others through education and outreach programs.
Andrew DeCheck is a senior Petroleum Engineering major from Racine, Wisconsin. DeCheck created a middle school lab experiment focused on seismology and how it can be applied to the oil and gas industry. Much of his motivation for his article explaining the experiment arises from wanting to ease negative perceptions about the oil and gas industry through education. He plans to pursue an MBA and work as a Petroleum Engineer.
Thomas Colvin is a 2013 graduate from Houston, Texas, with a Bachelor’s degree in History. Thomas spent more than two years doing Native American studies and is fascinated with questions about the history and origins of the first Americans. Thomas will be attending the University of Pennsylvania Law School in the fall of 2013.
Vivek Karun is a senior majoring in Biomedical Sciences from Katy, Texas. Karun plans on attending medical school to become a physician and wants to continue doing research. His interest in researching the causes of seizures comes from a keen interest in neurology and is inspired by the work of his mentor, Dr. Abbott, who studies epileptic mice.
Moiz Bohra and Asma Sadia
Moiz Bohra (right) is a junior Chemical Engineering major from Mumbai, India. He plans to obtain a doctorate in Chemical Engineering with a focus on renewable and alternative energy. Asma Sadia (left) is a senior Chemical Engineering major from Doha, Qatar, and intends to obtain a Master of Science in Chemical Engineering to pursue a career in academia.
Mariah Lord is a 2013 graduate from Lafayette, Colorado, with a Bachelor’s degree in Political Science. Lord hopes to get her MBA and work in the energy and technology field, focusing on a renewable and sustainable future. The inspiration for her research comes from her concern about future energy sources for the world and how our current energy use will impact the upcoming generations.
Sara Muldoon is a senior Biomedical Engineering major from San Antonio, Texas. Muldoon plans to work for a medical device company and focus on bringing more medical devices to market while simultaneously maintaining photography as a personal hobby or business venture. Muldoon explains her passion for photography as arising from the desire to capture moments in time she does not want to miss.
Anna Pennacchi is a senior Biomedical Science major from Houston, Texas. Pennacchi plans to attend veterinary school following her graduation from Texas A&M and specialize in Aquatic Animal Medicine and Research. Pennacchi’s passion for aquatic life inspired her to pursue her research into marine mammal science, with a current interest in the dolphins of Galveston.
Justin Montgomery is a 2013 graduate with a Bachelor’s degree in Mechanical Engineering and a minor in Philosophy. Montgomery now attends MIT and is working toward a Master’s degree in its Technology and Policy Program. Montgomery’s inspiration for studying the philosophy of design comes from Richard Buchanan’s article “Wicked Problems in Design Thinking.”
Stephen O’Shea is a senior English-Creative Writing major from College Station, Texas. O’Shea plans to study, travel, and write, as well as earn a graduate degree in Creative Writing after a year or two abroad. O’Shea’s inspiration for his short story, which he will develop further following his departure from Texas A&M, came from his work in the Glasscock Center’s Summer Research Program and his interactions with combat veterans of the Iraq and Afghanistan wars.
Megan Poorman is a senior Biomedical Engineering major from Plano, Texas. Poorman plans to obtain a Ph.D. in Electrical or Biomedical Engineering in the hopes of aiding the development of new technologies that improve the quality of patient care, diagnosis, and treatment.
Phillip (Dane) Warren
Lauren Puckett is a 2013 graduate from Lake Jackson, Texas, with a Bachelor’s degree in Biochemistry. Puckett plans to attend pharmacy school following graduation and pursue pharmaceutical research. Puckett’s inspiration to pursue research on current issues in the medical field stems from her interaction with her research advisor, Dr. Brian Shaw.
Phillip Dane Warren is a junior Economics major from Austin, Texas. Warren plans to attend law school in preparation for a career in environmental law following his graduation from Texas A&M. Warren’s passion for environmental policy and international relations led him to compose his research project on environmental law.
Matt Wiese is a senior Petroleum Engineering major from Houston, Texas. Following graduation, Wiese plans to enter the petroleum engineering field, pursuing his professional engineering certification and a position as a reservoir engineer. Wiese’s inspiration for his research project stems both from his personal interests in the field of petroleum engineering and from his desire to dispel misconceptions and stigmas concerning petroleum companies and their activities.
William (Daniel) Whitten
Jason Szafron is a junior Biomedical Engineering major from Chicago, Illinois. Szafron plans to pursue a Ph.D. in the field of biomedical engineering and work in a national lab developing biomedical instruments and devices. Szafron’s interest in biomedical devices and scientific research inspired him to pursue his project on aneurysm treatment.
William Daniel Whitten is a junior Mechanical Engineering major from Tulsa, Oklahoma. Following graduation, Whitten plans to pursue a graduate degree in mechanical engineering. Whitten was inspired to join this multidisciplinary project on kinetic art because of his interest in applying an engineer’s approach to an artist’s quest to create a piece of art capable of controlled movement.
Peter Wong is a sophomore Mechanical Engineering major from Corvallis, Oregon. Wong plans to finish his undergraduate education and pursue a job in the engineering field while continuing to compose and play music as a personal hobby, possibly selling his pieces as well. Wong completed his musical composition for a class assignment in Environmental Design 101.
Guidelines for Submissions Who can submit a proposal Any undergraduate student currently enrolled at Texas A&M University who is actively pursuing research, creative, or scholarly work or has done so in the past can submit a proposal. All submissions must be sponsored or endorsed by a faculty member at Texas A&M University. Explorations publishes student research and scholarly work from all disciplines. Format for proposals
When submitting your proposal for consideration, please include the following:
• Name
• Email address
• Phone number
• Department
• Classification
• Area of research
• Name and contact information of your faculty advisor/mentor
• Title of the proposed project
• Your contribution or role in the research
• An abstract of no more than 250 words
The proposal should provide an overview of the project’s objectives and methods. It should also include a description of the project’s importance to the student’s field of study and to others outside the field. Note: Because Explorations is a multi-disciplinary journal targeting a general audience, please use non-technical language in your proposal. Necessary technical words must be defined.
Format for creative works
• Only one submission per student.
• All creative work requires a faculty endorsement. A faculty member in the field of your work must approve your piece for publication in a serious scholarly journal. If you have difficulty locating a faculty member to review your work, Explorations may be able to provide suggestions.
• All genres of creative work are welcome; however, because of the requirement for faculty endorsement, please remember that your submission should relate to creative work currently being taught at the university.
• Your work must be accompanied by a descriptive sidebar of 500–700 words. The sidebar must address:
- Why did you choose this topic?
- Who are your creative influences?
- How does this style or medium help you to communicate your idea?
- What studies were done to develop your piece? How did they contribute to its persuasiveness, depth, vision, or styling?
• Please limit prose and poetry submissions to 3,500 words. This word limit includes your scholarly sidebar, a minimum of 500 words.
The deadline for submissions is to be announced; however, submissions are typically welcome at the end of the fall semester. Please visit explorations.tamu.edu for more information.
To Support Explorations
Explorations would not be possible without the generous sponsorship of our contributors and the Association of Former Students. They directly support our ability to showcase the best of Aggie undergraduates' scholarly accomplishments. With their support, Explorations will continue to present these outstanding achievements to Texas A&M, the state of Texas, and beyond. Explorations continues to highlight scholarly work produced by Texas A&M undergraduate students through generous gifts provided by our readership. To support Explorations, please make a check payable to Texas A&M University, note that you would like to contribute to Explorations, and send it to the Office of Honors and Undergraduate Research at:

Honors and Undergraduate Research
114 Henderson Hall
4233 TAMU
College Station, Texas 77843-4233