PennScience Journal Spring 2011



Contents

Features
News Briefs
Arthur Argall & Isabel Fan
Nanotech at Penn
Glen Brixley, Sanders Chang & Susan Sheng
The Properties and Applications of Carbon Nanotubes
Brian Laidlaw
Nanomedicine - Medicine of the Future?
Sally Chu & Jiayi (Jason) Fan

Interviews
Dr. Haim Bau
Department of Mechanical Engineering and Applied Mechanics
Dr. Karen Winey
Department of Materials Science and Engineering

Research Articles
An Examination on the Controls of Wind Magnitude and Orientation on Dune Migration
Claire Masteller
Neural Precursor Cells Enriched for GABAergic Neurons and Their Effects in Peripheral Nerve Injured Rats
Ricardo Solorzano
Performance of ZigBee PRO Mesh Networks with Moving Nodes
Varun Sampath
Biofuel from Plant Fibers: Coculturing of Ligninolytic, Cellulolytic, and Fermenting Organisms
Vinayak Kumar


About PennScience
PennScience is a peer-reviewed journal of undergraduate research published by the Science and Technology Wing at the University of Pennsylvania and advised by a board of faculty members. PennScience presents relevant science features, interviews, and research articles from many disciplines, including the biological sciences, chemistry, physics, mathematics, the geological sciences, and computer science. PennScience is a SAC-funded organization. For additional information about the journal, including submission guidelines, visit http://www.pennscience.org.

Journal Staff

EXECUTIVE BOARD
Editors-in-Chief: Vishesh Agrawal, Arthur Argall
Editing Managers: Brian Laidlaw, Nikhil Shankar
Assistant Editing Managers: Varun Patel, Emily Xue, Kevin Zhang
Layout Managers: Isabel Fan, Hijoo Karen Kim
Assistant Layout Managers: Steven Chen
Publicity Manager: Steven Chen
Writing Managers: Isabel Fan, Brian Laidlaw
Faculty Advisors: Dr. M. Krimo Bokreta, Dr. Jorge Santiago-Aviles
Cover Design: Weiren Liu

GENERAL STAFF
Writing: Paul Blazek, Glen Brixley, Sanders Chang, Sally Chu, Jiayi (Jason) Fan, Vinayak Kumar, Susan Sheng, Kevin Zhang
Editing: Peter Bittar, Paul Blazek, Glen Brixley, Sally Chu, Jiayi (Jason) Fan, Jake Gissinger, Xiao Li, Weiren Liu, Samkit Mehta, Kathryin Rooney, Catherine Wang


Letter from the Editors

Dear Readers,

We are proud to introduce you to the second issue of the 9th volume of PennScience. The theme of this issue, nanotechnology, was inspired by the exciting developments in nanotechnology occurring at the University of Pennsylvania. Penn’s nanotechnology resources, including the Nano/Bio Interface Center and the planned Krishna P. Singh Center for Nanotechnology, make Penn one of the foremost centers for nanotechnology research. These resources bring together researchers from the School of Arts and Sciences, the School of Engineering and Applied Science, and the School of Medicine to tackle critical problems in drug delivery, DNA sequencing, and alternative energy using the tools of nanotechnology.

But what is nanotechnology? PennScience examined this exciting field from a range of perspectives. We present interviews with two Penn faculty members, Professors Haim Bau and Karen Winey, who are doing groundbreaking research in nanomotors and nanomaterials. Additionally, our Writing committee has written a series of compelling articles on the issue. Glen Brixley, Sanders Chang, and Susan Sheng give an overview of nanotechnology research at Penn, highlighting developments in molecular motion, biomolecular optoelectronic function, and molecular probes. Brian Laidlaw describes the novel applications and science behind carbon nanotubes, while Sally Chu and Jason Fan detail the possibilities of nanomedicine.

This semester’s PennScience also publishes several terrific undergraduate research papers. Claire Masteller examined the relationship between wind magnitude and dune formation at White Sands National Monument. Ricardo Solorzano studied neuropathic pain by injecting neural precursor cells into the rat spinal cord to develop a novel pain therapy. Varun Sampath and Chester Hamilton explored the application of ZigBee PRO mesh networks in wireless communication. Lastly, Vinayak Kumar developed a new coculturing process using plant fibers to synthesize biofuels.

We have enjoyed our time as co-Editors-in-Chief, and we are pleased to introduce Isabel Fan as the new Editor-in-Chief of PennScience. As we leave, we would like to thank the groups and individuals that have made our work at PennScience possible. First, we would like to thank our staff for their dedication and enthusiasm for the journal. We owe our funding to the Student Activities Council and the Science and Technology Wing, without which we could not publish a high-quality journal. We would also like to thank our faculty advisors for their constant support and insight. Finally, we would like to thank the Penn faculty who took the time to meet with us to discuss their research.

Thank you for reading PennScience, and we hope you enjoy our latest issue!

Sincerely,
Arthur Argall and Vishesh Agrawal
Co-Editors-in-Chief



News Briefs
Arthur Argall, Isabel Fan

Facilities

New Penn Nanotechnology Center: Penn President Amy Gutmann joined the deans of the School of Arts and Sciences and School of Engineering and Applied Sciences in a groundbreaking ceremony for the new Krishna P. Singh Center for Nanotechnology earlier this semester. The Center was named for Krishna Singh, a Penn alumnus and Trustee, who provided a $20 million gift to establish the new nanotechnology hub. The new facility will house microscopy laboratories, optics labs, and 10,000 square feet of clean rooms, and will serve as a regional center for interdisciplinary research and innovation. Source: http://www.upenn.edu/pennnews/current/node/4177

Research

Cellular Forces Measured: Christopher Chen, the Skirkanich Professor of Innovation in Bioengineering, has created a method for scientists to measure the forces cells exert as they move in three-dimensional environments. Previously, it had only been possible to measure cell forces in one and two dimensions. Chen achieved this three-dimensional feat by surrounding cells with synthetic hydrogels and monitoring the positions of fluorescent beads around the cells. Three-dimensional modeling of cell forces will contribute significantly to how scientists study cell movements, including those involved in tissue formation and cancer metastasis. Source: http://www.upenn.edu/pennnews/news/new-technique-created-penn-allows-researchers-study-cell-forces-3-d

Language Complexity: A study conducted by Department of Psychology fellow Gary Lupyan and a colleague at the University of Memphis has concluded that languages spoken by more people and over a larger geographical area tend to have simpler grammar. Lupyan and his colleague examined the relationship between scale and language by assessing the “relationship between who and by whom a language is spoken and its complexity”. The researchers observed that the more people use a language, the less complex its pronoun and number systems tend to be. Source: http://www.upenn.edu/pennnews/current/research/030410.htm




New Dinosaur Species Discovered: Department of Earth and Environmental Science graduate student Andrew McDonald has identified a new dinosaur species. Jeyawati rugoculus is a ninety-one-million-year-old herbivore related to the duck-billed hadrosaurs. The bones of this species were discovered in New Mexico at a site where paleontologists had been working since 1996. McDonald began studying the bones in 2006 as an undergraduate at the University of Nebraska. While comparing the remains to the bones of other dinosaur species, he realized the specimen had unique features. He completed this project with Professor Peter Dodson of Penn’s School of Veterinary Medicine and School of Arts and Sciences. Source: http://www.upenn.edu/pennnews/current/research/061010.html

Boltzmann Equation Solved: The Department of Mathematics’ Philip T. Gressman and Robert M. Strain have solved the Boltzmann equation in a project funded by the National Science Foundation. James Clerk Maxwell and Ludwig Boltzmann created this seven-dimensional equation in the 1860s and 1870s, using it to describe how gas molecules are distributed in space and how they respond to physical factors like temperature, pressure, and velocity. Gressman and Strain used partial differential equations and harmonic analysis to prove the existence of classical solutions and rapid time decay to equilibrium. In other words, they have shown that, according to the Boltzmann equation, small disturbances in a gas are short-lived and the molecules quickly return to equilibrium. Source: http://www.upenn.edu/pennnews/news/university-pennsylvania-mathematicians-solve-140-year-old-boltzmann-equation-gaseous-behaviors
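For reference, the equation itself (not printed in the original brief) is commonly written as follows in the absence of external forces, where the unknown f(t, x, v) is the density of molecules at time t, position x, and velocity v; with three position coordinates, three velocity coordinates, and time, the problem is seven-dimensional:

```latex
\frac{\partial f}{\partial t} + v \cdot \nabla_x f = Q(f, f)
```

The left-hand side transports molecules freely through space, while the right-hand side, the collision operator Q, accounts for binary collisions between molecules.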

Newsmakers

Nobel Prize for Penn Alum: Penn alumnus Ei-ichi Negishi was one of three scientists awarded the 2010 Nobel Prize in Chemistry, receiving his award for research on palladium-catalyzed cross couplings in organic synthesis. Negishi earned his bachelor’s degree from the University of Tokyo in 1958 and his Ph.D. in chemistry from the University of Pennsylvania in 1963. His research is largely in the field of organic chemistry, and he has made numerous discoveries, including the Pd- or Ni-catalyzed cross-coupling reaction of organometals containing Zn, Al, and Zr, known as Negishi coupling. He has published 375 papers, which have received nearly six thousand citations. Negishi currently serves as the Herbert C. Brown Distinguished Professor of Chemistry at Purdue University. Sources: http://www.upenn.edu/pennnews/current/latestnews/100610.html and http://www.chem.purdue.edu/negishi/




Nanotech @ Penn

Glen Brixley, Sanders Chang and Susan Sheng

The Nano/Bio Interface Center
Penn prides itself on integrating different disciplines to tackle modern problems more effectively. The Nano/Bio Interface Center (NBIC) is one of many research institutes at Penn, bringing together faculty from the School of Engineering and Applied Sciences, School of Arts and Sciences, School of Medicine, Graduate School of Education, Wharton School, Netter Center for Community Partnerships, and even professors at Drexel University. Together, these departments work to bring new directions to problems in the life sciences and in engineering, and to use their combined expertise to fully realize the potential of nano-biotechnology.

The NBIC divides its researchers into two main groups: the Biomolecular Optoelectronic Function group and the Molecular Motions group. A third group, the Single Molecular Probes group, develops tools, such as methods to study single molecules, that can be used by the other two research groups. The research being conducted by these groups has far-reaching applications, ranging from the development of nanoelectronics to the creation of new medical diagnostic devices.

In addition to being a leader in nanotechnology research, the NBIC also plays important roles in education and community outreach. Both undergraduate and graduate students have various options for incorporating nanotechnology-oriented classes into their studies, and there are research grants available for conducting nanotechnology-related research: undergraduates can apply for an REU (Research Experience for Undergraduates) grant for summer research in nano-biotechnology, and graduate students can apply for the Integrative Graduate Education and Research Traineeship Fellowship. Furthermore, there are resources available for high school science teachers, and for middle school and high school students who are keen to get a head start in learning about nanotechnology. Finally, the NBIC hosts “NanoDay @ Penn,” a free event featuring exhibits, demonstrations, and laboratory tours.

Molecular Motions
The Molecular Motions group, led by faculty spanning six different departments and three schools, develops and uses techniques that “manipulate molecules and materials at the nanometer scale” (1) to study and understand the behaviors of molecular motors and biological movement within cells. The group tries to understand and characterize DNA and protein folding, as well as the motion of motor proteins along the cytoskeleton, with the goal of potentially modeling synthetic systems after biological ones. It also seeks to understand the interactions between molecules and their surrounding environment, and the effects of those interactions on behavior and properties. Recent projects include:

• Development of a micro-fabricated chip (the “nanoaquarium”) that can be used with transmission (TEM) and scanning transmission (STEM) electron microscopes to directly observe processes in liquid media. The chips have already been used to observe the thermal motion and self-assembly of gold particles in aqueous solutions. (Joseph M. Grogan and Haim H. Bau)

• Studying the properties that affect the adhesion of biomolecules to soft nanostructures. Surface charge, substrate modulus, and nanodomain size were found to be factors that influenced attachment. With increased understanding of the attachment of biomolecules, it is hoped that molecular-motor-driven separation, purification, and detection devices can be developed, as well as biomolecular micro-electromechanical systems. (Jay Park, Matthew Caparizzo, Yujie Sun, Yale E. Goldman, and Russell J. Composto)

• Use of silver nanostructures to enhance the fluorescent signal on individual ribosomes during protein synthesis. (Shashank Bharill, Chunlai Chen, Ben Stevens, Karol Gryczynski, Ignacy Gryczynski, and Yale Goldman)

References
1. “Molecular Motions.” Nano/Bio Interface Center. Nano/Bio Interface Center, 2010. Web. 13 Nov 2010. <http://www.nanotech.upenn.edu/researchMM.html>
2. Nano/Bio Interface Center, 2010. Web. 13 Nov 2010. <http://www.nanotech.upenn.edu/index.html>

Biomolecular Optoelectronic Function
The Biomolecular Optoelectronic Function group studies the mechanisms by which certain molecules operate in biological processes, and creates devices that employ these mechanisms for a variety of applications. Currently, researchers are working to incorporate organic molecules into nanoscale circuitry and optical systems such as solar cells. Research groups are also creating nanoscale devices that can work as chemical sensors or as catalysts for chemical reactions.

One specific area of study in this group is the use of graphene nanopores in DNA sequencing. Among other things, accurate genome sequencing would allow physicians to determine whether a person is at risk for developing certain diseases, and how that person would respond to various medications. In light of these benefits, the National Institutes of Health has been funding research to reduce the cost of genome sequencing from the present level of about $10,000 to below $1,000 (1). A group at Penn led by Dr. Marija Drndić is exploring the possibility that graphene nanopores can be used to sequence DNA more efficiently than present methods. In their most recent experiment, the Drndić group created a nanopore in a graphene membrane using the electron beam of an electron microscope.

FIGURE: Researchers in the Biomolecular Optoelectronic Function group convert light to electrical energy via nanostructures made of gold particles linked to porphyrin molecules. Courtesy of NBIC.
The membrane was placed between two reservoirs of electrolyte solution. A potential difference was then applied to the solution, causing electrolyte ions to flow through the nanopore. Strands of DNA placed in the solution were caught by the flow and began to pass through the pore. The large DNA molecules partially blocked the flow of ions, creating a drop in the measured electrical current through the solution. This experiment proved the concept that the passage of DNA through a nanopore can be detected and quantitatively measured. The group’s future experiments will determine whether the base pairs in a DNA molecule can be distinguished individually as the molecule passes through a nanopore. The most promising option is to measure the electric potential across the nanopore itself using electrodes attached to the graphene membrane. Each base pair has a different resistance and should produce a different potential measurement as it passes through the pore. The sequence of potential values would then reveal the order of base pairs in the DNA molecule (2, 3).
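The current-blockade measurement described above lends itself to a simple numerical illustration. The sketch below is ours, not the Drndić group's analysis code; the trace, current levels, and event durations are all invented for demonstration. It builds a synthetic ionic-current recording and flags dips below a threshold as translocation events:

```python
import numpy as np

# Synthetic ionic-current trace (nA): open-pore baseline with three dips
# where a DNA strand transiently blocks the pore. All values are invented.
rng = np.random.default_rng(0)
baseline, blockade, noise = 10.0, 6.0, 0.15
current = baseline + noise * rng.standard_normal(10_000)
for start in (2_000, 5_500, 8_200):            # three translocation events
    current[start:start + 300] = blockade + noise * rng.standard_normal(300)

# Flag events as excursions below a threshold halfway between the two levels.
threshold = (baseline + blockade) / 2
below = current < threshold
edges = np.diff(below.astype(int))
starts = np.flatnonzero(edges == 1) + 1
ends = np.flatnonzero(edges == -1) + 1
for s, e in zip(starts, ends):
    depth = baseline - current[s:e].mean()
    print(f"event: {e - s} samples long, mean blockade depth {depth:.2f} nA")
```

A real analysis would also have to handle baseline drift and events that overlap the start or end of the recording.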

References
1. Herper M. 2010 January 12. Illumina’s cheap new gene machine. Forbes [Internet]. [cited 2010 Nov 14]. Available from: <http://www.forbes.com/2010/01/12/genome-illumina-sequencing-business-healthcare-cancer-autism.html>
2. Merchant CA, Healy K, Wanunu M, Ray V, Peterman N, Bartel J, Fischbein MD, Venta K, Luo Z, Johnson ATC, and Drndić M. 2010. DNA translocation through graphene nanopores. Nano Lett; 10(8):2915-2921.
3. Dr. Christopher A. Merchant. 2010 Nov 11. Personal Interview.
Image credit: Robert Johnson, Temple University (left); Merchant et al. (right)

Single Molecular Probes
Having the tools to analyze behavior at the molecular level is crucial for research in nanotechnology. The Single Molecular Probes group aims to develop more effective tools to probe molecular behavior for general use in nanotechnology research (1). This scope of research focuses on integrating knowledge from the other two themes, optoelectronics and the assessment of molecular motions.

Recent progress in this field includes the development of new techniques for scanning probe microscopy (SPM), a general type of microscopy in which physical probes are used to scan a specimen. This approach does away with the use of light and electrons, producing images that more closely resemble the physical nature of surfaces at the molecular level (2). One powerful SPM technique currently being refined is atomic force microscopy (AFM). Current methods for AFM place the probe tips in direct contact with the surface of the sample (3). Attractive and repulsive atomic forces position the tips at different distances from the sample. These distances can then be determined by deflecting laser beams off the probe tips and measuring the final angles and positions of these beams. From these nanoscale distances, a high-resolution 3D image of the sample’s surface can be produced.

The effectiveness of AFM depends strongly on the nature and characteristics of the probe tips used. The tip must be the right size and shape to engage in optimal atomic interactions with the surface of the sample, yet it must be sturdy enough to resist deformation from the external environment. A research group led by faculty member Robert Carpick has investigated the use of nanocrystalline diamond (NCD) for making AFM probe tips (4). Similarly, the research group of Dr. Yamakoshi has looked into chemically functionalizing the NCD AFM tips with different chemical compounds for possible applications in force spectroscopy, a technique for analyzing the mechanical properties of proteins using tension forces (5).

Continuous research in developing molecular probes and microscopy techniques is essential for meeting the demands of breakthrough research in nanotechnology. Furthermore, there is potential for using these tools for more general purposes, such as assessing the impact of nanotechnologies on the environment and on public health (1).

References
1. Bonnell, D. “Single Molecule Probes.” Nano/Bio Interface Center. Nano/Bio Interface Center, 2010. Web. 6 Nov 2010. <http://www.nanotech.upenn.edu/researchSMP.html>
2. “Overview of Scanning Probe Microscopy Techniques.” Nanoscience Instruments. Nanoscience Instruments, Inc., 2010. Web. 21 Nov 2010. <http://www.nanoscience.com/education/tech-overview.html>
3. “Atomic Force Microscopy.” Nanoscience Instruments. Nanoscience Instruments, Inc., 2010. Web. 21 Nov 2010. <http://www.nanoscience.com/education/AFM.html>
4. Carpick, Robert. “Diamond Probes for Multifunctional AFM.” Nano/Bio Interface Center. Nano/Bio Interface Center, 2010. Web. 6 Nov 2010. <http://www.nanotech.upenn.edu/nuggets/0081.html>
5. Yamakoshi, Yoko, Michael E. Drew, Benjamin Delamare, Robert Carpick, and Russ Composto. “Functionalized Nanocrystalline Diamond-Coated AFM Tips.” Nano/Bio Interface Center. Nano/Bio Interface Center, 2010. Web. 6 Nov 2010. <http://www.nanotech.upenn.edu/nuggets/0101.html>

What is Nanotech?
The origins of nanotechnology can be traced to the production of steel, paint, and rubber, whose properties arise from microscopic structures within the materials. The first purposeful work with particles on the nanoscale was performed by Richard Zsigmondy in 1914 in his ultramicroscopic studies of colloid solutions, in which he observed that the solutions were heterogeneous, with small particles suspended in the mixture. In these studies he defined the nanometer as 10^-9 meters in order to describe the size of the particles he was studying.

Nanotechnology still remained a largely undeveloped field when one of its biggest proponents, the renowned physicist Richard Feynman, stepped in. In a famous 1959 lecture entitled “There’s Plenty of Room at the Bottom,” he considered the future possibilities of manipulating individual atoms as a more direct control on chemical synthesis and discussed possible methods to achieve this scaling-down of technology and techniques (1). He ended with two challenges: to build a nanomotor, and to scale down print to 1/25,000 of its size so as to be able to print the Encyclopaedia Britannica on the tip of a pin. The nanomotor was produced rather quickly, a year later; the second challenge was achieved in 1985.

While academia was looking to continue to downsize research, technology corporations were also pushing the envelope in nanotechnology. Gordon Moore, co-founder of Intel, wrote in 1965 what came to be known as Moore’s law: that as technology progressed, the number of silicon transistors that could fit within an area on a chip would double every two years. The prediction has held true to this day, with transistors now 45-65 nm in size (2). However, the term “nanotechnology” itself finds its origins in a 1974 paper, “On the Basic Concept of Nano-Technology,” by Norio Taniguchi, which stated, “‘Nano-technology’ mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or one molecule” (3).

Nanotechnology took off in the 1980s. The discoveries of fullerenes, carbon nanotubes, and quantum dots created entire new subfields. In 1987 the first protein was engineered; in 1988 the first course on nanotechnology was offered at Stanford University; 1989 saw the first U.S. national conference on nanotechnology; and 1990 brought the first academic journal focused on research in this rapidly expanding field. Since then nanotechnology has captured the minds of the scientific and the popular world, bringing exciting discoveries and introducing the field to mainstream news outlets, congressional legislation, and popular culture. Nanotechnology today encompasses many fields, including carbon nanotubes, nanomaterials, nanomedicine, molecular self-assembly, nanoelectronics, robotics, quantum computing, consumer goods, and industrial products of all kinds.

-Kevin Zhang

References
1. Feynman R. There’s Plenty of Room at the Bottom. In: American Physical Society; 1959; California Institute of Technology; 1959.
2. Moore GE. Cramming more components onto integrated circuits. Electronics 1965;38.
3. Taniguchi N. On the Basic Concept of Nano-Technology. Proc ICPE Tokyo 1974;2:18-23.
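As a quick arithmetic illustration of the doubling rule described in the sidebar above: this is our own sketch, and the 1971 starting point, roughly 2,300 transistors on the Intel 4004, is a standard reference figure that does not appear in the article.

```python
def transistors(year, base_year=1971, base_count=2_300):
    """Projected transistor count, assuming a doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

# From ~2,300 transistors in 1971 to billions four decades later.
for year in (1971, 1985, 2000, 2011):
    print(year, f"{transistors(year):,.0f}")
```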



Nanotechnology Education at Penn
Since the founding of the Nano/Bio Interface Center in 2004, the University of Pennsylvania has been one of the leading universities in nanotechnology education and research. According to Small Times, a web-based nanotechnology journal, the nanotech program at Penn is the best in the nation in terms of research opportunities, and the fifth best overall (1). Currently, 25 undergraduates are pursuing the nanotechnology minor offered through SEAS, and 15 Ph.D. students are pursuing the graduate certificate in nanotechnology. Penn has also recently started a master’s degree program in nanotechnology, in which 7 students are enrolled; the degree is one of the first of its kind in the country (2). The programs are very flexible, with many courses to choose from, allowing students to concentrate their study in any area of engineering.

Penn Nanotech Society
Penn Nanotech Society is a student-run organization that aims to expose students to various themes in nanotechnology. The organization addresses topics ranging from academic research to the business aspects of a career in nanotechnology. Events are held throughout the year, including talks from business leaders as well as “Nanochats”, member-led discussions on recent research and developments in nanotechnology (3).

Krishna P. Singh Center for Nanotechnology
Under the Penn Connects plan to enhance and expand Penn’s campus, the Krishna P. Singh Center for Nanotechnology is a new building for interdisciplinary molecular research encompassing various fields in medicine and engineering (4). Funds to construct the building were donated by Penn alumnus Krishna Singh, founder of Holtec International and member of the SEAS Board of Overseers (5). The building will be located at the intersection of 32nd Street and Walnut Street.

NanoDay
NanoDay is an annual event sponsored by the Nano/Bio Interface Center (6). The event is intended to increase student awareness of careers in many areas of technology and science. Attendees can take part in the Nanotech Fair, an interactive exhibition displaying recent projects from various NBIC research groups. In addition, students can attend panel discussions concerning careers and research in nanotechnology. The event also holds activities geared toward high school students, such as tours of the different nanotechnology facilities at Penn and a science fair.

References
1. Stuart C. 2006. Gateway to greatness. Small Times [Internet] [cited 2010 Nov 13]; 6(3). Available from: <http://www.electroiq.com/index/display/article-display/256495/articles/small-times/volume-6/issue-3/features/cover-story/gateway-to-greatness.html>
2. Bonnell D. 2010 Nov 08. Nanotechnology education. E-mail message to author.
3. “Penn Nanotech Society.” Penn Nanotech Society. Penn Nanotech Society, 2010. Web. 6 Nov 2010. <http://www.nanotechsociety.org/?title=Society>
4. “Krishna P. Singh Nanotechnology Center.” Penn Connects - A Vision for the Future. University of Pennsylvania, 2010. Web. 21 Nov 2010. <http://www.pennconnects.upenn.edu/find_a_project/alphabetical/singh_nanotechnology_center_alpha/singh_nanotechnology_center_overview.php>
5. “Alumnus Krishna Singh’s $20 Million to Engineering.” University of Pennsylvania - Almanac 4 Sep 2007: n. pag. Web. 31 Oct 2010. <http://www.upenn.edu/almanac/volumes/v54/n02/singh.html>

6. “Events.” Nano/Bio Interface Center. Nano/Bio Interface Center, 2010. Web. 6 Nov 2010. <http://www.nanotech.upenn.edu/events.html>

FIGURE: Students present their research during NanoDay at the University of Pennsylvania.





The Properties and Applications of Carbon Nanotubes Brian Laidlaw

In any discussion of technologies of the future, it is inevitable that carbon nanotubes will eventually be introduced. With potential applications ranging from paper batteries to materials for spaceships, carbon nanotubes have captivated the imagination of countless people since being brought to widespread attention in 1991 (1). But what exactly is a carbon nanotube? And what properties make carbon nanotubes well suited for such a wide range of technologies?

Carbon nanotubes (CNTs) are members of the fullerene structural family. Fullerenes, along with diamond and graphite, comprise the three forms, or allotropes, in which carbon is naturally found. CNTs have a cylindrical nanostructure and can be constructed with a length-to-diameter ratio far larger than that of any known material (2). CNTs naturally align in rope-like structures and can be envisioned as atomic-scale chicken wire made of carbon atoms and their bonds. CNTs consist entirely of sp2 chemical bonds, and these covalent bonds are responsible for CNTs’ unique strength (3): sp2 bonds are stronger than the sp3 bonds that make up diamond, and thus require more energy to break. These properties give CNTs a strength-to-weight ratio over 300 times greater than that of steel (4). Nanotubes can also be constructed so that one tube nested within another can slide out with almost no friction, thereby forming a perfect bearing. This trait has led to the construction of the world’s smallest rotational motor, and it opens the door to the construction of molecular motors (5).

CNTs also exhibit novel electrical properties, which result from the symmetry and electronic structure of their network of sp2 bonds. CNTs are superior to silicon in terms of carrier mobility, the ease with which electrons can move through a material. Furthermore, CNTs can sustain an electrical current density three orders of magnitude greater than that of a typical metal, meaning they can carry roughly a thousand times as much current per unit of cross-sectional area (6). This property, coupled with their small diameter, makes them ideal for the construction of high-performance, high-power, flexible electronics. Carbon nanotubes are also excellent thermal conductors, with a thermal conductivity (the ability to conduct heat) several times greater than that of copper, itself known as a good conductor (7).

Considering all of the unique properties of carbon nanotubes, it is no surprise that this material is being considered for application in many new technologies. Among the groups developing this technology is the military. Carbon nanotubes’ remarkable strength makes them ideal candidates for next-generation body armor, as they have been shown experimentally to have a tensile strength over 25 times that of Kevlar, the current body armor material of choice (4). In 2007, the first step on the path toward carbon nanotube armor was taken by a group at Cambridge, which announced the creation of a fiber composed of carbon nanotubes that can be woven into body armor. This technology has since been licensed to a company in order to produce body armor (8). In addition, the US Army Research Office has a five-year, 50-million-dollar grant with MIT to develop and exploit nanotechnology to dramatically improve the protection offered to soldiers. This grant led to the creation of the MIT Institute for Soldier Nanotechnologies (ISN), whose ultimate goal is to create a 21st-century battlesuit that combines high-tech capabilities with light weight and comfort. Going beyond using the hardness of nanotubes to make a battlesuit of unmatched strength, this project is also utilizing other properties of nanotubes to create a suit that monitors health, eases injuries, communicates automatically, and reacts instantly to chemical and biological agents. Current battlesuits can weigh in excess of 140 pounds and still provide insufficient ballistic protection. The potential for miniaturization offered by nanotubes allows this weight problem to be circumvented and a truly integrated battlesuit to be created. There are currently five key strategic research areas and twenty-seven specific research projects at the ISN addressing the various problems associated with the creation of this battlesuit. While much work remains to be done before carbon nanotube based armor is common on the battlefield, the technological advances currently being made may soon bring this future within reach (9).
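To put the strength-to-weight comparison above in perspective, here is a back-of-envelope calculation. The figures are typical literature values, not measurements from this article: the nanotube strength is the upper value reported by Yu et al. (reference 4), and the densities are approximate.

```python
# Specific strength = tensile strength / density (approximate values).
materials = {
    # name: (tensile strength, GPa; density, g/cm^3)
    "high-strength steel": (1.2, 7.8),
    "Kevlar": (3.6, 1.44),
    "carbon nanotube": (63.0, 1.34),
}
specific = {name: s / rho for name, (s, rho) in materials.items()}
for name, value in specific.items():
    print(f"{name}: {value:.2f} GPa / (g/cm^3)")

ratio = specific["carbon nanotube"] / specific["high-strength steel"]
print(f"CNT vs. steel, strength-to-weight: ~{ratio:.0f}x")   # ~300x
```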


Another potential use of carbon nanotubes is the creation of what is known as a space elevator: a method of moving objects from a celestial body such as a planet into space without the use of rockets. Common variants of this elevator include a cable (or ribbon) that reaches from Earth to a point in geostationary orbit around the Earth. This would allow the elevator to behave like a satellite in that it would appear stationary with respect to a fixed point on the rotating Earth, so the cable attaching the elevator to the Earth could maintain a fixed length. A space elevator would revolutionize the mechanism for carrying payloads into space at low cost. Once in space, the payloads could ascend by mechanical means to any Earth orbit or be sent to other planetary bodies by using the cable as a sling.

There are many challenges that must be overcome before this technology becomes feasible (10). The largest problem associated with the space elevator is the creation of the tether that reaches from the surface of the Earth into space. This tether must be extraordinarily strong and light, making carbon nanotubes an ideal material for its construction. The cable would have to be well over 24,000 miles long and able to endure enormous stress; furthermore, it would have to be manufacturable in large quantities and cost-effective. Carbon nanotubes with a tensile strength large enough to make this cable have not yet been produced, although this obstacle is likely to be overcome as the technology improves. The NASA Institute for Advanced Concepts has explored the idea of a space elevator, with a 2005 report stating that the “lunar space elevator is feasible, and can be constructed of available materials to fit in the timeframe of the President’s Moon-Mars initiative” (11). NASA’s Centennial Challenges program, in partnership with the Spaceward Foundation, also runs the ‘Elevator: 2010’ competition, in which prizes are given out for the development of technologies associated with the space elevator. Similar to the Ansari X Prize, which offered prizes for the development of reusable manned spacecraft, this competition aims to spur the development of space elevator related technologies and has already led to the creation of a climber capable of ascending a cable at 3.9 m/s over 1,000 meters (12).

These are just two of the potential applications of carbon nanotubes. It has been only 20 years since this technology was brought to the widespread attention of the scientific community, with tremendous advances made within this time. Carbon nanotubes are the focus of a massive amount of current research and are widely recognized for their amazing properties. So while the technology might not currently exist to create nanotubes capable of fully realizing their potential, it is only a matter of time before the problems associated with nanotubes are overcome. Once this occurs, the possible applications of this technology are almost limitless and will undoubtedly bring many technologies that currently exist only in science fiction firmly into the realm of reality.
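Where does the "well over 24,000 miles" figure come from? A quick sketch of the orbital arithmetic, using standard physical constants (our own illustration; none of these numbers appear in the article):

```python
import math

MU = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
T = 86_164.1           # one sidereal day, s
R_EARTH = 6_371e3      # mean Earth radius, m

# Kepler's third law gives the geostationary orbital radius:
# r = (MU * T^2 / (4 * pi^2))^(1/3)
r = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_miles = (r - R_EARTH) / 1_609.34
print(f"geostationary altitude: ~{altitude_miles:,.0f} miles")  # ~22,200

# A practical tether must extend well beyond geostationary altitude to
# anchor a counterweight, which pushes the total length past 24,000 miles.
```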

References
1. Iijima S. Helical microtubules of graphitic carbon. Nature 354 (1991), pp. 56-58.
2. Wang, X.; Li, Q.; Xie, J.; Jin, Z.; Wang, J.; Li, Y.; Jiang, K.; Fan, S. (2009). “Fabrication of Ultralong and Electrically Uniform Single-Walled Carbon Nanotubes on Clean Substrates”. Nano Letters 9 (9): 3137-3141.
3. Jiang DE, Sumpter BG, and Dai S. Unique chemical reactivity of a graphene nanoribbon’s zigzag edge. J. Chem. Phys. 126, 134701 (2007).
4. Yu, Min-Feng; Lourie, Oleg; Dyer, Mark J.; Moloni, Katerina; Kelly, Thomas F.; Ruoff, Rodney S. (28 January 2000). “Strength and Breaking Mechanism of Multiwalled Carbon Nanotubes Under Tensile Load”. Science 287 (5453): 637-640.
5. Fennimore AM, Yuzvinsky TD, Han WQ, Fuhrer MS, Cumings J, Zettl A. Rotational actuators based on carbon nanotubes. Nature. 2003 Jul 24;424(6947):408-10.
6. Javey, Ali; Guo, J; Wang, Q; Lundstrom, M; Dai, H (2003). “Ballistic Carbon Nanotube Transistors”. Nature 424 (6949): 654-657.
7. Pop, Eric; Mann, David; Wang, Qian; Goodson, Kenneth; Dai, Hongjie (2005-12-22). “Thermal conductance of an individual single-wall carbon nanotube above room temperature”. Nano Letters 6 (1): 96-100.
8. Rincon, Paul (2007-10-23). “Science/Nature | Super-strong body armour in sight”. BBC News. <http://news.bbc.co.uk/1/hi/sci/tech/7038686.stm>
9. “MIT Institute For Soldier Nanotechnologies”. <http://web.mit.edu/isn/>
10. Pugno NM, Bosia F, Carpinteri A. Multiscale stochastic simulations for tensile testing of nanotube-based macroscopic cables. Small. 2008 Aug;4(8):1044-52. Review.
11. Pearson J, Levin E, Oldson J, and Wykes H. Lunar Space Elevator for Cislunar Space Development. Phase I Technical Report. May 2, 2005.
12. Elevator:2010 - The Space Elevator Challenge. <http://www.spaceward.org/elevator2010>
13. EVHS Robotics Research and Design. <http://rd.evhsrobotics.com/09-0116/design.html>




Nanomedicine - Medicine of the Future? Sally Chu and Jiayi (Jason) Fan

Nanomedicine is the medical application of nanotechnology; it uses nanoparticles (NPs) to treat or detect diseases (1). It is a relatively new but rapidly developing field, with around 130 drug delivery methods already being researched worldwide (2). Nanomedicine primarily deals with combating, diagnosing, and imaging one of the most fatal diseases currently lacking effective treatments: cancer. Over 50 companies are developing such NPs, with 34 formed since 2006 and a dozen currently involved in clinical trials (3).

The problems with available cancer treatments, particularly chemotherapy, are the drugs’ small bioavailability (the ability to concentrate their dosage at the cancerous tumor), their toxicity, and the risk of degradation by the body’s immune system before they reach the tumor (3). Nanoparticles can be carefully designed to overcome all three obstacles. The advantage of nanomedicine over traditional chemotherapy is that nanoparticles can be made piece by piece, with each separate part serving a specific purpose, and then brought together as a multi-functional drug. For example, a tumor-killing compound can be encased in soluble, sturdy yet biodegradable materials like polyethylene glycol (PEG), sugar, and phospholipids. A molecule that binds marker proteins characteristic of tumor cells can be attached to this NP shell so that it interacts exclusively with cancer cells. This way, the drug’s bioavailability is significantly increased while normal cells are kept safe (3). The tumor’s increased blood circulation and lack of lymphatic drainage cause it to accumulate molecules abnormally, which helps to amplify the dosage concentration (4).

FIGURE: Red blood cells (yellow) deliver nanoparticles (red) with the chemotherapy drug inside to tumor cells (blue). Nanoparticles protect normal cells from the toxic drug (3).

The same method can be applied to the newly emerging RNA interference treatment of cancer. MicroRNA (miRNA) and short-interfering RNA (siRNA) are very effective at repressing or degrading mRNAs, such as growth-specific mRNAs, thereby causing tumors to stop dividing. RNA, however, is very fragile and can be broken down in the blood by RNases if administered alone. NPs can be used to form a shell around the miRNA or siRNA, so drugs previously too toxic or fragile can now be used (3, 5).

Another novel approach utilizes the optical properties of nanoparticles themselves to eliminate tumor cells. Gold NPs or graphene sheets tagged with targeting markers can bind to cancer cells. When irradiated with infrared light, which is harmless to the human body, these materials overheat the tumor and kill it (6, 7). In addition, by borrowing modern computer chip-making techniques, almost any type of NP can be manufactured quickly and accurately. The geometry of NP shells can also be modified to fine-tune their duration in the bloodstream and their absorbance. For example, stiffer particles are cleared from the blood sooner, and cylindrical particles are absorbed better than spherical ones (3).

Nanomedicine’s other main application is in imaging and diagnosis. Nanomedicine allows for better detail and precision than traditional approaches, especially in tracing cancer cells. There are primarily three tracing methods: quantum dots (nanocrystals), nanowires, and microcantilevers. Quantum dots are tiny crystals, such as cadmium selenide (CdSe), that glow when stimulated by ultraviolet light; the color of the emitted light changes depending on the size of the crystals. Latex bead probes designed to bind to specific DNA sequences, like those characteristic of cancer, can be filled with these crystals. The light-stimulated probes then emit unique bar codes based on the design, allowing scientists to create multiple labels and identify numerous regions of DNA simultaneously (8). Those bar codes also facilitate comparison of tumor-specific DNAs so that mutations can be readily located, especially since their fluorescence is stable for long periods of time (9). This is important in detecting cancer, which results from the accumulation of many different changes within a cell. And since quantum dots improve image resolution and detect tumors earlier than MRIs and CAT scans, they make it easier to follow the progression of cancer.
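The size-to-color relationship mentioned above can be made concrete with the textbook particle-in-a-sphere (Brus) approximation. The sketch below is illustrative only: the CdSe band gap and effective masses are typical literature values, not numbers from this article, and the Coulomb correction is neglected.

```python
import math

HBAR = 1.0546e-34; M_E = 9.109e-31; E_CHARGE = 1.602e-19
E_GAP = 1.74 * E_CHARGE                     # bulk CdSe band gap, J
ME_EFF, MH_EFF = 0.13 * M_E, 0.45 * M_E     # effective electron/hole masses

def emission_wavelength_nm(radius_nm):
    """Bulk gap plus the quantum-confinement energy, converted to nm."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1/ME_EFF + 1/MH_EFF)
    energy = E_GAP + confinement
    return 2 * math.pi * HBAR * 2.998e8 / energy * 1e9   # lambda = hc/E

for radius in (1.5, 2.0, 3.0):   # nm: smaller crystals emit bluer light
    print(f"radius {radius} nm -> emission near "
          f"{emission_wavelength_nm(radius):.0f} nm")
```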


Nanowires can also be engineered to pick up cancer cell markers. Jim Heath at the California Institute of Technology designed stretches of DNA that find particular RNA structures, which can then be followed with these nanowires (10). Antibodies linked to the nanowires act as receptors for cancer marker proteins. A current is run through the nanowires, and when the receptors come into contact with the proteins, a momentary change in conductance indicates the marker’s presence. These changes have a characteristic duration based on the receptor being bound, so scientists can follow the tumor’s morphology as it responds to the cancer drugs via these changes in conductance. Surprisingly, these types of tests require very little from the patient. Charles Lieber of Harvard University says, “A nanowire array can test a mere pinprick of blood in just minutes, providing a nearly instantaneous scan for many different cancer markers” (11). In the future, it may be possible to attain a greater understanding of cancer metastasis through the use of nanowires.

While nanomedicine shows promising potential for medical applications, some basic problems must still be considered. Many nanoparticles, like CdSe, are toxic and can accumulate in different organs, especially the liver. Because NPs are so small, they can travel freely throughout the body and pass through the blood-brain barrier, so it would be difficult to regulate where they go in the body after they have been delivered to the target cells. As of now, this accumulation cannot be controlled, and it is uncertain whether the NPs will eventually be excreted (12). Before these NPs enter the pharmaceutical market, their metabolic pathway in the body must be thoroughly studied; otherwise, a new poison could be introduced into medicine, where the detriments outweigh the benefits. Since it is still a fledgling field mostly in the research stage, nanomedicine may also pose a cost issue; it is difficult to say whether nanomedicine will be affordable to the general public. It could be too expensive, or cause other complications even if it is affordable and raises life expectancy. Perhaps our best option is to give the field more time to develop before deciding whether nanomedicine should be widely implemented.

FIGURE: Sensitivity of imaging in a cancerous mouse: (a) imaging of GFP-infected cancerous cells; (b) multibead quantum dot imaging (13).

References
1. Barnard, A.S. Nanohazards: Knowledge is our first defense. Nature Materials 5, 245-248 (2006).
2. Editorial. Nanomedicine: A matter of rhetoric? Nature Materials 5, 243 (2006).
3. Service, R.F. Nanoparticle Trojan Horses Gallop From the Lab Into the Clinic. Science 330, 314-315 (2010).
4. Amiji, M.M. Nanotechnology for cancer therapy. (2007).
5. Ferrari, M. Experimental therapies: Vectoring siRNA therapeutics into the clinic. Nature Reviews Clinical Oncology 7, 485-486 (2010).
6. Loo, C., Lin, A., Hirsch, L., Lee, M., Barton, J., Halas, N., West, J., and Drezek, R. Nanoshell-Enabled Photonics-Based Imaging and Therapy of Cancer. Technology in Cancer Research & Treatment 3, 33-40 (2004).
7. Yang, K., Zhang, S., Zhang, G., Sun, X., Lee, S., and Liu, Z. Graphene in Mice: Ultrahigh In Vivo Tumor Uptake and Efficient Photothermal Therapy. Nano Letters 10, 3318-3323 (2010).
8. Nanodevices - National Cancer Institute. National Cancer Institute - Comprehensive Cancer Information. Retrieved from http://www.cancer.gov/cancertopics/understandingcancer/nanodevices
9. Vashist, S.K., Tewari, R., Bajpai, R.P., Bharadwaj, L.M., and Raiteri, R. Review of Quantum Dot Technologies for Cancer Detection and Treatment. Journal of Nanotechnology Online (2006). Retrieved from http://www.azonano.com/Details.asp?ArticleID=1726
10. Zandonella, C. The tiny toolkit. Nature 423, 10-12 (2003).
11. Harvard University. Nanowires Can Detect Molecular Signs Of Cancer, Scientists Find. ScienceDaily (2005). Retrieved from http://www.sciencedaily.com/releases/2005/09/050923153551.htm
12. Hett, A. Nanotechnology: Small matter, many unknowns. Swiss Re (2004).
13. Gao, X., Cui, Y., Levenson, R.M., Chung, L.W.K., and Nie, S. In Vivo Cancer Targeting and Imaging with Semiconductor Quantum Dots. Nature Biotechnology 22, 969-976 (2004).



INTERVIEWS

Interview with Haim H. Bau, Ph.D.

Dr. Bau is currently a Professor and Undergraduate Chair of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. Prof. Bau is a Fellow of the American Society of Mechanical Engineers and received a Presidential Young Investigator Award (1984). His research interests are in nano- and macro-fluidics.

Can you describe your work with molecular motors?
Our interest in molecular motors comes from two different directions. One is that we are looking at molecular motors as mechanical devices. We are mostly interested in the myosin V family, which are processive motors. These are two-legged walking machines, and we would like to understand how they walk and their mechanical properties. They also can carry cargo (their responsibility within the cell), which points to our second topic of interest: utilizing the molecules as shuttles.

Would this technology be implemented as a biological or mechanical system?
It would be a combination of a biological and synthetic device. This is what we mean by the “bio-nano interface.” There are mechanical components that we need to make synthetically, but we also would like to take advantage of the natural characteristics of the molecules. Let me give you an example: suppose we would like motors to carry cargo from right to left. These motors walk on tracks of actin filaments, and they tend to walk in a particular direction: the myosin V motor always walks towards the plus end and the myosin VI motor always walks towards the minus end. Unfortunately nature doesn’t always cooperate with us, and doesn’t arrange the filaments so they all face in the same direction. If we want to make a synthetic device with all the cargo moving in one direction, we need to align the filaments so they all face in one direction as well. And this brings nanotechnology into play.

What are the advantages of using nanotechnology in this sort of research, as opposed to traditional molecular biology methods?
We are in the domain of single molecules, and these molecules are somewhat small, so you have to use the appropriate tools to position them and study them. For example, we suspend a filament across two electrodes, and a motor carrying a bead travels along it. We can then see the xyz coordinates of the bead, which tell us the trajectory of the motor. This experiment generated a very convenient motility assay; it gives us an example of the trajectory of the motor, and we see that it “swings around” the filament. If we had the filament sitting flat on a surface, we would not be able to get this kind of motion – it would essentially restrict the motion of the motor.
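The "swings around the filament" observation suggests a simple way such tracks can be quantified. The sketch below is a hypothetical analysis of ours, not the Bau lab's code: given xyz positions of a bead moving along a filament aligned with the z axis, the unwrapped azimuthal angle shows whether the motor spirals around its track.

```python
import numpy as np

def unwrap_azimuth(xyz):
    """Cumulative angle (radians) of the bead about the filament (z) axis."""
    x, y = xyz[:, 0], xyz[:, 1]
    return np.unwrap(np.arctan2(y, x))

# Synthetic track: a motor advancing along z while circling the filament.
t = np.linspace(0, 10, 500)
track = np.column_stack([40 * np.cos(0.8 * t),   # nm
                         40 * np.sin(0.8 * t),
                         5 * t])
angle = unwrap_azimuth(track)
print(f"net rotation: {(angle[-1] - angle[0]) / (2 * np.pi):.1f} turns")
```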

Can you elaborate on some of the techniques you use in your research?
We looked at constructing a device where we used motors as shuttles, and as I described earlier, we had to find a way to align all the filaments in the same direction, not only parallel to each other but with the plus ends on the same side of the filament. What we did was to functionalize a surface with biotin-BSA. Then we add streptavidin and lay down filaments with gelsolin on the ends that are also functionalized with biotin. They are connected to the surface through the streptavidin. The filaments tend to stand erect above the surface, and we can induce flow to pull them down. We can then introduce a suspension of motors. The motors land on the filaments, and in the presence of ATP they start “walking.” That way we can monitor them and make sure all of them are processing in the same direction.

We are also thinking about looking at the fact that the filament network and motors are the building blocks of the cell. Perhaps we can “add” one component at a time to get a better understanding of the functionality of the cell. So most current projects are taking a cell and trying to go from a very complex device, or “factory,” if you will, to understanding the function of individual components. The thinking here, which is admittedly extremely ambitious and somewhat speculative, is to add one component at a time and see what kind of functionality we can get from them. We could conceivably develop some functionality that is available in a cell.

How did you get involved in this type of research? Will you be looking at other molecular motors in the future?
Well, it’s a long story. We started by developing electrostatic means to nanoposition carbon pipes, and it occurred to us at some point that we could perhaps use the same techniques to position macromolecules. So we suggested this to Yale Goldman [Professor, Physiology Department] and he agreed. We tried it with the positioning of actin filaments using electrical fields, and to our great luck it worked the first time around, so we were sold on the project.

We are looking at the family of myosin processive motors. There are two types of molecular motors: some, like myosin II, are essentially anchored to actin in the muscles, which is the type that I am not involved in, but the processive motors are quite diffuse. We don’t necessarily have an interest in any particular motor, but are using it as a means towards the objective of discovering the biological structure of the motor itself.

Do you think that the role of nanotechnology in nano-bio research will continue expanding in the future?
Yes, there is no question that it will. It is part of a general trend to bring physical sciences and engineering into the field of biology. Some years back these were completely separate fields, and in the last 20-25 years there has been an ongoing convergence of people in the physical sciences getting more and more involved in biological processes and recognizing that many of the skills and tools that were developed in physics and engineering are applicable to biology.

Interview by Emily Xue




Interview with Karen I. Winey, Ph.D.

Prof. Winey is currently a Professor of both Materials Science and Engineering and Chemical and Biomolecular Engineering at the University of Pennsylvania. Since joining Penn in 1992 she has developed research expertise in polymeric materials, including ion-containing polymers, nanotube-polymer composites, and block copolymers, with a specialty in materials manipulation and morphological characterization. Prof. Winey received a National Science Foundation Young Investigator Award (1994), was elected a Fellow of the American Physical Society (2003), and received an NSF Special Creativity Award (2009-2011). Her research involves structure-property relationships in polymers such as ion-containing polymers, ionomers, and polymer nanocomposites.


What kind of research do you do at Penn?
The research that we do, broadly speaking, is in materials science, but the materials we are most interested in are polymers. Within my polymer research group we work on both ion-containing polymers and polymer nanocomposites. Ion-containing polymers have ions attached to them, covalently bonded into the polymer structure; those ions will assemble and form nanoaggregates. Ion-containing polymers are not typically thought of as nanotechnology, but there are a lot of interesting, new things happening in the field because they are used for a number of energy applications.

If your focus is on nanotechnology, you are probably more interested in our work on polymer nanocomposites, which involves taking nanoparticles of different types and embedding them in a polymer matrix. Macroscopic polymer composites are very common. Reinforced fiberglass in hot tubs, for instance, is basically glass fibers in an epoxy resin: macroscopic fibers inside a polymer matrix. Continuous-fiber graphite composites in golf clubs and tennis rackets are composites as well. The only difference with nanocomposites is that the embedded particles are nanoscale. We use two classes of particles: carbon nanotubes – single-wall or multiwall – and silver nanowires.

What kind of research do you do with carbon nanotubes and silver nanowires?
We work on fabricating nanocomposites. For example, we research how to get the nanoparticles inside the plastic in a controlled way. We look at how the morphology and structure matter for mechanical properties, thermal conductivity, electrical conductivity, and flammability. In general, we research the fabrication of the composites, the methods for that fabrication, and all the different properties of the composites. Our fabrication methods have been patented, but it’s unknown how they will be scaled up.

We are most interested now in the electrical properties of polymers. Most plastics are insulating, but many times you want a polymer that is electrically conductive, for uses such as electrical shielding in electronics and lightweight conductors. We are researching the feasibility of conductive plastics and are trying to connect experimental research with theoretical work.

What do your students do?
When we are ready to make composites, we start by obtaining high-quality nanoparticles. If we buy carbon nanotubes, we start by purifying them. We must remove all metal particles and amorphous carbon with a type of wet chemistry that involves washing them with acid and applying heat treatment. Then we make a polymer solution and precipitate the nanocomposites by pouring the solution into a nonsolvent. This process results in a carbon nanocomposite “crumble” that we hot press into a uniform shape.

What kinds of tests do you do with nanocomposites? We measure their electrical properties with current/voltage curves. As a matter of fact, something we just found was published in Advanced Functional Materials. All electrical conductivity in polymer nanocomposites is based on percolation. If the particles are close enough to one another that they touch, then you get a good conductive pathway across the sample. The electrons basically go from one particle to the next and never have to go through the insulating polymer. If you are below the percolation threshold, then the particles are not touching and the composite is insulating. If you are above the threshold, then the particles are touching and you get a polymer nanocomposite that is very conductive. The difference in how much they touch can change the electrical conductivity by 5 orders of magnitude. Polymer nanocomposites are usually used as insulators or conductors based on their given compositions. For example, if you add 0.2 weight percent carbon nanotubes into a polymer, you can make the polymer conductive. Carbon nanotubes are a bit messy because they can be semiconducting or metallic. But silver nanowires are always conductive, so we work with those as well. We found that by adding silver nanowires, you can switch the polymer nanocomposites between a conductive state and an insulating one. We made a silver nanowire composite that is near the percolation threshold, so if you apply a small voltage, it looks insulating. However, if you apply a higher voltage to the same silver nanowire composite, it switches from being insulating to being conductive. The change in electrical conductivity can range from a factor of 100 to 10,000. The samples are about the size of a matchstick, about 2 cm by 2 mm by 2 mm. If you only ever apply small voltages, it's insulating. But if you apply a higher voltage, it can become conductive. We call it resistive switching, and we saw it for the first time in bulk silver nanowire composites. We are now trying to see if it only happens in polystyrene or in other polymers as well. We don't know why it's happening, but we will try gold, platinum, palladium, or copper nanowires. Is there something special about the polystyrene or the silver? What happens if we use polymethyl methacrylate or polycarbonate?
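The conductivity jump Prof. Winey describes is commonly summarized by the classical percolation scaling law, sigma = sigma_0 * (phi - phi_c)^t above the threshold. The minimal sketch below illustrates that orders-of-magnitude jump; the threshold, prefactor, and exponent are generic textbook values, not measurements from her group.

# Illustrative sketch of percolation scaling for composite conductivity.
# All parameter values are generic examples, not data from the interview.

def composite_conductivity(phi, phi_c=0.002, sigma_0=1e2, t=2.0,
                           sigma_matrix=1e-12):
    """Conductivity (S/m) of a filler/polymer composite.

    phi          -- filler volume fraction
    phi_c        -- percolation threshold (e.g. ~0.2 vol% for nanotubes)
    sigma_0      -- prefactor set by the filler network
    t            -- critical exponent (~2 in 3D)
    sigma_matrix -- conductivity of the insulating polymer matrix
    """
    if phi <= phi_c:
        return sigma_matrix          # below threshold: insulating
    return sigma_0 * (phi - phi_c) ** t

for phi in (0.001, 0.002, 0.003, 0.005, 0.01):
    print(f"phi = {phi:.3f}  sigma = {composite_conductivity(phi):.3e} S/m")

Sweeping the loading across phi_c reproduces the many-orders-of-magnitude insulator-to-conductor transition described above.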

What are the difficulties of working with polymer nanocomposites? Every batch of carbon nanotubes is different. Different suppliers make nanotubes in different ways. We developed a versatile purification method, but we have to tweak it for every new batch of nanotubes. This is especially important now that we are measuring electrical properties, because residual catalysts from making the nanotubes can affect the conductivity. On the other hand, we make our own silver nanowires – and they take a very long time to make. We make them with our own template because we haven't found a supplier of such nanowires. A lot of the literature is not reliable and results are difficult to repeat because the purity of the nanowires and nanocomposites differs from experiment to experiment. The purity of the starting materials often changes the results we get. Another factor is dispersion, the spatial relationship of the particles in the plastic. Obviously if they were all stuck at the bottom it would be terrible dispersion, but there is no good measurement of dispersion – no experiment you can perform or value you can obtain to quantify it. When making a composite, each step affects the level of dispersion, so it's difficult to compare results from one group to the next. Their composites are probably different, and the details of the dispersion are also different. Dispersion is easy to quantify with macroscopic composites. You can use an optical microscope to see the orientation of the fibers, and you can count the fibers because you can see them. But when they are a couple of nanometers in diameter, it's really hard to quantify where the particles are and which direction they are pointing. We work only on cylindrically shaped particles, so we have to worry about the orientation and center of mass of the nanowires or nanotubes. Orientation of these nanofibers matters a lot, and it's harder to control the fibers when they are nanoscale.

What is the most exciting thing about working with nanofibers? I think the most exciting thing about the field is that it is expanding. There are new horizons all the time, there is a lot to be done, and that is very exciting. We are not doing the same experiments over and over again or trying to get the 4th decimal place on a number. Students are very engaged with research when we are constantly discovering new and exciting things. Whenever they get on board with nanotechnology, they are at the forefront, and we encourage them to make contributions that push that forefront forward.

Which courses do you teach? I teach MSE 220, Structural Materials. In the spring I used to teach EAS 210, Introduction to Nanotechnology, but I just launched a new graduate course, MSE 500, Experimental Methods. The last lab of the course asks, "What is the size of a gold nanoparticle?" and covers the methods we can use to determine it.

Interview by Kevin Zhang



RESEARCH ARTICLES

An Examination on the Controls of Wind Magnitude and Orientation on Dune Migration and the Formation of Interdune Stratigraphy Claire C. Masteller, Cornelia Colijn / University of Pennsylvania

The stratigraphic record of aeolian bedforms can provide valuable information regarding bedform migration patterns and their controls. At White Sands National Monument, transverse dunes migrate via cyclic deposits from a dominant wind source. These deposits are subsequently preserved in the stratigraphic record of the interdune area. Both qualitative and quantitative studies have been carried out at White Sands; however, a quantitative relationship between grain fall deposits and wind speed and orientation has yet to be established. To investigate this, stratigraphic sections of three barchan dunes were collected from interdune areas in order to track dune migrations occurring around the year 1985. Records of wind magnitude and direction from Holloman Air Force Base were analyzed, and values for sediment flux capacity were calculated. These deposits were subsequently categorized based on wind orientation in relation to specific types of strata and put into a time series. We expect that a critical threshold of wind magnitude will arise for the appearance of grain fall deposits in the observed strata, related to the entrainment of grains into suspension. The development of a relationship between wind orientation, magnitude, and stratigraphic type can offer a further degree of clarity regarding dune movement and the formation of stratigraphy. If this method proves successful, it will allow for the rough reconstruction of wind conditions through analysis of aeolian bedform stratigraphy for which corresponding wind records are unavailable.

1. Introduction

White Sands Dune Field is a collection of actively migrating dunes located within the Tularosa Basin in southern New Mexico, derived from the deflation of Pleistocene Lake Otero (1). Situated between the San Andres and the Sacramento Mountains, White Sands is the largest gypsum dune field in the world. The deflation of salt crystals from the nearby Alkali Flat and Lake Lucero during the late Quaternary period has provided the source of gypsum sand necessary for the formation of this dune field. Dune crests are oriented transverse to the dominant SW wind, which is responsible for the bulk of sediment transport and is the main control on dune migration direction (2). Winds from the N or NW strike dunes obliquely or longitudinally, resulting in an "along-crest" migration that also contributes to overall dune movement (2). Dune migration is recorded in the stratigraphic record through vertical deposition as the dune migrates horizontally. As the dune advances, sediment is eroded from the stoss side of the dune and deposited on the lee side. The resultant stratigraphic record is composed of the uneroded fraction of lee face deposits. Some erosion of the stoss side of the bedform is necessary for migration, and thus only a few centimeters from the base of the dune are preserved in the interdune area (1). There have been numerous prior studies of dune migration and stratigraphy at White Sands (2). These studies of the stratigraphy of aeolian bedforms have successfully quantified dune migration speeds. However, they have not examined the resolution at which the strata record individual sediment transport events. This study aims to find a relationship between wind magnitude and orientation and the grain fall laminae observed in stratigraphic segments, utilizing an entirely new method. This method concentrates on the calculated mathematical relationships between wind magnitude and direction and the amount of sediment transported. It is expected that a threshold value of wind magnitude exists for the formation of grain fall deposits. This will in turn further the understanding of how wind events are recorded in the stratigraphic record and allow past dune migrations to be studied in further detail. These studies may in turn lead to further predictions for future wind conditions based on knowledge of past climates in comparison to ancient deposits.

2. Methods

2.1 - Trenching
In order to select ideal trenching sites, aerial photos from 1985 were compared to LIDAR data taken in 2009. Individual barchan dunes that had migrated over one dune length in the 24-year span were identified. Three of these sites were subsequently selected, and trenches about 7 m long, 0.5 m wide, and 0.5 m deep were dug in the interdune area behind each dune at its maximum concave curvature (Figure 1). Dunes migrate by cyclic deposits of grain flow, grain fall, and wind ripples. These cyclic deposits can be used to correlate strata, quantify the rate of sedimentary processes, and help determine environmental variables. The individual sediment deposits seen in the strata were identified as grain flow, grain fall, or wind ripples as per Hunter (3). Grain flows appear as light, loosely packed, tongue-shaped deposits that are often relatively thick in comparison to the surrounding deposition (Figure 2a). They are thought to be formed by avalanches that occur when sediment at the crest of the dune exceeds the angle of repose, which is on average 33° (4). It is worth acknowledging that not all grain flows will make it to the base of the dune, so only the largest in magnitude will be preserved in the record. At White Sands, these result when shear stress is applied normal to the dunes via the prevailing SW wind. Grain fall laminae are characterized by darker, thick packets that flare out as they approach the base of the lee slope (Figure 2b). As flow travels up the stoss side of the dune, it is compressed, increasing the shear stress acting upon the grains within the bed and allowing more grains to be entrained in suspension (3). As the flow passes the crest it becomes unconfined and slows down. Suspended grains can no longer be supported by the reduced shear velocity and will drop out and deposit at the base of the lee face. At White Sands, grain fall is related to high magnitude SW winds.


FIGURE 1. Identifies the maximum concave curvature point on a barchan dune and demonstrates the positioning of each trench relative to this point.

FIGURE 2. Identifies the three types of strata identified at White Sands and used in this study through both photographs and sketches. (a) Grain flow, (b) grain fall, and (c) wind ripple.

Wind ripples are thin, compact packets thought to occur when winds blow parallel to the crest line, creating ripples that move laterally across the lee face (Figure 2c). Thus, these ripples tend to be erosive and will exhibit a change in dip angle, or beveled edge, upon occurrence (5). At White Sands, these are associated with winds from the NW.

2.2 – Calculating sediment flux capacity using wind records
Wind records from Holloman Air Force Base from 1982-1988 were analyzed to estimate potential sediment transport at White Sands. This range encompasses our target year of 1985 while still accounting for any temporal error that may arise from incorrect trench positioning. Measurements of wind velocity (m/s) and orientation (degrees) were taken every hour by anemometers 10 m above the surface. Values for shear velocity, the velocity of wind against the dune face, were calculated as per Bagnold (6) for each recorded wind event. The resulting shear velocities were then compared to a critical value above which motion is initiated, as per Shao and Lu (7). This helps determine which wind events transported sediment over the dunes. Sediment flux capacity, or the maximum amount of sediment that could be transported, was then calculated using a specific variation of Bagnold's relationship from Namikas and Sherman (8, corrected from 9). Jerolmack et al. (10) also used this relationship to successfully predict grain motion in ripples at White Sands. Any wind event falling below the critical shear velocity did not transport any sediment and thus was not observed in the physical record. All values above critical yielded different flux values based on the magnitude of the wind event. It is worth acknowledging that this sediment flux measurement is an upper bound and reflects maximum transport capacity. In reality, the entire bed will not be mobile due to external boundary conditions, including moisture in interdune areas that may create cohesion, limiting sediment supply and producing lower values for sediment flux than calculated here.
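As a rough illustration of this pipeline, the sketch below converts a 10 m wind speed to a shear velocity with the law of the wall, applies a Shao and Lu (2000)-type threshold, and evaluates a White (1979)-type flux expression. The roughness length, threshold constants, and flux coefficient are assumed illustrative values rather than the ones used in this study (a roughness length of about 1e-4 m happens to reproduce the shear velocities quoted in Table 2 from the quoted wind speeds).

import math

# Illustrative sketch of Section 2.2 (assumed constants throughout; the
# paper does not list its roughness length or flux coefficient).
KAPPA = 0.4          # von Karman constant
Z_ANEM = 10.0        # anemometer height (m)
Z0 = 1e-4            # aerodynamic roughness length (m) -- assumed
RHO_AIR = 1.2        # air density (kg/m^3)
RHO_GRAIN = 2320.0   # gypsum grain density (kg/m^3)
G = 9.81             # gravitational acceleration (m/s^2)
D = 5e-4             # median grain diameter (m)

def shear_velocity(u10):
    """Law-of-the-wall estimate of u* from a 10 m wind speed."""
    return KAPPA * u10 / math.log(Z_ANEM / Z0)

def critical_shear_velocity(d=D, gamma=3e-4, a_n=0.0123):
    """Shao and Lu (2000)-type fluid threshold for grain motion."""
    return math.sqrt(a_n * ((RHO_GRAIN / RHO_AIR) * G * d + gamma / (RHO_AIR * d)))

def flux_capacity(u_star, u_star_t):
    """White (1979)-type transport capacity (kg/m/s); zero below threshold."""
    if u_star <= u_star_t:
        return 0.0
    r = u_star_t / u_star
    return 2.61 * (RHO_AIR / G) * u_star ** 3 * (1 - r) * (1 + r) ** 2

u_t = critical_shear_velocity()
for u10 in (5.0, 14.3, 16.1):                 # sample hourly wind speeds
    u_s = shear_velocity(u10)
    print(f"u10={u10:5.1f} m/s  u*={u_s:.2f} m/s  q={flux_capacity(u_s, u_t):.3f}")

The 5.0 m/s record falls below the threshold and contributes no transport, which is the thresholding behavior the text describes.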

2.3 - Differentiating between deposit types through wind data
On average, dunes migrate to the ENE at a trigonometric angle of 21.7° (11). We assume that this is normal to the observed dominant wind conditions from the SW. Winds were separated into three categories – transverse, oblique, and longitudinal – based on wind orientation. Transverse winds are defined as those that hit dune crest lines at 90-70° angles. Oblique winds are defined as those that hit dune crest lines between 70-10°. Longitudinal winds are defined as those that hit dune crest lines between 10-0° relative to the crest line (Figure 3) (12). For the purpose of this study we ignore winds blowing opposite to the dominant wind direction, for they will not create deposits that are preserved in the stratigraphic record. We consider them to be mainly an erosion factor that may affect the thickness of the observed deposits, but we do not believe they contribute any additions to the observed record. A sketch of this binning is given below.

FIGURE 3. Identifies and describes the categories of wind events and their behavior relative to the dune crest. Shows the range of wind orientations encompassed in each category relative to the dune migration direction depicted by the red arrow.
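As a minimal sketch of this binning (the function and parameter names are our own illustration, not the authors' analysis code):

def wind_category(wind_dir_deg, migration_dir_deg=21.7):
    """Bin a wind record by its angle to the dune crest line.

    wind_dir_deg      -- direction the wind blows toward, trigonometric degrees
    migration_dir_deg -- dune migration direction (21.7 deg, Section 2.3),
                         assumed normal to the crest line

    Returns 'transverse' (90-70 deg to the crest), 'oblique' (70-10 deg),
    'longitudinal' (10-0 deg), or 'opposing' for winds blowing against the
    dominant direction, which this study ignores.
    """
    # angle between the wind and the crest-normal, folded into [0, 180]
    off_normal = abs((wind_dir_deg - migration_dir_deg + 180.0) % 360.0 - 180.0)
    if off_normal > 90.0:
        return "opposing"
    angle_to_crest = 90.0 - off_normal   # 90 deg = perpendicular to the crest
    if angle_to_crest >= 70.0:
        return "transverse"
    if angle_to_crest >= 10.0:
        return "oblique"
    return "longitudinal"

print(wind_category(21.7))    # -> 'transverse' (perpendicular to the crest)
print(wind_category(100.0))   # -> 'oblique' (about 12 deg to the crest line)
print(wind_category(-150.0))  # -> 'opposing' (blows against the dominant wind)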

2.4 – Correlating calculated sediment flux values to observed results
We then plot sediment flux capacity against time with each type of wind event differentiated by color. We correlate the largest values of qs produced from the wind records with grain fall events. We do this based on the knowledge that in order to entrain grains in suspension, as associated with grain fall deposits, higher shear stress values must be applied to the grains (3). These shear stress values are associated with the highest wind speeds from the transverse direction of all of the winds recorded.



2.5 – Calculating horizontal velocity necessary for deposition as grain fall
In order to be preserved as grain fall laminae, the grains that fall out of suspension must land at the base of the lee slope. In order to determine the horizontal velocity with which a grain would have to move to travel the necessary distance to reach the base of the lee face, we first calculate the vertical settling velocity with the equation derived by Ferguson and Church (13). We can then apply this value, w, the velocity with which the grain falls vertically, to a relationship of length and height to solve for horizontal velocity. It is important to acknowledge that we assume the grain moves at a constant speed as it falls, which may not be the case because of a possible deceleration due to flow separation.

FIGURE 4. Depicts the relationship between velocity and distance in the x and y directions for a barchan dune with the angle of repose of 33°.

3. Results

3.1 – Trenching
It was found that Trench 1 showed a markedly different distribution of events than Trenches 2 and 3 (Table 1). This leads us to believe that we may be at a different place in time in the first trench. Specifically, the first few records in the trench are substantial wind ripples that do not appear in the other trenches; however, grain falls appear in all three with similar spacing. Thus, we conclude that the origin of the first trench may be at an earlier place in time than the other two. Trenches 2 and 3 have similar distributions, which allows us to conclude that these two trenches are from the same period in time. Our findings have allowed us to successfully match the events between all three of the observed trenches.

            % Grain flow   % Wind ripple   % Grain fall
Trench 1        55.2            37.9            6.9
Trench 2        86.8             6.6            6.6
Trench 3        87.5             5.4            7.1

TABLE 1. Shows the distribution of events between grain fall, wind ripple, and grain flow observed in each of the trenches as a percentage.

3.2 – Calculating sediment flux capacity using wind records
It was calculated that the shear velocity exceeded critical and sediment transport occurred 423 times out of 53,071 recorded events. This is about 0.79% of the record from 1982 to 1988. If each event were fully and individually recorded, there would be 423 sediment deposits preserved in the stratigraphic record. However, we do not believe this to be the case. Some grain flows will not reach the base of the lee side of the dune, and some wind ripples may have been too small to be detected or recorded. It is also assumed that some erosion has occurred and some smaller deposits have been lost to those processes. We assume that the stratigraphic sections we observe do not record sediment transport events at full resolution.

3.3 – Differentiating between deposit types through wind data
Out of all wind events above critical, 55.84% fall in the transverse bin, 39.89% in the oblique bin, and 4.27% in the longitudinal bin. We make the assumption that only transverse winds will produce the grain falls and grain flows observed in the stratigraphic segments, while oblique and longitudinal winds will result in wind ripples (12). Thus, about 55.84% of each section should be grain flows and grain falls, while the remaining 44.16% should be wind ripple. While this distribution is not directly reflected in the stratigraphic record, it provides important insight as to the controls on the formation of the stratigraphy. The thickness of each deposit is determined by a relationship between sediment transport by wind and vertical deposition rate. Both grain falls and flows have high deposition rates, as seen by the relative thickness of the observed deposits. However, wind ripple deposits are extremely small because they are continuously being eroded by the same wind that forms them. Thus, even though the wind data suggest a high percentage of wind ripples, this may not be indicative of their size relative to grain flows and falls, due to differences in forming factors. However, when individual wind ripples are compared to one another, it can be seen that larger wind events do correspond with larger packages of wind ripples within the observed stratigraphic columns. This in turn provides greater confidence in the efficacy of the method used to study the relationship between these deposits and wind as a forming factor.

3.4 – Correlating qs values to observed results
It was observed that grain fall events are identified in the record only about 7% of the time. We attribute this to the fact that the shear velocity necessary to entrain sediment in suspension is relatively high and does not occur often at this site. It was observed that grain falls occurred twice in Trench 1 and four times in both Trenches 2 and 3. We place the observed record in the year 1984 with high confidence, in which two maximum sediment flux values were observed in the wind records (Figure 5). In order to place the observed grain falls into the time series with wind values, we must observe two high magnitude wind events from the transverse direction in close proximity to one another. To further solidify our point in time, we also identify a high magnitude longitudinal event ten to twelve events prior to these. This translates into a wind ripple package in the observed record, separated from the observed grain falls in each trench by a similar number of deposits. We observed this specifically in February of 1984. We can only identify two wind events with confidence because at least two events were observed in all three of the trenches. We cannot justify correlating more than two events to Trench 1, which only exhibits two grain falls. Thus, the critical wind magnitude that we observe may be an overestimate. This could be the case because smaller winds could have formed the other grain falls observed in Trenches 2 and 3, but since these do not appear in Trench 1 we choose not to include them. Here, we correlate the largest grain falls in Trenches 2 and 3 to those observed in Trench 1 to place the trenches at the same point in time.

Event    Time         Velocity (m/s)    U*      Qs
#1       2/17/1984        16.10         0.56    0.181
#2       2/25/1984        14.30         0.50    0.139

TABLE 2. Shows the days of the wind events identified as grain falls and their corresponding maximum velocities, shear velocities, and sediment flux capacities.

For this study we place the critical velocity necessary to cause a grain fall deposit at 14.30 m/s. This assignment is based on the tie points created between the observed record and the wind records. We can say this with confidence because if this threshold were any lower, more than two grain falls would occur in close proximity to one another, which is not observed in the trench records. This value can be used to differentiate between grain flows and grain falls in the stratigraphic record.

3.5 – Calculating horizontal velocity necessary for deposition as grain fall
Settling velocity for the median grain size of 0.5 mm was calculated to be about 2.4 m/s using Equation 5. Using this value, Equation 6 was then used to solve for the horizontal velocity necessary for a grain to travel 11 m horizontally while falling 7 m vertically. It was calculated that us = 3.8 m/s. This value is low in comparison to the threshold value we have set using the observed grain fall deposits and wind records. This may be because the wind records used were measured at 10 m above the bed, whereas the velocity that we calculate here should occur at a lower elevation. The standard velocity profile in close proximity to the bed shows that velocity increases with elevation. It is necessary to acknowledge that we have not taken into account the effects of acceleration and deceleration due to flow compression up the stoss side of the dune and separation over the lee side of the dune, respectively (14). While this may have some effect on the velocity profile of a dune (14), we assume the effect to be minimal for our study.
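A quick arithmetic check of this step, taking the settling velocity quoted above and assuming, as the authors do, constant fall speed and straight-line travel:

# Back-of-envelope check of Section 3.5.
w = 2.4          # vertical settling velocity (m/s), from Section 3.5
x_travel = 11.0  # horizontal distance to the base of the lee face (m)
y_fall = 7.0     # vertical drop over the lee face (m)

t_fall = y_fall / w        # time available before the grain lands (s)
u_s = x_travel / t_fall    # horizontal velocity required (m/s)
print(f"fall time = {t_fall:.2f} s, required u_s = {u_s:.1f} m/s")  # ~3.8 m/s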

4. Discussion

Based on our findings, it is evident that wind speed and orientation exhibit a strong relationship to the type of deposit within the

FIGURE 5. (a) Sediment flux capacity values calculated with Equation 4 using winds from Holloman Air Force Base from the years 1982-1988. (b) Zoomed-in version of the same graph concentrating on early 1984; qs values marked with 1 and 2 identify the sediment transport events correlated with observed grain falls. The dotted line represents the inferred threshold for grain fall events.

stratigraphic record. We have shown with confidence that large magnitude winds in the transverse direction directly correlate with grain fall laminae. Our position in time was further substantiated by an additional observation made between a longitudinal wind event and an observed wind ripple package. These occur in a similar position relative to grain fall events in both the wind record and the stratigraphic record, allowing for the placement of the observed trenches in February of 1984. This shows that this particular method for comparison of stratigraphic deposits and the wind events that form them can allow for accurate identification and comparison for the first time. We acknowledge that we have only identified two grain fall events. This value is equivalent to the value observed in the first trench, but is less than the value observed in both the second and third. It should be noted that the events that we attribute to these deposits were of variable magnitudes and occurred within periods of 3-4 hours. It is possible that one large magnitude event, such as the four-hour period on February 17, 1984 in which shear velocity exceeded critical, produced multiple grain fall deposits. It should be noted that during these high magnitude events grains are moving by creep, saltation, and suspension. As grains are constantly moving over the crest in suspension and raining out, a number of grains are still collected at the crest by creep. Eventually, these grains will exceed the angle of repose and a grain flow avalanche will occur. If this flow reaches the base it will also be preserved in the stratigraphic record. If the wind continues after this avalanche, grains may continue to rain out via suspension; thus, another grain fall deposit may be created during the same wind event. Thus, large events such as the one identified above may have the ability to produce both grain falls and flows within the record. We have successfully identified specific wind events within the stratigraphic sequences from the interdune area. We have also been able to calculate a critical value for grain fall at White Sands National Monument. The threshold value for sediment flux capacity in order for grain fall to occur is about 0.139, which is equivalent to a velocity of 14.30 m/s (Figure 5b). However, while this method was hugely successful, this is the first study to employ it, and thus it is important to refine the results found here with further studies.

5. Conclusion

We have demonstrated with confidence that specific wind events can be associated with different types of stratigraphic deposits at White Sands National Monument. The methods described above can be applied to similar aeolian environments in which bedforms migrate via a dominant wind direction. However, while we have calculated values of wind velocity that we deem to correlate with grain falls, there are many other conditions to consider before establishing a critical threshold wind magnitude for the formation of grain fall laminae at White Sands. Further work should be done to verify the threshold wind magnitude that we have observed in relation to grain fall deposits. This can be done using a method similar to that described above, with higher accuracy in fieldwork and wind measurements. It may also be done on a smaller scale with a carefully designed wind tunnel experiment. However, these options are not explored here. Once this value is calculated, it can be used to determine whether multiple grain fall deposits are the product of one high magnitude wind event, or whether individual events correspond to each grain fall lamina. This knowledge may then allow for the reconstruction of wind magnitude and orientation from grain fall deposits recorded in rock records and may help to reconstruct wind patterns for past aeolian environments. With further knowledge of wind events during past climates, future studies can also allow predictions to be made for the response of wind to the changing climates of the future.

6. Acknowledgements

We thank R. Ewing (Princeton) for assistance in the field, R. Martin for help with wind record analysis, C. Phillips for Figure 5, and D. Jerolmack for all of the patient help that he has provided.


7. Resources

1. Fryberger, S. G. (2009). Geological Overview of White Sands National Monument. National Park Service. URL http://www.nature.nps.gov/geology/parks/whsa/geows/index.htm
2. Kocurek, G., Carr, M., Ewing, R., Havholm, K., Nagar, Y.C., Singhvi, A.K. (2007). White Sands Dune Field, New Mexico: Age, dune dynamics and recent accumulations. Sedimentary Geology 197 (3-4), 313.
3. Hunter, R.E. (1977). Basic types of stratification in small eolian dunes. Sedimentology 24, 361-387.
4. Kocurek, G. and Dott Jr., R.H. (1981). Distinctions and uses of stratification types in the interpretation of eolian sand. Journal of Sedimentary Petrology 51, 579-595.
5. Hunter, R.E. and Richmond, B.M. (1988). Daily cycles in coastal dunes. Sedimentary Geology 55, 43-67.
6. Bagnold, R. (1941). The Physics of Blown Sand and Desert Dunes. Chapman & Hall, London.
7. Shao, Y., Lu, H. (2000). A simple expression for wind erosion threshold friction velocity. J. Geophys. Res. 105, 22437-22443.
8. Namikas, S., Sherman, D. J. (1997). Predicting aeolian sand transport: Revisiting the White model. Earth Surface Processes and Landforms 22 (6), 601-604.
9. White, B. (1979). Soil transport by wind on Mars. J. Geophys. Res. 84, 4643-4651.
10. Jerolmack, D.J., et al. (2006). Spatial grain size sorting in eolian ripples and estimation of wind conditions on planetary surfaces: Application to Meridiani Planum, Mars. Journal of Geophysical Research 111, E12S02.
11. Martin, R.L., personal communication.
12. Ewing, R.C., personal communication.
13. Ferguson, R. I., and M. Church (2004). A simple universal equation for grain settling velocity. Journal of Sedimentary Research 74 (6), 933-937.
14. Frank, A. and Kocurek, G. (1994). Effects of atmospheric conditions on wind profiles and aeolian sand transport, with an example from White Sands National Monument. Earth Surface Processes and Landforms 19, 735-745.


RESEARCH ARTICLES

Neural Precursor Cells Enriched for GABAergic Neurons and Their Effects in Peripheral Nerve Injured Rats
Solorzano, R., Jergova, S., Gajavelli, S., Sagen, J. / University of Miami Miller School of Medicine, The Miami Project

Chronic neuropathic pain in clinical practice continues to show inconsistent responses to drug therapy. A loss of spinal inhibitory neurons, which normally serve to limit pain, has been linked to neuropathic pain. Previous studies have shown that the intraspinal transplantation of primary neurospheres containing numerous inhibitory GABAergic neurons can relieve pain in rodent pain models. The goal of this study is to test the hypothesis that the intraspinal injection of neural precursor cells (NPCs) will reduce pain behavior and mitigate dorsal horn neuron responses to peripheral stimulation after a peripheral nerve injury. Neuropathic pain was induced in rats by one-sided chronic constriction injury (CCI) of the sciatic nerve. One week following CCI, animals experiencing hyperalgesia received NPCs on the same side as the CCI. Controls received an intraspinal injection of phosphate buffered saline (PBS). Animals with NPC transplants showed relief of mechanical and thermal hyperalgesia by the first week after transplantation. Electrophysiological data were obtained using an electrode inserted in the lumbar region to detect extracellular compound action potentials after electrical stimulation of the paw. Dorsal horn neurons of NPC-transplanted animals showed decreased action potentials in response to repeated electrical stimulation compared with PBS-treated animals. These studies demonstrate that inhibitory neuronal replacement therapy offers an alternative strategy in the fight against chronic neuropathic pain.

1. Introduction

In many cases, injuries to the nervous system develop into a chronic pain that dramatically reduces productivity and quality of life in otherwise healthy individuals. This type of pain is classified as neuropathic pain and is defined as pain "initiated or caused by a primary lesion or dysfunction in the nervous system" (1). Neuropathic pain commonly results from the direct effects of cancer on peripheral nerves (3), radiation injury, or surgery, and as much as 7% to 8% of the population is affected (2, 7). The specific cause is still unknown, and its onset appears random. However, it has been concluded that abnormal peripheral nerve activity can initiate a cascade of neuroplastic events that change neurons and their function. This change can promote neuronal hyperexcitability (4) and thus cause hyperalgesia, an increased sensitivity to pain. In the past, there have been several attempts to develop therapies to combat this disease. Numerous drug therapy studies have been carried out, but they have shown poor responses, in part because it is difficult for pharmacological agents to pass the blood-brain barrier. One possible solution would be to use cellular transplantation therapy. Transplanted cells could deliver therapeutic agents to specific locations, produce new therapeutic agents over long periods, and remain localized, which decreases side effects. Although there are several contributing factors to neuropathic pain, one factor that has been studied in particular is the decrease in GABA released by GABAergic neurons after a peripheral nerve injury. It is also known that the telencephalon region in embryonic rats contains several GABAergic cells and several neural precursor cells that can differentiate into GABA-producing cells (6). It has been previously shown that GABAergic cells derived from the human teratocarcinoma cell line (hNT) show promise in reducing both excitotoxic spinal cord injury pain and spasticity when transplanted intrathecally or intraspinally (5). Therefore, the current study tests the hypothesis that the intraspinal injection of neural precursor cells (NPCs) will reduce pain behavior and mitigate dorsal horn neuron responses to peripheral stimulation after a peripheral nerve injury.

2. Methods

Neuropathic Pain Model
Rats were anesthetized and the common sciatic nerve was exposed on one side at the mid-thigh level. Four chromic gut ligatures spaced about 1 mm apart were loosely tied around the sciatic nerve, constricting the nerve to a barely discernible degree. Following surgery, the skin was closed with wound clips.

Perfusion and Immunohistochemistry
Transplanted rats were perfused with saline and paraformaldehyde in PB buffer. Lumbar spinal segments were dissected, postfixed overnight, and transferred to sucrose-PB. Lumbar spinal cords were then cryostat-sectioned at 40 μm and incubated overnight in anti-GABA, anti-VIAAT, anti-GAD-67, and DAPI. Sections were washed and incubated in secondary antibody solutions. Then, sections were washed and coverslipped with antifluorescent mounting media.

Cell Culture
E14 Sprague-Dawley rat embryos were isolated. The cortical lobes and underlying lateral ganglionic eminences were removed, pooled, centrifuged, and resuspended. Tissues were dissociated by trituration and a single-cell suspension was collected. Dissociated neural precursor cells (NPCs) were pelleted, resuspended in growth medium, and plated. NPCs were allowed to grow and form neurospheres (NS). P0 NPCs were then used for transplantation or processed for immunocytochemistry.

Immunocytochemistry
NS were pelleted, resuspended, and plated on surfaces coated with poly-L-ornithine and fibronectin. NS were allowed to grow for three days. Plated NS cultures were fixed with paraformaldehyde and blocked with goat serum. Primary antibodies were added and allowed to incubate overnight. Secondary antibodies were then added and incubated at room temperature for 1 hour.

Transplantation at T13-L1
Rats were anesthetized and a midline incision was made on the



dorsal skin to expose the lumbar vertebrae. Laminectomy was performed aseptically on Th13-L1 vertebrae. 3 μl of cells (100,000 cells/μl) were loaded into a syringe and injected into the left lumbar gray matter using a stereotaxic stage. PBS was used as a control. All transplanted rats received cyclosporine A from day -1 until sacrifice.

Heat Hyperalgesia
Rats were placed beneath an inverted clear plastic cage on an elevated glass floor, and a radiant heat source beneath the glass was aimed at the plantar hind paw, which activated a timer. Withdrawal latencies were the length of time between the activation of the heat source and the hind paw withdrawal from the glass (normal baseline ~10 sec). Testing was alternated on both hind paws for 3 trials at least 30 sec apart, with the average values used for statistical analysis.

Electrophysiology
Animals were positioned in a spinal frame under anesthesia. T11-L2 laminae were removed and a 5 μm diameter electrode was inserted into the spinal cord. Needle electrodes were inserted into the foot and a stimulus of amplitude 10-40 V was applied at a frequency of 0.1 Hz for 10 seconds, then at 1 Hz for 10 seconds. Spikes from the dorsal horn were counted at poststimulus latencies of 0-20 ms, 40-300 ms, and 300-500 ms after each stimulation. Data were organized by dividing the counts from the high frequency stimulation interval by those from the first interval of low frequency stimulation.
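The interval bookkeeping just described lends itself to a short sketch; the data layout and function names below are hypothetical stand-ins, not the authors' acquisition software.

# Sketch of the spike-count analysis: bin dorsal horn spikes by
# poststimulus latency, then normalize the 1 Hz response by the first
# 0.1 Hz interval. Spike times are in ms after the stimulus.

LATENCY_BINS = {
    "A-beta":  (0, 20),
    "early-C": (40, 300),
    "late-C":  (300, 500),
}

def bin_spikes(spike_times_ms):
    """Count spikes falling in each poststimulus latency window."""
    counts = {name: 0 for name in LATENCY_BINS}
    for t in spike_times_ms:
        for name, (lo, hi) in LATENCY_BINS.items():
            if lo <= t < hi:
                counts[name] += 1
    return counts

def normalized_response(high_freq_counts, first_low_freq_counts):
    """Divide 1 Hz interval counts by the first 0.1 Hz interval counts."""
    return {name: high_freq_counts[name] / max(first_low_freq_counts[name], 1)
            for name in LATENCY_BINS}

low = bin_spikes([5, 12, 150, 220, 350, 420])              # example 0.1 Hz sweep
high = bin_spikes([6, 15, 160, 230, 310, 360, 410, 480])   # example 1 Hz sweep
print(normalized_response(high, low))   # late-C ratio 4/2 = 2.0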

3. Results

Immunohistochemical results from a transversal spinal cord section in the peripheral nerve injury model depicted a decrease in red color, the marker for GABA, on the injured side as opposed to the control side (Fig 1).

FIGURE 3. In vitro micrographs (photographs taken through a microscope) of neurospheres used for transplantation: the GABA stain on the left, neural precursor cells stained with MAP-2 in the middle, and the overlay of both images on the right.

The data from heat hyperalgesia were organized by subtracting the sensitivity of the control paw from that of the injured paw. A statistically significant reduction was found in the third week (Fig. 5).

FIGURE 1. A transversal section of the rat spinal cord immunohistochemically stained for GABA with the GABA marker and for GABAergic cells with the VIAAT marker.

A similar transversal section was stained and found to display a significant decrease in glutamic acid decarboxylase (GAD), the enzyme that produces GABA, on the injured side as opposed to the control side (Fig 2).

FIGURE 2. A transversal section of the rat spinal cord, stained for the GABA producing enzyme with the GAD-67 marker and for cell nuclei with the DAPI marker. A reduction in GAD-67 was observed on the injured side.

Cultured cells from the cortical lobes and underlying lateral ganglionic eminences, stained immunocytochemically, were found to contain both GABA and neural precursor cells (Fig 3). The GABA is marked red and the neural precursor cells are marked green (Fig 3). Transplanted cells were checked one week after transplantation on a transversal section of the spinal cord to determine cell differentiation type. Mature neural cells stained with Neu-N green and GABA stained with the red marker produced a combined yellow color (Fig. 4a, 4b).


FIGURE 4. (a) A transversal section of the spinal cord with a 3 μl injection of neural precursor cells. The mature neural cells are stained with Neu-N green and the GABA cells are stained with the GABA marker red. The overlay of both colors produces a yellow glow. (b) Magnification of box A.



FIGURE 5. Differences in sensitivity between the injured paw and the control paw for the CCI group and the NPC group each week. Groups were injured after the baseline week and transplantations were performed after the first week. NPC transplants displayed a significant reduction by the third week.

FIGURE 6. Responses of dorsal horn neurons at different intervals to high frequency stimulation in the control group after injury. Late C-fiber responses (>300 ms) displayed the only increase in sensitivity to peripheral stimulation of the three intervals.

FIGURE 7. Responses of dorsal horn neurons at different intervals to high frequency stimulation in the NPC group after injury. Late C-fiber responses (>300 ms) displayed a decrease in sensitivity after the first and second weeks following NPC transplantation.

FIGURE 8. Late C-fiber (>300 ms) dorsal horn responses in both the saline control group and the NPC group. The differences became significant in the second week after transplantation.

In the electrophysiological data, no differences between groups were found at 1, 2, or 3 weeks post-transplantation in the 0-20 ms Aβ-fiber interval or the early C-fiber 40-300 ms interval (Fig. 6, Fig. 7). However, the late C-fiber responses (>300 ms) did display differences. These results were therefore individually plotted and statistically analyzed. Results between the control group and the NPC group became statistically significant in the third week (Fig. 8).

4. Discussion

Although there is only a moderate reduction in GABA in Figure 1, a significant decrease in glutamic acid decarboxylase (GAD), the GABA-producing enzyme, can be seen in Figure 2. This result does not confirm that there was neural cell death; however, it supported the assumption that a peripheral nerve injury decreases GABA levels in the spinal cord and indicated that the pain model was being performed accurately. It was also observed that the reduction in GABA and GAD-67 was only in the outer regions of the spinal cord, a desired result because the pain receptors are found in that region. Further, in Figure 3 it was observed that cultured neurospheres contained neural precursor cells and were producing GABA. This result assured that a good cell line was being used, one with the potential to differentiate into GABAergic cells that could raise GABA levels in the spinal cord. These cells were therefore transplanted and observed to be differentiating into mature GABAergic cells (Fig 4). This result showed that the cells were not differentiating into cell types that could be harmful to the spinal cord, such as glia or astrocytes, but rather into the desired mature neural cells. The heat hyperalgesia results in Figure 5 showed that the pain model was creating hyperalgesia in the injured paw. After transplantation, a moderate decrease was observed by the first week. However, by the second week a statistically significant reduction was observed in the group with the NPC transplants.



Electrophysiological data depicted that the pain model had no effect on the Aβ-fiber 0-20 ms range and the early C-fiber 40-300 ms range. However, it was observed that in the control group the peripheral nerve injury raised late C-fiber (>300 ms) action potentials significantly throughout the entire three weeks (Fig. 6). This result suggests that neuropathic pain is mainly transmitted through late C-fiber responses, though additional studies would need to be conducted to confirm this. A reduction was observed in late C-fiber responses when comparing the control group with the NPC transplants (Fig. 7). The late C-fiber groups were individually graphed and statistically analyzed (Fig. 8). A significant reduction was observed in the post two-week transplantation group. This result parallels the statistically significant post two-week reduction in pain found in the heat test. The reduction of sensitivity with NPC transplants, in both hyperalgesia and action potentials, compared to the control groups supports the hypothesis. Further studies should be conducted to confirm this result, and studies should be conducted over longer periods of time to ensure that the effects of the transplants do not diminish. Although these results indicate that the NPC transplants reduce neuropathic pain, it is not known whether the production of GABA causes the effect or whether the cells are reducing inflammation at the injury. This would require further study and could be undertaken by performing the peripheral nerve injury, injecting NPC transplants, and then injecting bicuculline, a GABA antagonist. If the action potentials and the pain from the heat test rose again, it would indicate that GABA is at least partially responsible for the decrease. In this study, neural precursor cells from E14 rat embryos were used and effectively performed their role. However, this cell source creates several complications for clinical application. It is difficult to obtain enough of these neural precursor cells, and after a certain period of time the cells lose their neurogenic potential. To obtain such cells in humans, several aborted fetuses would be required, a process that is impractical, inefficient, and controversial. However, the possibility of cultivating these cells from induced pluripotent stem cells exists and should be explored.

5. References

1. Bogduk, N.; Merskey, H. (1994). Classification of Chronic Pain: Descriptions of Chronic Pain Syndromes and Definitions of Pain Terms (2nd ed.). Seattle: IASP Press. p. 212. ISBN 0931092051.
2. Bouhassira D, Lantéri-Minet M, Attal N, Laurent B, Touboul C (June 2008). "Prevalence of chronic pain with neuropathic characteristics in the general population". Pain 136 (3): 380-7. doi:10.1016/j.pain.2007.08.013. PMID 17888574.
3. Wampler, M.A., Rosenbaum, E.H. Chemotherapy-induced Peripheral Neuropathy Fact Sheet. Retrieved 29 December 2008 from http://www.cancersupportivecare.com/nervepain.php
4. Davies SN, Lodge D (1987). Evidence for involvement of N-methyl-D-aspartate receptors in "wind-up" of class 2 neurons in the dorsal horn of the rat. Brain Res 424:402-406.
5. Eaton MJ, Wolfe SQ, Martinez M, Hernandez M, Furst C, Huang J, Frydel BR, Gómez-Marín O (2007). Subarachnoid transplant of a human neuronal cell line attenuates chronic allodynia and hyperalgesia after excitotoxic spinal cord injury in the rat. J Pain 8: 33-50.
6. Furmanski O, Gajavelli S, Lee JW, Collado M, Jergova S, Sagen J (2009). Combined extrinsic and intrinsic manipulations exert complementary neuronal enrichment in embryonic rat neural precursor cultures: an in vitro and in vivo analysis. J Comp Neurol 515:56-71.
7. Torrance N, Smith BH, Bennett MI, Lee AJ (April 2006). "The epidemiology of chronic pain of predominantly neuropathic origin. Results from a general population survey". J Pain 7 (4): 281-9. doi:10.1016/j.jpain.2005.11.008. PMID 16618472.



Performance of ZigBee PRO Mesh Networks with Moving Nodes
Chester Hamilton / Texas A&M University
Varun Sampath / University of Pennsylvania

Radio modules based on the ZigBee PRO specification provide cheap and low power wireless communication via the IEEE 802.15.4 PHY and MAC layers and a network layer mesh routing protocol. In this paper, we describe various implementation aspects of ZigBee PRO and present performance data on a ZigBee PRO mesh network for point-to-point and multi-hop transmission using XBee-PRO ZB modules. In particular, we present findings on network performance when a node is constantly moving and changing routes. We found that in the worst case, where packets are transmitted immediately after the old route is no longer physically possible, extreme packet loss occurs because the network cannot perform route maintenance operations in time. With more gradual movement, however, the network operates without packet loss.

1. Introduction

ZigBee networks are used in a variety of applications, such as home and building automation and sensor networks (1). In comparison to Wi-Fi or Bluetooth, the ZigBee protocol is designed for usage that requires lower power consumption and can sacrifice bandwidth (11). The ZigBee Specification provides support for developing applications and services for low power nodes in a variety of network topologies, such as mesh or star networks. This paper focuses on networks with a mesh topology. In such a configuration, every node can potentially connect and have a one-hop path to every other node. This is in contrast to a star network, where network transactions must pass through the central node. In ZigBee terminology, a "coordinator" node is responsible for setting up the mesh network, and a "router" node can connect to more than one node to facilitate paths. "End devices" are lower-power nodes that can only associate with one router node or the coordinator node (1). The ZigBee network layer specifies the protocol for joining and leaving networks as well as various routing techniques. In the latest standard, a new specification called ZigBee PRO was created that changed network layer operation. ZigBee PRO removed the use of cluster-tree routing and introduced source, or many-to-one, routing. Addresses are also assigned randomly instead of being based off of tree topology (4). As the ZigBee modules we use for testing, XBee-PRO modules, implement the PRO specification, we will focus on their specific network layer, and in particular the mesh routing implementation. The mesh routing algorithm is based off of the Ad-hoc On-Demand Distance Vector Routing Algorithm (AODV) (9). The remainder of this introduction discusses how this algorithm works.

1.1 Overview of Mesh Routing in the ZigBee Network Layer
The ZigBee mesh routing algorithm is based on the premises of AODV. We shall now discuss the basics of the implementation of this algorithm in the ZigBee PRO specification, which introduces a focus on symmetric links and eliminates the tree routing mechanism. We assume a network of only a coordinator and routers. If end devices were present, their parents would handle all routing work, as end devices do not have that capability. AODV and the ZigBee mesh routing algorithm can be described as "on-demand." This implies that each node may not necessarily know routes to all other nodes, nor will it always participate in the discovery or failure of all routes. The only routes that are refreshed periodically are those between a node and all of its neighbors, i.e. routes one hop away (7). These routes are maintained via one-hop broadcast messages called link status messages (1). The failure to receive acknowledgements for a certain number of link status messages constitutes a link failure, which will be discussed later in this section (7).

periodically are those between a node and all of its neighbors, i.e. routes one hop away (7). These routes are maintained via one hop broadcast messages called link status messages (1). The failure to receive acknowledgements from a certain number of link status messages constitutes a link failure, which will be discussed later in this section (7). Each of the ZigBee routers and the coordinator node contains a neighbor table and a routing table. The neighbor table contains entries for all nodes within one hop. Each entry also contains a value for outgoing link cost, which is a measure of how good the link is based on the probability of packet loss in transmission. The routing table contains entries corresponding to the dierent destination nodes it knows how to reach. Routing a message from a source node to a destination node requires a path. The source node first checks the neighbor table for a destination match and then the routing table for a valid route to the destination. If there is no route in the table and a discovery for this route is not already underway, a path discovery process is initiated. The path discovery process begins with the source broadcasting a request for a route to all of its neighbors. The cost of the path is stored in the route request packet. Each neighbor that receives the packet updates that path cost with the cost of its links, stores the address of the node it received the packet from, and then rebroadcasts the route request if it is not the destination and if it does not know of a better path to the source than the path the route request took. This latter condition ensures that only the shortest paths are used, minimizing network congestion. If the node receiving the route request is the destination, then a route reply is created. The route reply packet contains a new measure of path cost from destination to source. The route reply is then unicasted to the source via the previous-node addresses that have been stored in each node’s memory along the way. Each node that receives the route reply packet updates its memory with the cost of the path to the destination and updates the packet with the cost of its link to the next node. It then sends the route reply packet up to the next node. The source node in the meanwhile is waiting for these route reply packets to return to it within a certain timeout. While it may receive many route reply packets, the source node will only save the path with the lowest path cost. By the end of the path discovery process, the source node has in its memory the shortest path to the destination. Route maintenance procedures in ZigBee mesh routing is necessary if a link breaks. Link failure is detected when a node fails to receive several link status messages from its neighbor. If link SPRING 2011

29


Research Articles RESEARCH ARTICLES failure occurs during the forwarding of a packet, a network status command frame is sent back to the source node of the packet. The source node then removes the routing table entry for that node (1). The design of a mesh network in this matter has many advantages. Nodes do not have to store paths to all other nodes, and for the paths they do store, they only need the next hop address, since each hop will know the next hop in the path. Not only does this provide memory savings, but also gives the network flexibility when dealing with link failure. The sacrifice is the path discovery overhead, but this is mitigated by the propagation of only the shortest paths.
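As an illustration of the bookkeeping just described, the sketch below models route discovery on a toy topology. Flooding route requests while suppressing rebroadcasts of costlier paths is equivalent to a lowest-cost-path search, which we simulate directly with Dijkstra's algorithm; this is an algorithmic sketch under that equivalence, not the ZigBee PRO implementation.

import heapq

def discover_route(links, source, dest):
    """links: {node: {neighbor: link_cost}}, as held in neighbor tables.
    Returns (path_cost, path) from source to dest."""
    frontier = [(0, source, [source])]
    best_cost = {source: 0}           # cheapest known cost to reach each node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dest:
            return cost, path         # route reply unicasts back along path
        for neighbor, link_cost in links[node].items():
            new_cost = cost + link_cost
            # "rebroadcast" only if no cheaper path through here is known
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# coordinator C and routers R1..R3; costs reflect per-link loss estimates
links = {
    "C":  {"R1": 1, "R2": 4},
    "R1": {"C": 1, "R2": 1, "R3": 5},
    "R2": {"C": 4, "R1": 1, "R3": 1},
    "R3": {"R1": 5, "R2": 1},
}
print(discover_route(links, "C", "R3"))  # -> (3, ['C', 'R1', 'R2', 'R3'])

Note that the multi-hop path through R1 and R2 wins over the direct but lossier neighbors, mirroring how the route request's accumulated path cost selects routes.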

2. Problem Description

Our goal is to evaluate the performance of a mesh network consisting of XBee-PRO modules, particularly with moving nodes. Our initial hypothesis was that the constant movement of nodes should affect ZigBee mesh routing operation. We believed that movement at high enough speeds could cause frequent link failures, which would negate the benefits of routing tables as new paths would be needed. In the worst case, links would have to be repaired via path discovery upon every transmit, which could significantly increase latency. We conducted experiments measuring performance data for point-to-point and multi-hop networks, as well as two tests with a moving node to test our hypothesis.

3. Related Work

Previous works (6, 8) have done simulation and experimental performance analysis of the IEEE 802.15.4 PHY and MAC layers. Additionally, work has been done in analyzing ZigBee network performance in star and point-to-point configurations (2). We could not find research analyzing ZigBee multi-hop performance or performance with moving nodes. Since we are also interested in the performance of ZigBee networks for use in UAVs, we felt it necessary to mention SensorFlock. A team at the University of Colorado, Boulder developed SensorFlock, "an airborne wireless sensor network of micro-air vehicles," to study toxic plume dispersion (5). SensorFlock entails a set of five micro-air vehicles, each weighing less than 500 grams. Each plane contains an autopilot board with an XBee-PRO module. Instead of using the ZigBee network layer, the team implemented their own routing algorithm, which appears to be based on AODV. They performed measurements of received signal strength indication (RSSI) for air-to-air, air-to-ground, and ground-to-ground communication, and found air-to-air communication to have the highest RSSI as the distance between nodes was varied. Packet loss was also found to increase with distance more rapidly for ground-to-ground communication than for air-to-ground communication. They did not measure multi-hop performance, however, as all vehicles were within range of the ground control station.

4. Experimental Setup

To conduct our experiments, we used XBee-PRO ZB modules from Digi International (model XBP24-ZB). Each module had an RPSMA connector with a 2.2 dBi duck antenna attached. We had one coordinator node and several router nodes. The coordinator was running Digi's latest API firmware (version 2170). The router nodes were running either the latest API firmware (version 2370) or the latest AT firmware (version 2270). We address the differences between API and AT firmware in our analysis of the experimental results. All nodes were set to use the same PAN ID and to have a baud rate of 115200. All measurements were conducted after our control network was fully formed (confirmed via the XBee module's Join Verification setting). We used FTDI USB-to-serial integrated circuits to interface the modules with x86 PCs. Data packets were always sent by the coordinator module to a single router module. To send packets and perform measurements, we wrote Java programs that used Andrew Rapp's open source XBee API (10) to interface with the module. Our benchmarking code is also open source and is available at http://github.com/wjwwood/au-proteus/tree/master/xbee-api/src/com/GCS/xbee_test/. Our benchmark logs are also available in the root folder of this github repository.

When conducting experiments, we used packets with the maximum data payload size. For a ZigBee PRO network using the mesh routing protocol, this is 84 bytes. The packets with control overhead, though, have a size of 128 bytes. The data payload consisted of a sequence number and a repeated constant value. We define the total transmission time as the elapsed time starting immediately before sending a packet and ending when an acknowledgement frame (ACK) has been received. Throughput is calculated as the total packet size (128 bytes) divided by the total transmission time. We declared an error when we failed to receive an ACK frame for a transmission or when the received packet had an inaccurate sequence number.

4.1 Point-to-Point Experiments

We first conducted experiments on one-hop packet transmission from a coordinator module to a router module approximately 1.5 meters apart. Since we have only one-hop transmission, we set the "NH" parameter in the firmware of the XBee modules to 1 to minimize the unicast timeout. We used "synchronous" and "asynchronous" transmission methods in different experiments. In synchronous transmission, the program would not request another packet transmission unless an ACK frame had been received for the previous transmission. In asynchronous transmission, the program did not wait for ACK frames and requested another transmission after a specified delay. The receiving router module was connected to a PC that verified the data payload of each packet. In each iteration of an experiment, we sent 1000 128-byte packets. We conducted 3 iterations of each experiment and averaged the results (for the 27 ms- and 28 ms-delayed asynchronous tests, we performed 4 iterations to catch transmission errors).

4.1.1 Synchronous Transmission with API Router

In this experiment, we collected transmission latency, error count, RSSI, and throughput measurements when transmitting 1000 packets from an API coordinator module to an API router module. The transmissions were synchronous, so transmission would only continue if an ACK frame was received for the previous transmission. This experiment was conducted to provide a performance baseline for a ZigBee network.

4.1.2 Synchronous Transmission with AT Router

This experiment was conducted in the same manner as the previous one, except using AT firmware instead of API firmware for the router. As the XBee API only works with XBee modules having API firmware, we could not verify the received packet's data payload contents using a Java program. This experiment only served to compare the performance of the different firmware versions.
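As an illustration, the synchronous measurement procedure used in the experiments above reduces to a loop of the following shape. This is a sketch under stated assumptions, not our published benchmarking code (which lives in the repository linked above); sendAndAwaitAck is a hypothetical stand-in for the xbee-api transmit-and-wait call, and the payload framing is simplified.

public class SyncBenchmark {
    static final int PACKETS = 1000;       // packets per iteration
    static final int PAYLOAD_BYTES = 84;   // maximum ZigBee PRO mesh payload
    static final int TOTAL_BYTES = 128;    // payload plus control overhead

    public static void main(String[] args) throws Exception {
        long errors = 0, totalNanos = 0;
        for (int seq = 0; seq < PACKETS; seq++) {
            byte[] payload = buildPayload(seq);            // sequence number + constant filler
            long start = System.nanoTime();
            boolean acked = sendAndAwaitAck(payload);      // blocks until ACK frame or timeout
            totalNanos += System.nanoTime() - start;
            if (!acked) errors++;
        }
        double avgMs = totalNanos / 1e6 / PACKETS;
        double throughputKbps = (TOTAL_BYTES * 8) / avgMs; // bits per ms equals kbps
        System.out.printf("avg %.2f ms, %.2f kbps, %d errors%n", avgMs, throughputKbps, errors);
    }

    static byte[] buildPayload(int seq) {
        byte[] p = new byte[PAYLOAD_BYTES];
        p[0] = (byte) seq;                 // simplified: real code encodes a full sequence number
        java.util.Arrays.fill(p, 1, PAYLOAD_BYTES, (byte) 0x42);
        return p;
    }

    // Hypothetical: stands in for wrapping the payload in a ZigBee Transmit Request
    // via xbee-api and waiting for the corresponding Transmit Status (ACK) frame.
    static boolean sendAndAwaitAck(byte[] payload) { return true; }
}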


4.1.3 Synchronous Transmission with API Router and no APS ACK

In order to see the impact of ACK packets on network performance, we disabled application support sub-layer (APS) ACK packet transmission. The APS ACK is a unicast ACK packet that travels from the destination node to the source node; it is requested by the ZigBee APS layer and delivered by the ZigBee network layer. Due to the API firmware, the source node still receives an ACK frame after the unicast timeout.

4.1.4 Asynchronous Transmission with API Router

We used asynchronous transmission with varying delay times to measure the impact of transmission delay on error rate. We delayed the transmission thread in 1 ms intervals from 31 ms down to 20 ms (a sketch of this loop appears at the end of this section). The delay does not include the time to send the transmit request frame through the serial link to the source node. When collecting asynchronous data, the latency was redefined as the time elapsed in just sending the packet itself.

4.2 Multi-Hop Experiments

We measured multi-hop performance to compare results with both point-to-point measurements and measurements with moving nodes. To ensure multi-hop transmission, the destination router module had its antenna removed and was moved away from the source coordinator until no signal could be established. A middle router node (with AT firmware) was then placed at this point, which reestablished the signal. The NH parameter was also tuned to allow a greater unicast timeout. We measured the performance of synchronous packet transmission to the end router module two hops away. We conducted three experiments, each with three iterations of 1000 synchronous packet transmissions.

4.2.1 Multi-Hop Synchronous Packet Transmission

This experiment provided results for baseline multi-hop performance, with 1000 packets being synchronously transmitted two hops away.

4.2.2 Multi-Hop without APS ACK

This experiment was conducted with no APS ACK packets as a comparison to baseline multi-hop network performance.

4.2.3 Multi-Hop without 16-bit Addresses

In this experiment, we sent all packets without explicitly specifying the 16-bit network address of the recipient in the ZigBee Transmit Request frame. Such a practice could either force address discovery or an address table lookup. We conducted this experiment to look at those effects.

4.3 Moving Node Experiments

We finally conducted two moving node experiments to witness the performance loss in a multi-hop network. Our test network consisted of a coordinator module, two stationary router modules, and a moving router module. All modules had antennas except for the moving router. The two stationary routers were placed in perpendicular hallways. This placement was made such that when the moving router was next to one of the stationary routers, it could not establish a link with the other router. The moving router could also not establish a one-hop link to the coordinator module.

4.3.1 Explicit Packets

In the first moving node experiment, we moved the end router node between the two stationary routers every two synchronous packet transmissions. Each packet transmission was initiated manually at the coordinator module. This experiment illustrated the worst case scenario, where a node "instantaneously" moves to a different part of the network in between transmissions.

4.3.2 Walking Test

In our other moving node experiment, we walked the moving node through a path while synchronous packet transmission was constantly occurring. The path was always out of range of the coordinator, and started in range of only one router. The middle of the path was in range of both routers, and the end of the path was in range of only the other router. This test was meant to emulate a more real-world scenario of a moving node.
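The asynchronous loop referenced in Section 4.1.4 simply drops the wait for the ACK frame. A correspondingly hedged sketch follows; sendNoAck is again a hypothetical stand-in for the transmit call, and packet verification happens at the receiving PC rather than in this loop.

public class AsyncBenchmark {
    public static void main(String[] args) throws InterruptedException {
        int delayMs = Integer.parseInt(args[0]);   // swept from 31 ms down to 20 ms
        for (int seq = 0; seq < 1000; seq++) {
            sendNoAck(seq);                        // returns as soon as the frame is on the UART
            Thread.sleep(delayMs);                 // fixed delay before the next transmit request
        }
    }

    // Hypothetical fire-and-forget transmit wrapper; no ACK frame is awaited.
    static void sendNoAck(int seq) {}
}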

5. Results and Discussion

In this section, we present our experimental data in tabular and graphical forms, along with our interpretation of the data.

5.1 Synchronous Point-to-Point and Multi-Hop Transmission

Table 1 shows our collected data for synchronous transmissions in both point-to-point and multi-hop configurations.

Setup   Avg. Transmission Time (ms)   Error (# of packets)   RSSI (dBm)   Throughput (kbps)
4.1.1   39.65                         0                      -41.33       25.67
4.1.2   31.86                         0                      -39.00       31.90
4.1.3   22.06                         0                      -42.33       45.96
4.2.1   58.62                         0                      -84.33       17.44
4.2.2   42.13                         9                      -86.67       24.56
4.2.3   62.53                         0                      -81.67       16.39

TABLE 1. Data from Point-to-Point and Multi-Hop Synchronous Packet Transmission Experiments

5.1.1 Point-to-Point Performance Analysis

The data for the point-to-point transmission provide a baseline standard for our mesh network's performance. As the transmissions are synchronous and the coordinator receives acknowledgements, there is no queuing delay and no potential for packet loss. The total transmission time is made up of several other components, however. The most dominant components of the total transmission time are the transmission time of the ZigBee Transmit Request frame to the source module via the UART and the transmission time of the ZigBee Receive Packet frame to the PC from the receiving module via the UART. The serial connection has a bandwidth of 115,200 bps, so it is the bottleneck in data transfer (as the physical layer has a bandwidth of 250 kbps). As our ZigBee transmit frames are 102 bytes in size and the ZigBee receive frames are 100 bytes in size (3), UART transmission for each ideally takes 7.1 ms and 6.94 ms, respectively. Additionally, reception of the ZigBee Transmit Status frame (i.e., the ACK frame) via the UART ideally takes 0.76 ms. Transmission time of the packet, assuming an ideal 250 kbps physical layer, is 4.1 ms. Transmission time of the ACK packet could be as low as 0.32 ms.
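For reference, summing the ideal UART and radio components listed above:

\[
7.10 + 6.94 + 0.76 + 4.10 + 0.32 = 19.22\ \text{ms}
\]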



Our expected total transmission time is then 19.22 ms, leaving us an error of 20.43 ms (39.65 ms measured minus 19.22 ms expected). We can attribute this time difference to error (such as the bandwidth being lower than specified) and to node processing time that we currently cannot quantify.

The measurements using an AT router somewhat validate our notion that UART delay is the dominant transmission time factor. There is a latency difference of 7.79 ms from the previous measurement. In an XBee module with an AT APS layer, the XBee module is programmed with AT commands and appears as a simple serial communication link to a connected device (e.g., a PC or microcontroller). On the other hand, XBee modules with API firmware communicate with frames, encapsulating all data for more advanced features and simpler programmability (3). Since the AT router does not encapsulate the received packet in a frame, the UART receive overhead drops from 6.94 ms to 5.83 ms (the time to just transmit the 84-byte packet payload). However, this only explains 1.11 ms of the difference. The rest of the discrepancy could lie in decreased processing time for AT routers over API routers (which perform frame checksumming and encapsulation (3)).

Our results for point-to-point packet transmission without APS ACK packets provide more insight into the timing of the system. Ideally, with such a setup, the ZigBee Transmit Status frame should be received immediately after the packet has been transmitted via the radio. This implies a total transmission time of 12 ms, which is a discrepancy of 10 ms from the experimental data. However, since the receiving node is not involved in these calculations when no APS ACK packets are present, we can isolate that portion of the transmission time by comparison with our previous measurements. Doing so shows that 17.6 ms is needed for processing and receiving the ZigBee Receive Packet frame and transmitting an APS ACK packet. An additional interesting result is that our experiments reported no transmission errors. We can explain this by the lack of queuing time: since we only transmit after receiving a Transmit Status frame, the transmission queue is always empty and thus does not drop packets.

5.1.2 Multi-Hop Performance Analysis

Our multi-hop experiments provide baseline measurements for comparison with the results of the moving node tests. Synchronous transmission over two hops results in a 32% throughput decrease compared to the analogous one-hop transmission. It is not a full 50% decrease, as the middle router module does not output anything on the serial link. The additional latency comes from the transmission and processing time of the data packet and of the ACK packet by the middle router. Additionally, performance loss is expected, as the RSSI of the last hop is significantly worse in the multi-hop experiment than in the one-hop experiment.

The multi-hop results without APS ACK packets show our first sign of packet loss with synchronous transmission. Our total transmission time is also unreasonably higher than in the point-to-point data with no APS ACK, which should not be the case, as the coordinator node is still doing the same job. However, a closer look at our log data revealed the source of the discrepancy. The packet loss was due to 5-second transmission timeouts caused by link failure with the middle router module. These 5-second timeouts significantly increased the average total transmission time. However, the median total transmission time over 3000 packets is 25 ms, which is a much more reasonable value.


Even this 3 ms difference from the point-to-point data can be explained by the low RSSI, which could result in lower physical-layer throughput.

The removal of the 16-bit address in the ZigBee Transmit Request frame was meant as a measure to force the coordinator to perform address or route discovery operations. Address discovery is necessary because all ZigBee routing is done using 16-bit network addresses instead of each node's unique 64-bit IEEE address. Address discovery involves a broadcast transmission of the 64-bit address until the transmission reaches the correct node, so theoretically it is a slow operation. However, each XBee coordinator and module possesses an address table that it uses in the APS layer to look up 16-bit addresses from 64-bit addresses. The 4 ms increase in total transmission time could then be explained by these lookups instead of constant address discovery. Our log files show that no address discovery was performed, which may confirm this. Since the table is in the APS layer, the network layer can operate with a 16-bit address and avoid route discovery.

FIGURE 2. Throughput vs. Total Transmission Time in Asynchronous Point-to-Point Transmission

5.2 Asynchronous Point-to-Point Transmission

This section provides data and analysis for the experiment detailed in Section 4.1.4. As can be seen from Figure 2, as the delay time between sequential transmissions decreases, the throughput of the network increases linearly. This makes sense because as the delay time decreases, more data is sent over the network in any arbitrary block of time. However, as Figure 3 shows, there are almost no errors in data transmissions until the total transmission time drops to 27 ms. Beyond that point, the network suffers significant packet loss, since the receiving node cannot keep up with the amount of data being sent. Since the delay is initiated after the data is sent across the UART to the transmitting XBee, the delay represents the amount of time it takes for the data to arrive and be processed by the receiving XBee. The data packet was 128 bytes in size, ideally making the amount of time it took to traverse the link 4.1 ms. This leaves 20.9 ms of unaccounted delay. To determine if there was a queuing delay, the formula for traffic intensity applies: La/R, where L is the number of bits in a packet, a is the number of packets per second, and R is the transmission rate in bits per second. Thus, with a total transmission time of 27 ms (i.e., 37 packets/s), a packet size of 128 bytes, and a transmission rate of 250 kbps, the traffic intensity is 0.14787, which is much less than 1. Thus, we expect no queuing delay and no dropped packets. In practice, however, we observed that there must be queuing delay, since packets are being dropped at the receiving node. This can once again be attributed to the fact that the network does not come close to reaching reliable 250 kbps transmission rates, thus making the traffic intensity peak with less data traversing the network.
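For reference, the traffic-intensity check above works out as follows (a worked instance assuming L = 128 bytes = 1024 bits and a = 37 packets/s; a slightly different rounding of the packet rate gives the 0.14787 quoted in the text):

\[
\frac{La}{R} = \frac{1024 \times 37}{250{,}000} \approx 0.15 \ll 1
\]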



FIGURE 3. Packet Error vs. Total Transmission Time in Asynchronous Point-to-Point Transmission

5.3 Moving Node Performance Discussion

We do not show numerical data in this section, but all of our logs are online, as previously mentioned. The results for the experiment described in Section 4.3.1 indicate that data transmission is guaranteed to succeed only when the mobile node first associates with the network. After the initial association, when the node is moved, the route from the coordinator to it is no longer valid, and the middle router must respond with a network status command frame indicating a broken link. If the next attempted transmission occurs before all of the routers broadcast their link status messages, the broken link is only discovered after a transmission is attempted and then fails after a timeout. A route discovery is then initiated. However, the packet is never received at the receiving node, and the data is ultimately lost. If the destination node continuously hopped between routers after each transmission attempt, it could suffer 100% packet loss. However, if the transition is gradual, as in the experiment described in Section 4.3.2, the node "smartly" associates with the closest router and facilitates reliable communication; this is because the neighbor table ages, deleting old entries periodically. It is only when the transition is fast enough that the neighbor table is not refreshed and the network status command frame (indicating a broken link) is not sent that there is significant packet loss. According to the XBee-PRO manual, link status messages are sent 3-4 times a minute (3). We verified that no packet loss would occur if the broken link was reported in time by waiting 15 seconds after moving the node before sending another packet. When doing this, we received all packets. Even so, the maximum throughput of a moving node is slightly less than that of a static multi-hop network, due to the high latencies at the fringes of router range before the mobile node reassociates with the stronger link.

6. Conclusion

The goal of this paper was to evaluate the performance of moving nodes in a ZigBee PRO mesh network, using XBee-PRO modules as the nodes and measuring the maximum throughput and latency of the network. The results indicate that the mesh network does not respond favorably to nodes that are moved from one router to another before the routers send their network status command frames to indicate a broken link. We observed the possibility of 100% packet loss should the movement occur in between neighbor table refreshes. We would also like to note that the performance of our multi-hop network could possibly be attributed to the low RSSI of the link between the source and middle modules. We can attribute this low RSSI to our indoor setting, as there was a door between the source and middle modules, and the modules themselves were approximately 20 meters apart. However, since ZigBee modules are often used for home automation purposes (1), we find our results relatively disconcerting. In relation to our work with UAV mesh networking, we find through our results that ZigBee mesh routing with the necessity of multiple hops will result in undesirable performance. Alternatives such as many-to-one routing may provide better performance, and may be a subject of future research.

7. Acknowledgments

The authors thank Drs. Saad Biaz and Richard Chapman of Auburn University for their guidance and help in securing materials for this research. This work was funded by NSF Award #0851960.

8. References

1. ZigBee Alliance. ZigBee Specification. Technical report, ZigBee Alliance, January 2008.
2. M. Armholt, S. Junnila, and I. Defee. A non-beaconing ZigBee network implementation and performance study. In Communications, 2007. ICC '07. IEEE International Conference on, pages 3232-3236, 2007.
3. Digi International. XBee/XBee-PRO ZB RF Modules, April 2010.
4. Bob Gohn. The ZigBee PRO feature set: More of a good thing. http://www.embedded.com/design/205100696, December 2007.
5. Ahmad Bilal Hasan, Bill Pisano, Saroch Panichsakul, Pete Gray, Jyh Huang, Richard Han, Dale Lawrence, and Kamran Mohseni. SensorFlock: A mobile system of networked micro-air vehicles. Technical Report TR-CU-CS-1018-06, University of Colorado at Boulder, 2006.
6. Mikko Kohvakka, Mauri Kuorilehto, Marko Hannikainen, and Timo D. Hamalainen. Performance analysis of IEEE 802.15.4 and ZigBee for large-scale wireless sensor network applications. In PE-WASUN '06: Proceedings of the 3rd ACM International Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor and Ubiquitous Networks, pages 48-57, New York, NY, USA, 2006. ACM.
7. Charles E. Perkins and Elizabeth M. Royer. Ad-hoc on-demand distance vector routing. In IEEE Workshop on Mobile Computing Systems and Applications, pages 90-100, 1999.
8. M. Petrova, J. Riihijarvi, P. Mahonen, and S. Labella. Performance study of IEEE 802.15.4 using measurements and simulations. In Wireless Communications and Networking Conference, 2006. WCNC 2006. IEEE, volume 1, pages 487-492, 2006.
9. Peng Ran, Mao-heng Sun, and You-min Zou. ZigBee routing selection strategy based on data services and energy-balanced ZigBee routing. In Asia-Pacific Conference on Services Computing, 2006 IEEE, pages 400-404, 2006.
10. Andrew Rapp. xbee-api. http://code.google.com/p/xbee-api/, May 2009.
11. K. Shuaib, M. Boulmalf, F. Sallabi, and A. Lakas. Co-existence of ZigBee and WLAN, a performance study. In Wireless Telecommunications Symposium, 2006. WTS '06, pages 1-6, April 2006.





Biofuel from Plant Fibers: Coculturing of Ligninolytic, Cellulolytic, and Fermenting Organisms
Vinayak Kumar / University of Pennsylvania

A simplified method of producing ethanol from lignocellulosic raw materials is attempted in this study. Current methods of lignocellulosic ethanol production involve a number of separate processes, such as degradation of lignin, the subsequent digestion of the remaining cellulose into sugars, and the final fermentation into ethanol. The present study attempted to degrade and ferment common lignocellulosic materials to produce ethanol in a simpler and more environment-friendly manner. This was accomplished by coculturing the organisms that carry out these functions in the same container. Lignin was degraded with a pretreatment by a ligninase-producing organism that leaves the cellulose virtually untouched. The remaining cellulose was converted into ethanol by coculturing cellulase-secreting organisms and fermenting organisms together on the plant fibers in anaerobic conditions. In this study, the inexpensive growth conditions used for coculturing these organisms were determined experimentally. Testing different cellulose sources (sawdust, straw, tissue paper, and newspaper) with the coculture method revealed that the paper sources were readily digested and fermented into ethanol, whereas lignin-rich materials such as straw and wood need a lignin degradation step to expose the cellulose for enzymatic digestion. This coculturing process is a potential method that, when optimized, can reduce the number of processes and the costs involved in ethanol production.

1. Introduction

Global reserves of fossil fuel are rapidly being depleted, raising major concerns about future energy supplies. This has led to great interest in renewable sources of energy. Ethanol represents an alternative source of liquid fuel that can be produced from renewable resources. Ethanol, an oxygenated fuel containing 35% oxygen, can reduce overall greenhouse gas emissions from combustion, and has been proven useful for automobiles as a fuel substitute for gasoline (22).

The traditional method of ethanol production utilizes common starch sources such as corn, potatoes, and other tubers (2). The production process involves essentially three steps: saccharification, fermentation, and product recovery. Saccharification is the process in which the complex carbohydrates in the raw material are converted into sugars. In fermentation, the sugars resulting from the saccharification step are metabolically converted into ethanol by microorganisms such as Saccharomyces cerevisiae. Ethanol is then recovered from the fermenting medium by distillation and further concentration. In addition to starch sources, sugars from sugar cane and molasses are also fermented to ethanol in high yield (9). However, the traditional methods of ethanol production from starch and sugar cane suffer from the high costs of the raw materials. Since these materials are also used as a food source for humans and animals, unlimited supplies at low cost may be difficult to secure, as prices change with market fluctuations.

Cellulose-containing materials such as wood, paper, straw, and other fibrous plant materials represent a cheap and abundant source of raw material for ethanol production, because they can be digested to yield sugars (4, 14, 18). These materials mainly contain lignocellulose, comprising lignin, hemicellulose, and cellulose. Lignin is a brown-colored, complex non-carbohydrate polymer. Hemicellulose and cellulose are insoluble polymers of sugars. In plant cell walls, lignin encloses the hemicellulose and cellulose and confers structural rigidity on the plant. Wood contains a higher percentage of lignin than softer plant materials. Because of this architecture of lignocellulosic materials, it is difficult to access the hemicellulose and cellulose for enzymatic hydrolysis, and a separate step for lignin degradation is necessary (5).

The major saccharification processes for lignocellulosic materials are acid hydrolysis and enzymatic hydrolysis. The acid hydrolysis method uses treatment with sulfuric acid. Enzymatic hydrolysis uses cellulase enzymes, produced by some fungi and bacteria, that can digest cellulose into sugars. For effective enzymatic activity on the cellulose, the crystalline structure of the lignocellulose must be broken by physical or chemical means to expose the cellulose and hemicellulose to the enzymes. For this purpose, methods such as exposure to high temperature and pressure, milling, freezing, and some chemical treatments are used. The slurry thus prepared is treated with the cellulases to release the sugars. Although the enzymatic method is a slow process and enzyme costs are currently high (10), the mild process conditions used here are more environment-friendly. The filamentous fungus Trichoderma reesei is well recognized as a good source of high-quality cellulases (17). It produces several enzymes, such as endoglucanases, cellobiohydrolases, and β-glucosidase, which are involved in the hydrolysis of cellulose into its constituent sugars (6, 8).

One strategy for cost reduction in ethanol production is simultaneous saccharification and fermentation (SSF) (13, 19, 21). The current SSF methods involve the treatment of cellulose with commercially extracted cellulase enzymes in a culture of fermenting organisms, such as Saccharomyces cerevisiae (1, 12). Still, this involves multiple steps, such as separately culturing the cellulase-producing organisms to collect the cellulase enzymes, treating the lignocellulosic materials to degrade the lignin, and finally treating the prepared cellulose with the extracted enzyme solution and yeast for SSF (10). In this report, a method to further simplify this process by coculturing three different organisms on lignocellulosic materials is described. These organisms include a fungus (Phanerochaete chrysosporium) that produces ligninases that degrade lignin, another fungus (Trichoderma reesei) that produces cellulases to degrade cellulose to sugars, and a fermenting organism, namely yeast (Saccharomyces cerevisiae), that ferments the sugars to ethanol. The coculturing of these organisms on lignocellulosic materials as described below resulted in the digestion of lignin and cellulose and the fermentation of the resulting sugars to ethanol.

2. Materials

Trichoderma reesei (RUT-C30)
Phanerochaete chrysosporium
Saccharomyces cerevisiae
3,5-dinitrosalicylic acid (DNS)
Ethyl methanesulfonate (EMS)
Coomassie Blue stain
Potassium dichromate

3. Methods

Preparation of culture media: Inexpensive culture media were prepared and tested for optimal growth of the organisms as described below. A natural potato-dextrose broth (PDB) was produced by boiling cut potatoes in mineral water and then autoclaving the collected fluid to ensure sterility. Soil extract was prepared by mixing 0.5 kg soil (collected from beneath the topsoil) in 1.0 L mineral water, decanting and centrifuging the extract to remove the sedimentable particles, followed by autoclaving. Yeast extract solution was prepared by adding 10.0 g yeast extract and 5.0 g sodium chloride to 1.0 L of distilled water and autoclaving. Other media mentioned in the experiments were made by mixing different components and sterilizing by autoclave or by filter sterilization through 0.2 micron filters. Glucose was added to the media from a 20% stock (filter-sterilized) as needed in the experiments.

Trichoderma reesei and yeast cultures: Trichoderma reesei spores (conidia) were inoculated into PDB medium and incubated at 30°C in an incubator-shaker. A drop of this culture was plated onto potato-dextrose agar plates (1.0% dextrose, 1.5% agar) and incubated at 30°C for the fungus to grow. Yeast was inoculated into the PDB medium and incubated at 30°C overnight to establish the inoculum for coculturing.

EMS (ethyl methanesulfonate) mutagenesis: 10^8 T. reesei spores (conidia), counted using a hemocytometer, were placed in 5 different 15 mL tubes and resuspended in 1.5 mL of sodium phosphate buffer, pH 7.0. In a chemical hood, 30 μL of EMS was carefully added to the spore suspension in four of the tubes and mixed (20, 24). The tubes were tightly capped and incubated in a 30°C incubator-shaker. The four EMS-treated tubes were incubated for 30 min, 60 min, 90 min, or 120 min, after which the EMS was inactivated by adding 5 mL of 5% sodium thiosulfate; the spores were then resuspended in PDB. The untreated culture received sodium thiosulfate in the same manner. 20 μL of each culture was plated onto PDA plates to determine the death rate of the EMS-treated culture compared to the untreated one. The rest of the culture was used for selection of ethanol-tolerant mutants.

Cellulase assay (filter paper assay): The cellulase assay was based on the digestion of cellulose to glucose by cellulase, followed by assaying the relative quantity of glucose produced (3). To two tubes containing 200 μL of the cleared medium, 25 μL of 0.5 M sodium citrate (pH 4.8) was added. Cellulose (tiny strips of Whatman filter paper) was added to one tube and nothing was added to the other. Both tubes were incubated for 1 hr at 50°C. 70 μL of each reaction was then assayed for glucose. The relative cellulase activity was calculated by subtracting the absorbance at 540 nm of the reaction without cellulose (control) from that of the reaction containing cellulose.

Ethanol assay: The ethanol assay was based on the reduction of acidic potassium dichromate by ethanol. The dichromate ions, Cr(VI), are reduced to chromic products, Cr(III), changing the color from yellowish to bluish-green (23). The reagent was prepared by adding potassium dichromate to 1% concentration in 6 N sulfuric acid. However, the medium in the present experiments contains glucose and possibly other reducing agents, so directly assaying the solution for ethanol would lead to inconclusive reduction of dichromate. Therefore, to distinguish the ethanol from other reducing agents, a new assay based on the volatile nature of ethanol was developed and used to monitor ethanol formation. One side of the upper portion of an Eppendorf tube was cut open, and 120 μL of the dichromate reagent was added to it. The tube was then hung from the mouth of the flask after fermentation and sealed with parafilm (see Figure 6A). The color change was read at 575 nm after 2 hours. Fresh medium in a similar flask was the negative control, and 1% ethanol in medium was used as the positive control.
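For reference, the redox chemistry underlying this assay balances as follows, assuming ethanol is oxidized through to acetic acid (the usual endpoint in acidic dichromate); the equation is supplied here for clarity and is not part of the original protocol:

\[
2\,\mathrm{Cr_2O_7^{2-}} + 3\,\mathrm{C_2H_5OH} + 16\,\mathrm{H^+} \longrightarrow 4\,\mathrm{Cr^{3+}} + 3\,\mathrm{CH_3COOH} + 11\,\mathrm{H_2O}
\]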





FIGURE 1a. Growth of T. reesei in different culture media

FIGURE 1b. Growth of yeast in different culture media

*Media tested: (A) soil extract, (B) A + 0.1% yeast extract, (C) A + 10% potato broth, (D) 25% soil extract, 10 mM NaH2PO4, 10 mM MgSO4, 100 mM (NH4)2SO4 (pH ~5), (E) D + 0.1% yeast extract, (F) D + 10% potato broth, (G) 1% yeast extract



Other biochemical assays: The protein assay was done with Coomassie blue protein assay reagent (Bio-Rad), based on the binding of Coomassie blue dye to proteins, which results in the development of a blue color. The glucose assay was based on the reduction of 3,5-dinitrosalicylic acid (DNS) by reducing sugars to 3-amino-5-nitrosalicylic acid, which changes color from yellow to orange-red (11).


4. Results

Growth of T. reesei and S. cerevisiae in different culture media

Several simple growth media that would be cheap for industrial use were tested for T. reesei and S. cerevisiae. Figure 1 shows the growth of T. reesei and yeast in different media supplemented with or without 1% glucose. The compositions of media A to G are shown in Figure 1. The cultures were set up with the same amount of T. reesei spores or yeast in 2 mL media, with or without 1% glucose. T. reesei was grown for 48 hours, filtered, and weighed to measure the biomass, since T. reesei sprouts hyphae and forms mycelia (Figure 1a). Yeast was grown separately for 24 hours, and the absorbance at 600 nm was measured for comparison of growth (Figure 1b). In each of these media, the addition of glucose enhanced growth. In all subsequent experiments, medium E was used; it showed efficient growth and contains minimal levels of protein, so that cellulase production can be monitored without interference from protein in the medium.

Ethanol tolerance of T. reesei and yeast

Although the tolerance of S. cerevisiae to small amounts of ethanol is known, the ethanol tolerance of Trichoderma reesei was not, since it does not naturally live in such an environment. Therefore, both organisms were tested by culturing T. reesei spores and yeast in 2 mL of PDB containing various concentrations of ethanol, as indicated in Figure 2. Trichoderma was tolerant up to 3% ethanol, with lesser growth at 4% (Figure 2a). As expected, yeast tolerated much higher ethanol concentrations, up to ~10% (Figure 2b), though growth slowed with increasing concentrations.

Random mutagenesis of T. reesei by EMS treatment

In an attempt to obtain a strain of T. reesei that is more tolerant to ethanol, random mutagenesis of the Trichoderma spores was attempted using ethyl methanesulfonate (EMS). EMS treatment for 60 minutes resulted in about 50% death of the spores compared to the untreated control, and this condition was used for selection of ethanol-tolerant mutants by growing the treated spores in 3%, 4%, and 5% ethanol. The enriched culture was plated on PDA plates, and several clones were selected for culturing at different concentrations of ethanol in media containing cellulose as the only glucose source. No mutants were obtained that showed better tolerance than the original strain (up to 4% ethanol); cultures in 5% ethanol showed poor growth. However, one mutant from the 4% ethanol culture showed much faster degradation of the cellulose in the medium. This mutant was used for all subsequent experiments.


Culture of T. reesei in the presence and absence of cellulose

In order to confirm the ability of T. reesei to digest cellulose as a source of energy, wild type and mutant spores (10^7) were inoculated and cultured in 30 mL of medium E in the absence or presence of cellulose. Tissue paper, soaked in distilled water and autoclaved, was used as the source of cellulose for the initial experiments. Both wild type and mutant fungi grew well in the presence of cellulose, and there was very little growth in its absence. In protein assays, the cellulose-containing cultures showed higher levels of protein, likely representing more cellulase production stimulated by low levels of cellulose degradation products (di- and oligosaccharides); protein was much higher in the mutant culture than in the wild type culture (Figure 3a). To confirm that the proteins detected in the cultures were cellulases, the media were assayed for cellulase activity by the filter paper assay. In correlation with the protein levels detected in the media, the cellulose-containing cultures showed high cellulase activity, more in the mutant culture than in the wild type (Figure 3b).



FIGURE 2. Ethanol tolerance of (a) T. reesei and (b) yeast in 2 mL medium E containing different concentrations of ethanol. The growth of T. reesei was compared based on total biomass after two days culture, and yeast growth was compared based on absorbance at 600 nm.

Surprisingly, the mutant culture, which showed higher levels of protein, showed very little glucose content, whereas the wild type culture, which had lower protein levels, showed a high level of glucose (Figure 3c). Both wild type and mutant cultures devoid of cellulose produced very little glucose. When the culture flasks originally containing cellulose were examined closely, it was noticed that the cellulose in the mutant culture had completely disappeared, while the wild type culture still contained small amounts of cellulose. It was inferred that most of the glucose in the mutant culture was consumed after all the cellulose was digested, leaving the culture without other energy sources for an extended time. This explains the results in Figures 3b and 3c. More cellulose was then added to both cultures, and protein and glucose were assayed after one day, when excess cellulose was still present in both cultures. Unexpectedly, the protein levels became very low in both cellulose-containing cultures, although higher levels of protein had been observed before the addition of fresh cellulose (Figure 4a).

FIGURE 3. The culture medium was collected, cleared by centrifugation, and assayed for (a) protein content, (b) cellulase activity, and (c) glucose content after culturing T. reesei RUT-C30 (labeled as wild-type) and the mutant for 5 days.




The glucose levels were high in both cultures; the mutant culture had quickly caught up to the levels of the wild type culture, further demonstrating the high cellulolytic capabilities of this mutant strain (Figure 4b). The inverse relation between the cellulose content and the detectable cellulase protein in the culture supernatant is likely due to the binding of the enzyme to the cellulose during digestion. Therefore, the protein assay of the supernatant in the cellulose-containing culture may not be an accurate measure of cellulase production. Rather, the rapid disappearance of cellulose in the mutant culture is indicative of the high cellulolytic activity of this mutant compared to the original strain.

FIGURE 4. The media in the wild type and mutant T. reesei cultures were assayed for (a) protein and (b) glucose content after adding cellulose and incubating for one day.

Coculturing of T. reesei and yeast and production of ethanol

The experiments thus far showed that simple sugars were produced in excess, indicating that a coculture might be supported. The coculturing of T. reesei and yeast was initiated by adding equal amounts of yeast cell suspension into all four flasks. After one day of culturing, the flasks were flushed with nitrogen and sealed with parafilm to create anaerobic conditions, to see whether ethanol would be produced. After one day of anaerobic culturing, the half-cut Eppendorf tubes containing potassium dichromate described in the Methods were inserted and hung from the top of each flask. In the presence of alcohol vapor, the yellow dichromate inside the hanging Eppendorf tube is reduced to bluish-green chromic ions. After two hours, the dichromate had turned bluish-green in the cellulose-containing flasks (Figure 5). This assay indicates that alcohol was produced in the cellulose-containing cocultures. The positive control was a flask containing 1% alcohol and was used to estimate the approximate alcohol content produced in the cultures in one day of anaerobic fermentation. The negative control was a flask containing fresh medium. When the dichromate reagents in the tubes were collected and the absorbance at 575 nm was measured, there was more than 1% ethanol in both flasks in comparison with the positive control, and slightly more in the mutant (Figure 5).

Culture of T. reesei in the presence of different cellulose sources

Spores from the super-active mutant T. reesei (10^7) were inoculated and cultured in 30 mL of medium E in the absence or presence of different cellulose sources. Tissue paper, used successfully in the previous experiments, served as a positive control. In addition, wood (sawdust), straw, and newspaper were tested in the cultures as cellulose sources. All these materials were crushed and pressure-cooked so that the experiments could be conducted in sterile conditions. Visual observation of these cultures revealed that the tissue paper (positive control) and the newspaper were digested by the mutant T. reesei rapidly. The papers disappeared, and the branched mycelia of the well-grown fungus were visible. The straw and the sawdust did not show appreciable digestion, and there was very little fungal growth. In agreement with the observed fungal growth and the disappearance of the cellulose, the cultures containing the tissue paper and the newspaper showed significant levels of glucose compared to the medium alone (negative control) or the cultures containing the straw and sawdust (Figure 6a). The straw and wood sources were not efficiently digested by the T. reesei culture, because these cellulosic sources have a high content of lignin, and the cellulose is not accessible to the cellulases produced by the fungus. These materials require a pretreatment to degrade the lignin for direct use in the culture.

All four cultures were converted into the coculturing system by the addition of yeast and switched to anaerobic fermentation conditions for one day. The dichromate assay indicated that ethanol was produced in the cultures containing tissue paper and newspaper (Figure 6b). The flasks containing sawdust and straw produced negligible amounts of ethanol.

Pretreatment by P. chrysosporium, culture of T. reesei, and glucose assay

Spores of Phanerochaete chrysosporium were inoculated into three flasks containing medium E: pure medium (negative control), medium with excess tissue paper (positive control), and medium with excess straw as a representative lignocellulosic material. The spores grew in these flasks for two weeks, until the color of the straw became slightly pale. This is because when lignocellulosic sources are delignified, they become whitish; this white color is the exposed cellulose. Then the same amount of germinated Trichoderma spores (~10^7 spores) was added to each of the flasks, and the flasks were cultured for 5 days, until complex fungal mycelia were visible. A glucose assay was then performed on the media, giving the results shown in Figure 7a.

The results in Figure 7a reveal that the tissue paper culture produced significantly higher levels of glucose than the straw and medium cultures.


FIGURE 5. Coculturing of T. reesei and yeast and production of ethanol on cellulose. The ethanol assay was set up in the different cultures and controls with potassium dichromate reagent in hanging tubes; the color change, measured by absorbance at 575 nm, indicates the production of ethanol. The mutant coculture produced more ethanol than the wild type coculture (>1% ethanol).



FIGURE 6. (a) Generation of glucose in the mutant T. reesei cultures on different sources of cellulose 7 days after inoculation of spores; (b) production of ethanol in the cocultures of mutant T. reesei and yeast in medium E containing different sources of cellulose.

FIGURE 7a. After 2 weeks of ligninase activity by P. chrysosporium, T. reesei was added and cocultured for 5 days in aerobic conditions. This is a glucose assay of the medium after 5 days.

FIGURE 7b. The experiment was repeated, and this is the glucose assay after 0, 6, and 12 hours of coculture.

The straw culture contained almost no glucose. This was an unexpected shortcoming, which may have been due to competition for resources between the organisms, as they both consume the released glucose.

Pretreatment by P. chrysosporium, culture of T. reesei, and glucose assay at timed intervals

To determine whether competition was the reason for the low level of glucose, this experiment was repeated. A new Phanerochaete chrysosporium culture was grown on straw for two weeks, until a similar color change appeared. Then, the same amount of Trichoderma spores (~10^7) was added to the culture. However, in this experiment, the supernatant was tested for glucose at shorter time intervals: 0, 6, and 12 hours. The results are shown in Figure 7b.

The results in Figure 7b indicate that the culture had a significant concentration of glucose at 6 hours, which was greatly diminished by the 12-hour mark. This experiment revealed two important facts about the process: (1) delignification of straw is very slow (the straw had not been completely degraded by the end of the coculture, which should have happened if it were completely delignified); and (2) the cellulase enzymes were degrading the cellulose at a higher rate than the ligninase could degrade the lignin, and the resulting glucose was consumed by both organisms.

Fermentation

Fermentation was then carried out, despite the low glucose readings. The same amount of yeast (2 grams) was added to each culture, and the flasks were sealed using parafilm. The flasks were cultured for approximately 30 hours under these anaerobic conditions. An ethanol assay was then performed on the flasks.




FIGURE 8. Relative concentration of ethanol produced from the triple coculture. Medium alone was the negative control and 1% ethanol was the positive control. The straw culture is much less productive than the paper culture, but not as unproductive as the glucose assay would suggest.

The results of this experiment (Figure 8) reveal that the medium (negative control) produced a negligible level of ethanol, the tissue paper culture produced more than 1% ethanol, and the straw culture produced about 0.5% ethanol. The straw culture yielded much higher concentrations of ethanol than the glucose assay indicated. This apparent abnormality can be explained by the fact that the metabolic systems of P. chrysosporium and T. reesei slow down in the absence of oxygen, and therefore consume less glucose in anaerobic conditions than normal, while their respective enzymes remain active in solution.

5. Discussion

The results of the experiments in Figure 1 demonstrate that Trichoderma reesei and yeast can be grown easily in inexpensive media containing minerals and very small amounts of yeast extract, so that culturing conditions can be scaled up economically. T. reesei cultured in cellulose-containing media degrades cellulose and produces more glucose than the fungus consumes for its initial growth, as is evident from Figures 3 and 4. The presence of cellulose in the culture induces the production of cellulases by T. reesei, stimulated by di- and oligosaccharide products of cellulose degradation in a positive feedback loop. When excess cellulose is present, the cellulase enzymes predominantly remain bound to the cellulose and actively hydrolyze the glycosidic bonds of the cellulose. Although some of the glucose generated from cellulose is consumed by T. reesei during its initial growth, the addition of yeast and the conversion to anaerobic conditions reduced the rate of fungal growth, and hence it is unlikely that a large portion of the generated sugars was consumed by the coculture.

Although the original aim of mutagenesis was to obtain a T. reesei strain with higher ethanol tolerance, the isolation of the hyper-cellulolytic mutant strain was interesting and warrants further characterization.


The observation that this mutant digests cellulose much more rapidly than the original RUT-C30 strain needs to be studied in the presence of other cellulase-inducing agents such as sophorose.

Coculturing of T. reesei and yeast, as shown in Figure 5, can potentially reduce the number of steps and the costs of simultaneous saccharification and fermentation for ethanol production from cellulosic sources. Testing the coculture method on different cellulosic sources revealed that the cellulases from T. reesei could readily digest the processed cellulose from waste paper sources. Unprocessed sources such as wood and straw are not digested effectively, because of the high lignin content that prevents cellulase access to the cellulose (Figure 6). These sources need further processing to remove or degrade the lignin and expose the cellulose. This can be done by chemical means or by ligninase treatment (7, 15). Ethanol was produced from straw after a pretreatment with P. chrysosporium, followed by coculturing the mixture in anaerobic conditions with T. reesei and S. cerevisiae (Figures 7 and 8), but the low levels of ethanol, combined with the great length of time required for the pretreatment (2 weeks), indicate that this triple coculture may not be an economical method of commercially producing ethanol unless the process is further optimized. However, processed cellulose such as waste paper products may be directly used as feedstock for the coculture method, because contaminants such as printing ink and other paper processing materials did not significantly interfere with the saccharification and fermentation, which readily produced ethanol (16).

The present study, based on combinations of coculturing ligninolytic, cellulolytic, and fermenting organisms on lignocellulosic and cellulosic sources, demonstrated the feasibility of a simple method of simultaneous saccharification and fermentation to produce ethanol. However, this process can still be improved. Various culturing conditions, the relative amount of each organism, the nutrient content of the medium, and other growth conditions for the organisms must be optimized. Additionally, efficient product recovery methods must be tested.


Future studies will be directed at these questions, to improve the feasibility of producing biofuels from lignocellulosic and cellulosic sources through coculture.

6. Acknowledgement

I would like to thank RheoGene, Inc. for allowing me, as a student visitor, to perform most of this work at their facilities.

7. References

1. Boubekeur B, Bunoust O, Camougrand N, Castroviejo M, Rigoulet M, Guerrin B (1999) A Mitochondrial Pyruvate Dehydrogenase Bypass in the Yeast Saccharomyces cerevisiae. J Biol Chem 274: 21044-21048.
2. Brehmer B, Bals B, Sanders J, Dale B (2008) Improving the corn-ethanol industry: studying protein separation techniques to obtain higher value-added product options for distillers grains. Biotechnol Bioeng 101: 49-61.
3. Ghose TK (1987) Measurement of Cellulase Activities. Pure & Appl Chem 59: 257-268.

4. Heaton EA, Flavell RB, Mascia PN, Thomas SR, Dohleman FG, Long SP (2008) Herbaceous energy crop development: recent progress and future prospects. Curr Opin Biotechnol 19: 202-209.
5. Hendriks AT, Zeeman G (2008) Pretreatments to enhance the digestibility of lignocellulosic biomass. Bioresour Technol, Epub Jul 2.
6. Ilmey MN, Saloheimo A, Onnela ML, Penttila ME (1997) Regulation of Cellulase Gene Expression in the Filamentous Fungus Trichoderma reesei. Appl Environ Microbiol 63: 1298-1306.
7. Isci A, Himmelsbach JN, Pometto AL 3rd, Raman DR, Anex RP (2008) Aqueous ammonia soaking of switchgrass followed by simultaneous saccharification and fermentation. Appl Biochem Biotechnol 144: 69-77.
8. Mach RL, Zeilinger S (2003) Regulation of gene expression in industrial fungi Trichoderma. Appl Microbiol Biotechnol 60: 515-522.
9. Martinelli LA, Filoso S (2008) Expansion of sugarcane ethanol production in Brazil: environmental and social challenges. Ecol Appl 18: 885-898.
10. Merino ST, Cherry J (2007) Progress and challenges in enzyme development for biomass utilization. Adv Biochem Eng Biotechnol 108: 95-120.
11. Miller GL (1959) Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem 31: 426-428.
12. Nevoigt E (2008) Progress in metabolic engineering of Saccharomyces cerevisiae. Microbiol Mol Biol Rev 72: 379-412.
13. Olofsson K, Bertilsson M, Lidén G (2008) A short review on SSF - an interesting process option for ethanol production from lignocellulosic feedstocks. Biotechnol Biofuels 1: 7.
14. Schmer MR, Vogel KP, Mitchel RB, Perrin RK (2008) Net energy of cellulosic ethanol from switchgrass. Proc Natl Acad Sci (USA) 105: 464-469.
15. Schoemaker HE, Piontek K (1996) On the interaction of lignin peroxidase with lignin. Pure & Appl Chem 68: 2089-2096.
16. Shao X, Lynd L, Wyman C (2008) Kinetic modeling of cellulosic biomass to ethanol via simultaneous saccharification and fermentation: Part II. Experimental validation using waste paper sludge and anticipation of CFD analysis. Biotechnol Bioeng, online publ 15 Jul.
17. Singhania RR, Sukumaran RK, Pandey A (2007) Improved cellulase production by Trichoderma reesei RUT-C30 under SSF through process optimization. Appl Biochem Biotechnol 142: 60-70.
18. St. Michaels L, Blanch HW (1981) Ethanol Production from Non-Grain Feedstocks. Acta Biotechnologica 1: 351-364.

19. Szczodrak J, Targonski Z (1989) Simultaneous Saccharification and Fermentation of Cellulose: Effect of Ethanol and Cellulases on Particular Stages. Acta Biotechnol 6: 555-564.
20. Toyama H, Toyama N (1999) Construction of cellulase hyperproducing strains derived from polyploids of Trichoderma reesei. Microbios 100: 7-18.
21. Vásquez MP, da Silva JN, de Souza MB Jr., Pereira N Jr. (2007) Enzymatic hydrolysis optimization to ethanol production by simultaneous saccharification and fermentation. Appl Biochem Biotechnol 137: 141-153.
22. Vertès AA, Inui M, Yukawa H (2008) Technological options for biological fuel ethanol. J Mol Microbiol Biotechnol 15: 16-30.
23. Williams MB, Reese HD (1950) Colorimetric determination of ethyl alcohol. Anal Chem 22: 1556-1561.
24. Winston F (2008) EMS and UV mutagenesis in yeast. Curr Protoc Mol Biol, Chapter 13: Unit 13.3B.



