Broad Street Scientific 2012-2013


Volume 2 | 2012-2013

The North Carolina School of Science and Mathematics Journal of Student STEM Research

Table of Contents

Broad Street Scientific Staff

A Letter from the Chancellor

A Photosynthetic City: Combining Nature with the Urban Environment

Emmanuel Assa, 2013

Effects of Climate Change on Agriculture

Matias Horst, 2014

Vivek Pisharody, 2014

A Novel Design of Electrode Surface Morphology to Improve Water Electrolysis Efficiency

Jaehyeong Lee, 2013

Multilevel Distance Labeling - A Wireless Network Problem

Tian-Shun Allan Jiang, 2014

The Effect of Substrate Density on the Rate of Migration of NIH3T3 Fibroblasts

Elizabeth Tsui, 2013

Christie Jiang, 2013

Chitosan-Modified Cellulose as Adsorbent to Collect and Reuse Nitrate from Groundwater

Generation of Electricity from the Wind Draft of Cars

Harish Pudukodu, 2013

Shocking Discoveries: The Applications and Putative Mechanisms of the Effects of Electric and Magnetic Fields on Plants

Ian Maynor, 2013

Halobacterium: Mechanisms of Extreme Survival as a Solution to Waste

Isaiah Stackleather, 2013

Alzheimer’s Disease: Current Therapies and Emerging Research

Kanan Shah, 2014

Vivek Pisharody, 2014

Intervertebral Discs and Their Interactions with Different Environments

Jin Yoon, 2013

Effect of Backpack Load on Gait Parameters

Alice Li, 2014

An Interview with Dr. Robert Lefkowitz

Words from the Editors

Welcome to the Broad Street Scientific: NCSSM’s journal of student research in science, technology, engineering, and mathematics. In this second edition of the Broad Street Scientific, we aim not only to showcase student research, but also to increase public awareness of the importance of student scientific participation by demonstrating the scientific aptitude of our students to readers both in and outside of the NCSSM community. We hope you enjoy this year’s issue.

The editors have chosen the theme of astrophysics, a fundamental observational science concerned with the study of astronomical objects. We appreciate The Hubble Key Project Team (NASA) for the cover photo, an image of supernova 1994D in the galaxy NGC 4526, and The Hubble Heritage Team (NASA) for the back cover photo, an image of the Ring Nebula, a planetary nebula. The edition is sectioned by type of manuscript: papers, literature reviews, essays, and an interview. Each section has a different sidebar image of the sun. There are green, orange-red, blue, and yellow images; each color corresponds to images taken with filters of a different wavelength. The different colors depict different atmospheric temperatures of the sun, as hotter regions of the atmosphere emit more blue light and cooler regions emit more red light. The back pages of the issue show the Hubble Space Telescope, which has for two decades been one of the most important instruments in astronomy.

The editors would like to thank the administration, faculty, and staff of NCSSM for the opportunity to pursue our research goals in any of the science, technology, engineering or mathematics fields. The support for student research at this school is unparalleled by any other high school in the state, and the student body would like to recognize the significance of such an investment in our, and the state’s, future. We would like to specifically thank our faculty advisor, Dr. Jonathan Bennett, for his advice and guidance through the second edition of the Broad Street Scientific. We would also like to thank our Chancellor, Dr. Roberts, for his active support of this publication. The Broad Street Scientific would like to thank Hun Wong and Pranav Maddi, last year’s chief editors, for their helpful recommendations and also Navina Venugopal, who assisted with the art and cover design. Lastly, the Broad Street Scientific is extremely grateful to Dr. Robert Lefkowitz for his participation in this journal and insight for the next generation of scientists.

Broad Street Scientific Staff

Chief Editors

Publication Editors

Biology Editors

Physics Editors

Chemistry Editors

Engineering Editor

Math and Computer Science Editor

Webmaster

Faculty Advisor

Halston Lim, 2013 Tejas Sundaresan, 2013

Vincent Cahill, 2013

Addie Jackson, 2014

Andrew Peterson, 2014

Anita Simha, 2013 Wey-Wey Su, 2013

Adam Beyer, 2014 Katherine Whang, 2013

Jason Liang, 2013 Jessica Lee, 2014

William Ge, 2013 Parth Thakker, 2014 Christopher Yuan, 2014

Madeline Finnegan, 2014

Kavi Jain, 2014

Kyle Elmore, 2013

Dr. Jonathan Bennett

Letter from the Chancellor

“Equipped with his five senses, man explores the universe around him and calls the adventure Science.”
~ Edwin Hubble

I am proud to introduce the second edition of the North Carolina School of Science and Mathematics (NCSSM) scientific journal, Broad Street Scientific. Each year students at NCSSM conduct significant scientific research and Broad Street Scientific is a showcase of some of the best research being done by students at NCSSM.

Providing students with opportunities to apply their learning through research is not only vitally important in preparing and exciting students to pursue STEM degrees and careers after high school, but essential to encouraging innovative thinking that allows students to scientifically address major challenges and problems we face in the world today and will face in the future. Opened in 1980, NCSSM was the nation’s first public residential high school where students study a specialized curriculum emphasizing science and mathematics. Teaching students to do research and providing them with opportunities to conduct high-level research in biology, chemistry, physics, the applied sciences, math, and the social sciences is a critical component of NCSSM’s mission to educate academically talented students to become state, national and global leaders in science, technology, engineering and mathematics. NCSSM continues to expand real world opportunities for students through our research and mentorship programs. Over the past two years we have doubled the number of these opportunities and look forward to continuing to provide our students with the type of experiences that lead to the outstanding learning reflected in Broad Street Scientific.

The research showcased in this publication is an example of the significant research that students conduct each year at NCSSM under the direction of the outstanding faculty at our school and in collaboration with researchers at major universities. For twenty-seven years NCSSM has showcased student research through our annual Research Symposium each spring and at major research competitions such as the Siemens Competition in Math, Science and Technology, the Intel Science Talent Search, Toshiba Exploravision, and the International Science and Engineering Fair to name a few. The publication of Broad Street Scientific provides another opportunity to highlight the outstanding research being conducted by students each year at the North Carolina School of Science and Mathematics.

I would like to thank all of the students and faculty involved in producing Broad Street Scientific, particularly faculty sponsor Dr. Jonathan Bennett and senior editors Tejas Sundaresan and Halston Lim. Explore and Enjoy!

Sincerely,

A Photosynthetic City: Combining Nature with the Urban Environment

Emmanuel Assa was selected as the winner of the 2012-2013 Broad Street Scientific Essay Contest. His award included the opportunity to interview Dr. Lefkowitz as part of the Interview section of the journal.

Modern cities are not perfect. As they expand, they become centers for human civilization, but also centers for civilization’s greatest problems. As more and more of the human population moves into the city, the space we need increases, causing severe urban sprawl [1]. Urban sprawl creates the Urban Heat Island Effect, along with air and water pollution [1, 2]. The Urban Heat Island Effect, in turn, increases the cost of living, reduces a city’s comfort level, and can have detrimental impacts on a city’s surrounding environment [1]. To improve this situation, many researchers have proposed the introduction of vegetation into the urban environment [2]. This practice will diminish pollution, reduce the Urban Heat Island, and even produce a profit for the city [2]. Integrating nature into the urban environment will produce better, cleaner, and more cost-effective cities.

For any city, pollution can be a major problem, especially in the air. Smog is a common phenomenon around large cities. Smog, a noxious combination of smoke and fog, contains chemicals such as sulfur dioxide and nitrogen oxides, key compounds in acid rain formation [2]. These chemicals are both harmful to the lungs and odorous, lowering a city’s residential appeal. Introducing plants into this environment would solve the problem almost immediately. Many types of vegetation absorb those harmful airborne chemicals, preventing them from forming acid rain [2]. These plants would effectively clean the surrounding air, lowering the chance of acid rain and improving the health of the city residents.

The Urban Heat Island is one of the most-studied effects of current urban sprawl, and the key to the problem lies in albedo, the relative reflectivity of a material [3]. The higher an object’s albedo, the more light it reflects. Concrete and asphalt, the primary modern construction materials, have very low albedo [3]. This means that during the day, the concrete and asphalt in cities absorb radiation from sunlight, and then release it at night as heat [3]. This creates a heat “bubble” around a city. Because of the increased temperature, air conditioning systems in every building inside this “bubble” must consume more energy to maintain a comfortable interior temperature. More energy consumed means more energy bought from the power companies, which means a higher maintenance cost [1]. This property of the Urban Heat Island is the most economically threatening [3]. The solution is again to introduce vegetation into the urban environment. The leaves of a plant have a much higher albedo than pavement or concrete, and plants release only a small amount of infrared radiation in the form of heat [2]. The more surface area of a city occupied by plants, the less severe the Urban Heat Island. Through this effect, planting trees and other vegetation can reduce the costs of building maintenance within a city.

The other economic benefit of integrating nature with the urban environment comes from the visual appeal of a city. A better-looking city can charge more for building space and property taxes, supplementing the city’s economy. Vegetation makes a city more pleasant to live in because it provides color against the normal grays and blacks of concrete and asphalt, and the shade it provides during the warmer months makes the city a more attractive place to live.

Green roofs are one possible way of integrating nature with the urban environment. A green roof is essentially a patch of vegetation covering the top of a building. It increases the albedo of the building (lowering the Urban Heat Island Effect), and the vegetation can be drought-tolerant, minimizing the amount of water the owner would need to use to maintain it [4]. However, this limits the selection of plants available for the roof. Green roofs can be made to be aesthetic as well as functional; tropical or exotic vegetation can be added if the area is intended for recreation [4]. The caveats of aesthetic green roofs are that extra structural support is needed to hold up the extra weight, and exotic plants may require additional maintenance [4]. Both types of green roofs are capable of increasing a roof membrane’s longevity, improving a building’s sound insulation, reducing a building’s energy costs, and reducing rainwater runoff [4]. Plain green roofs are cheap and easy to install, but aesthetic green roofs require planning in advance so that additional supports can be set up within the building shell. The typical cost of a green roof is only around $100 to $300 per square meter. Green roofs are an easy and efficient way to incorporate nature into the city.

Many of the problems caused by urban sprawl can be either reduced or eradicated by introducing vegetation into the urban environment. All at once, it can improve


air quality, reduce the cost of living in an urban area, and create a beautiful cityscape. Implementing this concept on a city-wide scale also requires very little capital. By maintaining a careful balance between nature and human constructions, we can accommodate the increasing urban population with ease.

References

[1] Golden, J. S. (2004). The built environment induced urban heat island effect in rapidly urbanizing arid regions – a sustainable urban engineering complexity. Environmental Sciences, 1(4), 321-349

[2] Manning, W. J. (2008). Plants in urban ecosystems: Essential role of urban forests in urban metabolism and succession toward sustainability. International Journal of Sustainable Development and World Ecology, 15(4), 362-370. http://search.proquest.com/docview/197928423?accountid=12723

[3] Hecht, A., Fiksel, J., Fulton, S., Yosie, T., Hawkins, N., Leuenberger, H., Golden, J., & Lovejoy, T. (2012). Rejoinder: Creating the future we want. Sustainability: Science, Practice, & Policy, 8(2). Published online Apr 20, 2012. http://www.google.com/archives/vol8iss2/1203-002.rejoinder.html

[4] Oberndorfer, E., Lundholm, J., Bass, B., Coffman, R. R., et al. (2007). Green roofs as urban ecosystems: Ecological structures, functions, and services. Bioscience, 57(10), 823-833.

Effects of Climate Change on Agriculture

Matias Horst and Vivek Pisharody

Without a doubt, global climate change presents a serious threat to agricultural productivity. Current data indicate that immediate, directed action is necessary to protect world food security. Unfortunately, political conflicts regarding climate change have hindered the development of solutions to these issues. However, there are numerous innovative methods that have been proposed to combat the negative impacts of climate change on agriculture regardless of international unwillingness to address the problem itself.

Agriculture is perhaps the single human activity most closely tied to climate. However, evaluating the impact of global climate change on agriculture presents a difficulty in that while climate change occurs on the global scale, impacts on agriculture occur at the local level, with considerable variation between different regions. In their analysis, Kurukulasuriya and Rosenthal predict a modest net decrease in world agricultural output. Decreased yields in some regions will slightly outweigh productivity gains in other regions. However, the real threat to world food security arises not from this net decrease, but from the distribution of climate related effects on agriculture [1]. The true challenge of dealing with climate change’s effects on agriculture lies in tailoring unique solutions to specific regions and their respective climates.

In many areas, climate change has already reduced agricultural yields. As global temperatures rise, meltwater from mountain and Antarctic glaciers has caused an increase in sea level, threatening to engulf and destroy productive fields in low-lying areas. While climate change in some regions of the world may reduce yields through flooding, other regions are rapidly losing arable land because of severe drought. In the tropics and subtropics, rainfall levels are dropping, and droughts have increased in duration, decreasing crop yields [2]. In highland regions, frost damage due to increased CO2 concentrations has similarly impacted production.

However, in certain regions, agricultural productivity may actually rise. In high latitudes, lengthened growing seasons can augment agricultural productivity. Similarly, at high altitudes, higher temperatures may make more land suitable for farming [1]. Furthermore, increased concentrations of CO2 can make water use and photosynthesis more efficient.

The simplest method of adapting to changes in specific environments is modification of current farming techniques. In semi-arid regions, increasing rates of desertification have disrupted local ecosystems. Reduced rainfall, coupled with topsoil erosion due to wind, has reduced agricultural yields in the Middle East and sub-Saharan Africa [3]. A group of leading Israeli scientists has generated mathematical models to optimize vegetation coverage of sand dunes. Changing surface cover can modify local microclimates by affecting wind speed, surface humidity, and absorbed radiation levels. Techniques aimed at reducing grazing stress can halve the number of mobile dunes, decreasing exposed sand surface area and thereby facilitating local botanic agriculture and increasing local water levels. The study also revealed that, in areas where precipitation is sufficient, breaking up moss and bacteria layers on the soil can induce vegetation growth and reverse local desertification [3].

In temperate environments, heat and frost damage are major concerns. If plants mature too early, they will be susceptible to damage from summer temperature peaks and associated dehydration. Furthermore, plants may yield crops earlier in the year as a result of heat stress. As spring temperatures rise, seedlings begin to emerge prior to the last frost. Increasing frost damage presents a serious challenge to agriculture in the same extreme latitudes and higher altitudes in which climate change is expected to increase yields [4].

A method that has been useful in combating both of these issues is genetic engineering. The responses of plants to stress can be strengthened by amplifying the chemical signals between the chloroplasts or mitochondria, the organelles that most rapidly detect stress, and the nucleus. Scientists have discovered epigenetic procedures to artificially induce early crop yields as a means of adapting to shorter growing seasons. Gene splicing techniques using small fragments of RNA can also be used to influence flowering time; if flowering time is delayed, then most frost damage may be avoided. The transfer of genes from one species to another, transgenics, proves to be an opportunity to adapt the environmental strengths of some species to the conditions that other environments develop as a result of climate change. Heat, cold, and even salinity resistance can be provided by certain combinations of DNA [4].

A prominent example of a recombinant organism designed to combat frost damage is the frost-resistant strawberry grown throughout North Carolina. By inserting genes from the winter flounder, which produces anti-freeze compounds to survive in frigid waters, into the genome of a common strawberry cultivar, a strawberry highly resistant to frost was developed [5].

Over the past several decades, agricultural practices have become increasingly homogeneous, while environments have become increasingly fractured and diversified. As traditional agricultural practices are overturned in favor of new methods and heirloom seeds are discarded in favor of a few high-yield varieties, there is a severe risk of losing biodiversity. This potential loss of biodiversity represents a serious threat to future food security by constricting agriculture to a few popular, widespread species, an especially dangerous issue at a time when environmental stresses are diversifying. Additionally, lost biodiversity can reduce the potential of genetic engineering by reducing the availability of genes for transfer. In response, numerous seed banks exist throughout the world, the most prominent of which is in Svalbard, Norway and contains 775,000 samples from 231 countries stored at -18ºC [6].

Solving the issue of anthropogenic climate change by addressing the root cause – CO2 emissions – has been hindered by challenging economic and political issues outside the scope of science. Despite these challenges, it is possible to face the problems caused by climate change through innovative scientific solutions. The impacts of climate change are diverse, and range from devastating to beneficial. By addressing these issues within the context of their local environments, scientists can mitigate problems and take advantage of new opportunities created by different environmental conditions.

References

[1] Kurukulasuriya, P., & Rosenthal, S. (2003). Climate Change and Agriculture.

[2] World Meteorological Organization. (n.d.). Climate change and desertification.

[3] Kinast, S., Meron, E., Yizhaq, H., & Ashkenazy, Y. (2012). Biogenic crust dynamics on sand dunes. Biological Physics; Geophysics.

[4] Mittler, R., & Blumwald, E. (2010). Genetic engineering for modern agriculture: challenges and perspectives. Annual review of plant biology, 61, 443–62.

[5] Firsov, A. P., & Dolgov, S. V. (1998). Agrobacterial transformation and transfer of the antifreeze protein gene of winter flounder to the strawberry.

[6] Ministry of Agriculture and Food (Norway). (2007, April 3). Svalbard Global Seed Vault. regjeringen.no.

A Novel Design of Electrode Surface Morphology to Improve Water Electrolysis Efficiency

ABSTRACT

A new surface morphology was proposed in this study to optimize the efficiency of water electrolysis. Past studies have shown that reducing particle size is less efficacious in improving electrolysis efficiency than modifying surface morphology. Using Ni metal and a specified pattern thickness, along with a novel film pattern size, the design proposed in this study has ~13.4% more effective surface area than a simple pattern with straight side walls. To realize the proposed surface morphology, photoresist-patterned Ni electroplating was used. The surface morphology of the photoresist and resulting plated Ni film were confirmed by a scanning electron microscope (SEM). To improve the accuracy of the measurement, the Kelvin probe method was used with a specially designed sample holder to reduce the effect of contact resistivity and external resistance of the system. For the Ni electrode test, Ag/AgCl in 4 M KCl solution was used as a reference electrode and Pt was used as a counter electrode. For quantitative analysis of the surface area effect, sputtered Ni film was tested with Teflon tape as a masking material to define the active area of the film. The test system was observed to accurately detect the effect of bubble accumulation on the film surface with a narrow, trench-like opening. The voltammograms were analyzed using a modified Butler-Volmer equation with series resistance. A data analysis program was written to find resistance, rs (Ω), exchange current density, J0 (Amps/cm2), and the charge transfer coefficient, α. This new analysis method was compared to a conventional method from literature in order to ensure validity. The results showed that, using the proposed surface morphology modification, the series resistance decreased 20.4% from its “expected” value, which translates into a 25.6% increase in efficiency at a given bias voltage.

Introduction

Today, the majority of the world runs on a hydrocarbon-based fuel supply. The source of this energy, however, is fossil fuels, energy-rich hydrocarbons that lie dormant under select regions of the Earth. Though energy efficiency is high for fossil fuels, the carbon emissions cause many environmental concerns. In addition, sustainable forms of energy are currently not cost-competitive with fossil fuels: 2006 data from the Henry Hub natural gas distribution system and from the Norwegian University of Science and Technology (NTNU) confirm the relatively high cost efficiency of natural gas, as the cost per million BTU of natural gas in December was around $6.734/Mil. BTU, while NTNU reported the cost to produce electrolytic hydrogen at $0.10/kWh (or $29.31/Mil. BTU) [1]. Therefore, research in improving the efficiency of sustainable energy production, such as water electrolysis, is of critical importance.
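The quoted $/Mil. BTU figure for electrolytic hydrogen follows from a standard unit conversion (1 kWh ≈ 3412 BTU); a quick check of the arithmetic:

```python
# Convert the NTNU electrolytic-hydrogen cost from $/kWh to $/Mil. BTU.
BTU_PER_KWH = 3412.14  # standard conversion factor

cost_per_kwh = 0.10  # $/kWh, the NTNU 2006 figure quoted in the text
kwh_per_million_btu = 1e6 / BTU_PER_KWH          # ~293 kWh per Mil. BTU
cost_per_million_btu = cost_per_kwh * kwh_per_million_btu

print(f"${cost_per_million_btu:.2f}/Mil. BTU")   # ~$29.31, matching the text
```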

There are two methods to improve the efficiency of water electrolysis: employ a metal with greater catalytic properties or increase the effective surface area of the electrode. Research on the catalytic properties of non-noble metals in water electrolysis has been undertaken for over two centuries in search of cheaper metals with greater catalytic effects. Non-noble metals investigated include cobalt, manganese, nickel, iron, copper, chromium, vanadium, and their alloys and oxides [2-9].

More recently, different electrode surface modifying techniques have been designed and tested to improve the hydrogen generation efficiency of water electrolysis. These techniques include building nanowire arrays, nanoparticles, nanocrystals, nanotubes, nanoholes, structures with micropores, and three-dimensional dendrite formation structures using high-current electroplating methods [10-15]. Recent advances in surface morphology modification focus on nanowire and nanotube growth because the effective surface area of an electrode greatly increases with increasing aspect ratio between the height of the structure and its cross-sectional area. However, those publications focus mostly on fabrication methods for unique nanostructures rather than on a rigorous analysis of the efficiency of the electrolysis or a quantitative analysis of the impact of the aspect ratio on the surface area [10-15].

This paper will quantitatively analyze the effective surface area as a function of aspect ratio. Based on this calculation, it was found that the aspect ratio had a greater impact on the efficiency than individual particle size. The primary focus of this research is to find a simple way to increase the surface area even further at a given aspect ratio. A new electrode surface morphology involving curved sidewalls to make a structure with a protruding top, or mushroom top, was designed and tested; it further increased the effective surface area of an electrode compared to the straight-sidewall structure at the same aspect ratio. In particular, the method used to produce the structure was economically viable, using existing technologies such as photoresist photolithography and electroplating. This apparently simple and easy method to produce electrodes for more effective water electrolysis has not been tried before, based on an extensive literature search undertaken through an online library covering over 500 published materials from the last 60 years.

Background Theory

Cyclic voltammetry was used to collect the current densities of an electrolysis system consisting of cathode and anode electrodes over a range of voltages with a defined rate of voltage change per unit time. The current-voltage characteristics obtained from cyclic voltammetry are called voltammograms.

To analyze the curve of the voltammograms, a Butler-Volmer equation with an Ohmic limiting resistance component was used [44].

According to the Nernst equation, the standard electrode potentials of the cathode (Ec0) and anode (Ea0) can be expressed as functions of the pH of a solution and the partial pressures of oxygen (pO2) and hydrogen (pH2), as shown in equations (1) and (2) [44].

The atmospheric partial pressures of 0.2095 atm for oxygen and 5×10−5 atm for hydrogen were used to calculate the standard electrode potentials [44].

At room temperature, pH can be calculated using the Sorensen equation [44], in which [OH-] is the concentration of hydroxide, OH-, in mole/liter.
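The equations cited above as (1) and (2), and the Sorensen equation, can be written in their standard room-temperature forms (base-10 Nernst slope of 0.0592 V) as below. This is a reconstruction from the surrounding description; the authors' exact notation and numbering may differ.

```latex
% Reconstructed standard forms (an assumption, not the authors' originals):
E_c^0 = -0.0592\,\mathrm{pH} - \frac{0.0592}{2}\,\log_{10} p_{\mathrm{H_2}} \qquad (1)

E_a^0 = 1.229 - 0.0592\,\mathrm{pH} + \frac{0.0592}{4}\,\log_{10} p_{\mathrm{O_2}} \qquad (2)

\mathrm{pH} = 14 + \log_{10}[\mathrm{OH^-}] \qquad (3)
```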

Table 1 shows the calculated values of pH, Ec0, and Ea0 for the KOH concentrations used in this experiment.

Table 1. Calculated values of pH, Ec0 and Ea0 at room temperature as a function of KOH concentration
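The entries of Table 1 can be recomputed from the relations described in the text. A minimal sketch, assuming the standard base-10 Nernst forms, the stated partial pressures, and illustrative KOH concentrations (the exact concentrations used in the experiment are not listed in this extract):

```python
import math

# Stated partial pressures and room-temperature Nernst slope.
P_O2, P_H2 = 0.2095, 5e-5   # atm
SLOPE = 0.0592              # V per pH unit at ~25 C

def half_cell_potentials(koh_molarity):
    """pH via the Sorensen equation, then Ec0 (H2 cathode) and Ea0 (O2 anode)."""
    ph = 14 + math.log10(koh_molarity)                        # Sorensen equation
    ec0 = -SLOPE * ph - (SLOPE / 2) * math.log10(P_H2)        # cathode potential
    ea0 = 1.229 - SLOPE * ph + (SLOPE / 4) * math.log10(P_O2) # anode potential
    return ph, ec0, ea0

# Illustrative concentrations; swap in the experimental values from Table 1.
for c in (0.1, 1.0):
    ph, ec0, ea0 = half_cell_potentials(c)
    print(f"{c} M KOH: pH = {ph:.2f}, Ec0 = {ec0:.3f} V, Ea0 = {ea0:.3f} V")
```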

The voltammograms from water electrolysis were analyzed using the Butler-Volmer equation and Ohm’s law [44].

At high E–Ec0, the Ohmic term of equation (4) dominates, and at low E–Ec0, the Butler-Volmer term dominates. Therefore, the data can be analyzed separately in two different regions.

Table 2. The definitions of the symbols used in equations 4a and 4b

The inconsistency in the aforementioned modified Butler-Volmer fit is that it regards the resistive and Butler-Volmer term dominant portions as independent entities, whereas in reality the two factors are related. To address this inconsistency, a new modified Butler-Volmer fit was proposed. If the resistance is assumed to be rc, then the voltage drop caused by rc is Ic × rc, where Ic is the total current in the system.

This must be subtracted from E to find the true potential. Therefore, equation 4(a) can be expressed as below without separating it into two voltage regions.

Then, to isolate the E – Ec0 term on one side, the exponent must be removed.

*These equations can substitute Ja for Jc, 1–αa for αc, and Ea0 for Ec0 to produce the formula related to the anode.

With this final equation, it was possible to fit the voltammogram with a single curve. To find the desired rc, αc, and Jc0 values, a Visual Basic (VB) program was written, employing a search algorithm to minimize the discrepancy between the measured data and the data calculated from equation 5 within the range of parameters.
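The original Visual Basic program is not reproduced here; the following Python sketch illustrates the same idea of a brute-force search for (rc, αc, Jc0). The model form, sign conventions, and search ranges are assumptions: the cathode voltage is taken as a Tafel-form kinetic overpotential plus a series Ohmic drop.

```python
import math

# Assumed model: E = Ec0 - (kT/(alpha*q))*ln(J/J0) - J*A*r, i.e. a Tafel
# kinetic term plus a series Ohmic drop. Constants are illustrative.
KT_OVER_Q = 0.0257   # thermal voltage at room temperature, volts
EC0 = -0.70          # illustrative standard cathode potential, volts
AREA = 1.0           # electrode area, cm^2

def model_voltage(j, r, alpha, j0):
    return EC0 - (KT_OVER_Q / alpha) * math.log(j / j0) - j * AREA * r

def fit(data):
    """Grid search over (r, alpha, j0) minimizing squared voltage error."""
    best, best_err = None, float("inf")
    for r in [x * 0.5 for x in range(1, 21)]:            # 0.5 .. 10 ohm
        for alpha in [x / 20 for x in range(2, 21)]:     # 0.1 .. 1.0
            for j0 in [10 ** -e for e in range(2, 8)]:   # 1e-2 .. 1e-7 A/cm^2
                err = sum((model_voltage(j, r, alpha, j0) - v) ** 2
                          for j, v in data)
                if err < best_err:
                    best, best_err = (r, alpha, j0), err
    return best

# Synthetic "measured" voltammogram generated with known parameters.
true_params = (2.0, 0.5, 1e-4)
data = [(j, model_voltage(j, *true_params))
        for j in (0.001, 0.005, 0.01, 0.05, 0.1)]
print(fit(data))  # recovers (2.0, 0.5, 0.0001) on this noise-free data
```

A real fit would replace the synthetic data with measured (J, E) pairs and refine the grid (or use a proper optimizer) around the best coarse candidate.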

Novel Design in Surface Morphology

Many international energy researchers have attempted to make electrode surfaces using nanomaterials. To assess the impact of those nanoparticles on surface area, it is necessary to calculate the surface area as a function of particle size. The equation for the surface area of the electrode for a square pattern, given the distance between two squares (a), the length of one side of the square (d), and a height (h), is stated thus:

If the aspect ratio is given, the shape of the sidewall can further increase the effective surface area. The underlying goal of this research, then, was to synthesize mushroom-top shaped structures atop electrodes. Figure 3 shows the impact of the sidewall shape on the effective surface area. It is apparent that the mushroom-top shape has the greatest impact. This structure can be realized with a relatively simple and economically viable method using photoresist patterning and electroplating.

In figure 1, the area in equation (8) is plotted with the height (h) set to the smaller of the dimensions d and a, which describes the resolution of the patterning technique and, therefore, the maximum height that can be attained for the structure. By this definition, the aspect ratio is kept constant, i.e. as pattern size decreases, thickness decreases, which is characteristic of electrode surfaces made using nanoparticles of various particle sizes. Surprisingly, there is no additional advantage in reducing pattern resolution, which corresponds to the particle size. However, if the height of the pattern is held constant and the lengths of the sides of the structures are reduced (essentially increasing the aspect ratio), the surface area increase becomes significantly greater, as shown in figure 2. As shown, it is apparent that within a given thickness, smaller and closer-spaced patterns yield greater effective surface areas.
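Equation (8) itself did not survive in this extract; the geometry described suggests a per-unit-cell relative area of 1 + 4dh/(a+d)², since the flat area of a cell of period (a+d) is (a+d)² and the four sidewalls of a raised square add 4dh. The exact prefactors are an assumption, but the scaling reproduces both observations above:

```python
# Assumed reconstruction of the square-pattern surface-area relation:
# relative area = 1 + 4*d*h / (a + d)**2, for side length d, spacing a,
# and height h. Prefactors are an assumption based on the described geometry.
def relative_area(d, a, h):
    return 1 + 4 * d * h / (a + d) ** 2

# Fixed aspect ratio (h scales with feature size): the ratio is
# scale-invariant, so shrinking the pattern gains nothing (figure 1).
print(relative_area(d=10, a=10, h=10))   # 2.0
print(relative_area(d=1, a=1, h=1))      # 2.0 (identical)

# Fixed height h = 100 um while shrinking d and a (increasing aspect
# ratio): the gain grows sharply (figure 2).
print(relative_area(d=10, a=10, h=100))  # 11.0
print(relative_area(d=1, a=1, h=100))    # 101.0
```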


Figure 3. Schematic diagram of photo resist profile and plated film. (a) Pyramid shape structure. Additional area is A-B per side (b) Mushroom top shape structure. Additional area is A+B per side (c) Standard square structure. Additional area is A per side.

Figure 4 shows how much additional area can be gained by having mushroom-top surface morphology. The calculation was based on assumptions of a film thickness of 6 um, a sidewall angle of 45 degrees, and a spacing between square patterns of 75 um. With these specifications, the mushroom-top surface morphology is shown to produce 13.4% more surface area compared to a straight sidewall with an aspect ratio of 0.06.

Figure 4. The calculated additional surface area from square patterns with a mushroom top with 45 degree angle and a straight sidewall. The thickness of the film was assumed to be 6 um. The spacing between the patterns was fixed at 75 um.

Figure 2. Surface area with constant thickness as a function of length and spacing between square patterns. Thickness was set at 100 um.

Experimental Design

An alkaline solution of potassium hydroxide (KOH) was used as the electrolyte in the experiment. Platinum was used as a counter electrode, and Pt or Ag/AgCl in 4 M KCl were used as reference electrodes: Pt for its high resistance to corrosion, and Ag/AgCl in 4 M KCl for its stable standard electrode potential across various electrolyte concentrations and system temperatures. Keithley current and voltage measurement devices and an HP current/voltage power supply were connected to the computer via a GPIB connection, and the instruments were controlled remotely using the VEE (Virtual Engineering Environment) graphical user interface programming language. The electrochemical measurement was cyclic voltammetry, where the independent variables were the current or voltage bias and the scan speed, and the dependent variables were the system voltage for current bias and the system current for voltage bias of each half cell.

Electroplating System

The two types of plating tested were electroless plating and electroplating. The plating solutions were bought from Caswell Plating Inc. and were used under NCSSM's lab hood to maintain air circulation. The metal film used as a seed layer was a sputter-coated Ni film, 80 nm thick, on 100 mm diameter glass wafers.

Electroplating has two advantages over electroless plating. With electroplating, the film thickness is simpler to control because it can be set accurately by current density and plating time. Electroplating is also done at a much lower temperature, which is much safer for the photoresist. Therefore, after both methods were tested, electroplating was selected as the film deposition method for the experiment. Table 3 shows the characteristics of electroplating and electroless plating.

Table 3. Comparison between Electroplating and Electroless Plating in certain categories.

A Ni metal sheet was used as the anode, and a mount was made to hold the anode and Ni wafer in place in the solution, as shown in figure 5. Springs covered with heat-shrink tubing held the Ni wafers to be plated in place, and the anode was mounted onto the acrylic base plate with screws.

Figure 5. (a) Electroplating set up w/Keithley 220 power supply, multimeter (voltage measurement), hot plate and thermometer. Wafer holder/anode is visible inside plating solution. (b) Ni sheet metal for anode and Ni film on a glass wafer for plating.

Several trials were required to find an optimal electroplating condition. The final procedure used 1 cm by 3 cm Ni metal slices plated for 30 minutes at 43.3° C with 14 mA of current bias (from the Caswell plating manual), yielding a plate thickness of 2 um. Figure 6 shows an SEM image of a Ni film plated under these conditions.
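These plating parameters can be sanity-checked with Faraday's law of electrolysis; this is a generic estimate, not a calculation from the paper, and it assumes the full 1 cm x 3 cm slice area and Ni²⁺ reduction:

```python
def plated_thickness_um(I_A, t_s, area_cm2, M=58.69, n=2, rho=8.908):
    """Ideal Ni plate thickness from Faraday's law, assuming 100% current
    efficiency. M: molar mass of Ni (g/mol); n: electrons per Ni2+ ion;
    rho: density of Ni (g/cm3)."""
    F = 96485.0  # Faraday constant, C/mol
    thickness_cm = (I_A * t_s * M) / (n * F * rho * area_cm2)
    return thickness_cm * 1e4  # cm -> um

# 14 mA for 30 minutes over a ~3 cm2 slice (1 cm x 3 cm):
t_ideal = plated_thickness_um(0.014, 30 * 60, 3.0)   # ~2.9 um
```

The ideal value of ~2.9 um is consistent with the observed ~2 um if the current efficiency of the bath is somewhat below 100%, which is plausible for commercial Ni plating solutions.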

Figure 6. SEM image of ~2 um plate thickness with 14 mA applied current for 30 minute plate time.

Photoresist Patterning Procedure

Photoresists are nonconductive, photosensitive polymer films that can be developed to create patterns on a surface. Typical photoresists are spin-coated onto surfaces and come in two variations: negative films and positive films. A negative photoresist develops as the opposite of the mask that covers it during light exposure; essentially, wherever the mask is dark, the photoresist underneath will be washed away by the developer solution during developing [45].

Depending on the photoresist material, the contact method during UV exposure, the developing time, and the heat treatment conditions, the sidewalls of patterned photoresists can take any of the three shapes shown earlier in figure 3. During electroplating, Ni metal deposits only onto the conductive surfaces in contact with the solution, so the Ni film naturally conforms to the morphology of the patterned photoresist.

The photomasks with 100-700 um square patterns were designed using CAD software and laser printed on an acrylic sheet by CAD/Art Services, Inc., a photoresist mask printing service provider.

AZ2070 is a negative photoresist that comes in liquid form. It was spin-coated and heat-treated to solidify for exposure and developing. After the photoresist was poured, the wafer was spun at 2000 rpm for 30 seconds on a vacuum chuck, then baked at 100° C for 1 minute to achieve a photoresist thickness of 9 um. For AZ2070, the developing process had three parts: exposure to 350-400 nm peak UV light through a patterned mask, heat treatment, then developing with MIF300 to remove the unexposed photoresist and reveal the design. After multiple trials, an optimal developing condition for a positively sloped sidewall was found: 2.5 minutes of UV exposure, 1.5 minutes of heat treatment at 120° C, then 1.5 minutes of developing in MIF300. A cross-sectional image of the photoresist on the Ni film and the resulting electroplated Ni film is shown in figure 7.

The additional area from the sidewall shape of the plated Ni films, measured from the SEM cross section, was calculated to be ~20.1 um per unit side length at 5.1 um thickness, as shown in figure 8. After Ni electroplating under the desired conditions, the photoresist was stripped in acetone. Figure 9 shows optical microscope images of plated Ni films before and after the photoresist was stripped.

Figure 9. Optical microscope pictures of electroplated Ni films. (a) 300 um x 300 um pattern with 300 um spacing, with photoresist. (b) 500 um x 500 um pattern with 300 um spacing, with photoresist. (c) 500 um x 500 um pattern with 300 um spacing after the photoresist was stripped in an acetone bath.

Figure 10. (a) Voltammogram test set up. HP6632B power supply, Keithley 617, Keithley 192 multimeter and Electrodes are mounted on an acrylic board. (b) Electrode holder.

Figure 7. Cross sectional image of (a) photoresist on Ni coated glass substrate with a curved sidewall and (b) electroplated Ni film showing the curved sidewall of the mushroom top.

Figure 8. Calculated additional surface area from overhang structure based on photo resist morphology.

Finding the Optimum KOH Concentration

Figure 11. Voltammogram of water electrolysis with Pt anode, cathode and reference electrode in 0.5, 1.0, 1.5, 2.0 M KOH solutions.

Table 4. Extracted parameters with 0.5, 1.0, 1.5 and 2 M KOH solutions.

Figure 12. Example voltammograms of water electrolysis with Pt electrodes at 20° C for (a) a cathode with 0.5 M KOH and (b) an anode with 0.5 M KOH. Blue diamonds: measured data; red lines: Ohmic fit; green lines: Butler-Volmer fit.

To find the optimum KOH concentration, 0.5, 1.0, 1.5 and 2 M KOH solutions were tested with Pt as both the anode and the cathode. The 1 M KOH solution was found to be optimal, providing high conductivity with a minimal amount of KOH. The data are shown in figure 11.

Figure 12 shows example measured voltammograms with curves fitted using the Butler-Volmer and Ohmic equations. This is the traditional way of fitting a voltammogram in two separate voltage regions, as described in the theory section. The two fitted curves show a growing discrepancy as they approach the middle region.

Table 4 shows the parameters calculated from the analysis using equation (4). From these data, it is clear that the limiting resistance decreases as the KOH concentration increases.
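Equation (4) is not reproduced in this article, so the region-wise fitting described above can only be illustrated with a plausible stand-in: Tafel kinetics plus a series (limiting) resistance, V = a + b·ln(I) + R·I, which is linear in the parameters and can be fitted by least squares. A minimal sketch under that assumption:

```python
import numpy as np

def fit_tafel_ohmic(I, V):
    """Linear least-squares fit of V = a + b*ln(I) + R*I: Tafel kinetics
    plus a series (limiting) resistance R. A hypothetical stand-in for
    the paper's equation (4), which is not reproduced here."""
    X = np.column_stack([np.ones_like(I), np.log(I), I])
    (a, b, R), *_ = np.linalg.lstsq(X, V, rcond=None)
    return a, b, R

# Self-check on a synthetic voltammogram with known parameters:
I = np.linspace(1e-3, 0.1, 200)
V = 1.5 + 0.05 * np.log(I) + 4.0 * I
a, b, R = fit_tafel_ohmic(I, V)   # recovers a=1.5, b=0.05, R=4.0
```

In this combined form the extracted R plays the role of the limiting resistance reported in Table 4.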

Figure 13 shows the resistance of the cathode and anode as a function of KOH concentration. Though the resistance continues to decrease as the concentration increases, the change slows near 1 M KOH and the resistance remains nearly constant thereafter. Because of this behavior, 1 M KOH solutions were used for the rest of the experiment.

Figure 13. Resistance of Pt anode and cathode as a function of KOH concentration

Catalytic Effect of Pt versus Ni Electrodes

The dependence of electrolysis systems on the electrode metal and its inherent catalytic effect could be seen by analyzing the voltammograms of a Pt/Pt system versus a Ni/Ni system, as shown in figure 14. The Pt film had two layers, a 25 nm thick Ti adhesion layer at the bottom and 50 nm of Pt on top, evaporated by e-beam onto a 100 mm diameter glass substrate. The Ni film was 80 nm thick, deposited with a sputtering system. For the cathode, the curves in figure 14 look similar; at the anode, however, the turn-on voltage differs dramatically. With Ni metal, the turn-on voltage was much lower than with Pt, which indicates a stronger catalytic effect. For water electrolysis, Ni is clearly better than Pt, not only because it is cheaper but also because of its more pronounced catalytic effect.

Figure 14. Voltammogram of water electrolysis with Pt film (Ti/Pt=25/50 nm thick) and Ni film (80 nm thick).

Electrode Mounting Method

To ensure that only the electrode material of interest is in contact with the solution, the electrical connection has to be made outside of the solution. Because of this configuration, there was always an extra voltage drop across the film between the electrode surface inside the solution and the point where the contact was made, even though the Kelvin probe method was used to remove the voltage drop from the cables [46,47].

After a few iterations, the final design for sample mounting for electrolysis was developed, as shown in figure 15. The metal wafers were cut into 1 cm by 3-4 cm slices. Teflon tape was used to define the active area and to separate it from the contact wire. A small piece of aluminum foil was placed underneath the Teflon to reduce the voltage drop between the active area and the electrical contact. For each sample, two electrode contacts were made in a Kelvin probe configuration, one for the power supply and one for voltage sensing, to minimize the effect of wiring resistance.

The contact was made using gold-plated pins soldered to a wire on a circuit board, connected to the acrylic back plate with screws; springs around the screws applied force to preserve the lifetime of the pins and keep them from breaking the sample underneath. For the Pt/Ni electrode system, Ag/AgCl in a 4 M KCl solution was used as the reference electrode for more accurate measurements [48]. The advantage of Ag/AgCl as a reference electrode is its stable standard electrode potential: at room temperature it is 0.2 V relative to the Pt/H+ standard electrode, and it has been found to be stable over a range of temperatures as well [49].

With the final design of the electrode mounting scheme, the resolution of the voltammetry setup was high enough to detect bubble formation during electrolysis. With a very narrow electrode opening in the working electrode, dense gas formation is expected. At high current, the gas forms large bubbles that can block part of the electrode surface. As the electrode is partially covered by a bubble, the resistance of the system increases, which a sufficiently sensitive system detects as a drop in current. Figure 16 shows the voltammogram of a Ni film with a 0.087 cm2 opening. In the high current region, a current oscillation is visible, and its period correlates with bubble formation. Figure 17 shows typical pictures of a bubble at its largest and right after it detaches from the surface because of its buoyancy.

Figure 15. Schematic diagram of sample loading and electrical connection set up for water electrolysis for Ni films.


Figure 16. The voltammogram of Ni electrode with 0.087 cm2 opening.

Figure 17. Pictures of the Ni electrode with 0.087 cm2 opening when (a) the bubble is at its largest, covering part of the surface, and (b) the bubble has floated up because of its buoyancy.


Impact of Patterned Plated Ni Electrode with Mushroom Top Morphology

To find the impact of the new surface morphology on electrolysis efficiency, sputtered Ni films with two different opening areas were tested, along with three designs of patterned, plated Ni films with mushroom top surface morphologies. To keep the plated Ni film thickness constant across all three samples, the plating was done on half of a 100 mm diameter wafer carrying three different photoresist patterns. After plating, the wafer was cut with a diamond saw.

To improve the accuracy of the data analysis, a modified Butler-Volmer equation (equation 5) was used. Figure 18 shows an example of the analysis of a sputtered Ni film with 0.87 cm2 opening using both the conventional method (equation 4) and the new method developed in this research (equation 5). To use equation (5), a Visual Basic program was written to find the parameters that minimize the squared error between the measured data and the model. From the graph, it is clear that the new analysis method is more accurate, not only in the medium voltage region but also in the low voltage region.
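The minimum-squared-error search described above can be sketched as a brute-force grid search. Equation (5) is not reproduced in this article, so the model below, an arcsinh form of the Butler-Volmer relation with a series resistance, is a hypothetical stand-in, not the authors' equation:

```python
import numpy as np
from itertools import product

def sse(params, J, V):
    """Squared error of a modified Butler-Volmer model with a limiting
    resistance, V = arcsinh(J/(2*J0))/b + J*R. A hypothetical stand-in
    for equation (5), which is not reproduced in this article."""
    J0, b, R = params
    model = np.arcsinh(J / (2 * J0)) / b + J * R
    return float(np.sum((model - V) ** 2))

def grid_search(J, V, J0s, bs, Rs):
    """Return the grid point with minimum squared error, analogous to
    the Visual Basic search program described in the text."""
    return min(product(J0s, bs, Rs), key=lambda p: sse(p, J, V))

# Self-check: synthetic data with known parameters J0=1e-3, b=20, R=5.
J = np.linspace(1e-4, 0.05, 100)
V = np.arcsinh(J / (2 * 1e-3)) / 20.0 + J * 5.0
best = grid_search(J, V, [5e-4, 1e-3, 2e-3], [10.0, 20.0, 40.0], [2.0, 5.0, 8.0])
```

Unlike the two-region fit, a single full-range model like this one is evaluated against all data points at once, which is why it tracks the middle voltage region better.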

Figure 19 shows the current-voltage characteristics of the five samples tested: sputtered Ni films with 0.087 cm2 and 0.87 cm2 opening areas, and three electrodes with patterned, plated Ni with mushroom top surface morphology. The pattern structures are 300x300 um2 with 100 um spacing, 300x300 um2 with 300 um spacing, and 500x500 um2 with 100 um spacing. Since the data from the 0.087 cm2 opening oscillated in the high current region, only the maximum point of each oscillation period was used for analysis, because those points represent moments when no bubble covers the electrode surface.
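Taking only the maximum of each oscillation period can be done with a simple local-maximum filter; this is an illustrative sketch with toy data, not the authors' procedure:

```python
def oscillation_maxima(t, I):
    """Keep only the local maximum of each oscillation period: points
    higher than both neighbors. These approximate the bubble-free
    current of the oscillating 0.087 cm2 electrode data."""
    return [(t[i], I[i]) for i in range(1, len(I) - 1)
            if I[i] > I[i - 1] and I[i] >= I[i + 1]]

# Toy current trace with bubble-induced dips:
t = list(range(9))
I = [1.0, 1.4, 0.9, 1.5, 1.0, 1.6, 1.1, 1.7, 1.2]
peaks = oscillation_maxima(t, I)   # [(1, 1.4), (3, 1.5), (5, 1.6), (7, 1.7)]
```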

Figure 18. Comparison between experimental data (diamonds), the original modified BV fit (red, low voltage region), the Ohmic fit (green, high voltage region), and the new modified BV fit (blue circles) using the search algorithm developed in Visual Basic.

From figure 19, it is clear that far more current flows at a given voltage when the opening area is larger. The plated films with mushroom top surface morphology show the largest currents, but a more rigorous Butler-Volmer analysis is necessary to determine exactly how much impact this surface modification made.

Figure 19. Current-voltage characteristics of 5 different Ni film electrodes. ▬ sputtered Ni film with 0.087 cm2 opening, showing oscillating current as a result of bubble accumulation. ▬ maximum data points from each oscillation period of the sputtered Ni film with 0.087 cm2 opening. ▬ sputtered Ni film with 0.87 cm2 opening. Patterned plated Ni films with mushroom top morphology: 500x500 um2 patterns with 100 um spacing (▬), 300x300 um2 with 100 um spacing (▬), and 300x300 um2 with 300 um spacing (▬).

The highest current was expected from the 300x300 um2 pattern with 100 um spacing; however, the 500x500 um2 pattern with 100 um spacing showed the highest current. This is still under investigation, but it seems the patterns generated were rounded rather than square, and this distortion becomes more pronounced as the pattern size and spacing decrease, due to the low resolution of the photolithography setup used in this experiment.

Figure 20 shows the fit parameters from the new model (equation 5) found with the aforementioned VB program. With the new fit method, three graphs were generated, one for each critical parameter as a function of surface area, for both the anode and the cathode. In figures 20a and 20d, there is a clear trend of resistance reduction as area increases. Against a linear fit of the sputtered Ni films with two different opening areas, the patterned plated samples show lower resistance than the linear extrapolation predicts. When the resistance is instead plotted against the effective surface area of the mushroom top morphology (green triangles in figures 20a and 20d), it fits this linear trend very well.

Figure 20. (a, b, c for anode; d, e, f for cathode) Resistance, Exchange Current Density (Jc0, Ja0), and Charge Transfer Coefficients (1-αa, αc) compared to changes in Surface Area. Blue diamond represents data from the sputtered Ni films, while the red squares are from the patterned plated samples without consideration of additional surface area of the plated Ni patterns. For resistance plots (a and d), the same resistance data was plotted as a function of surface area with consideration of the sidewall shape of the plated Ni film as green triangles.

Conclusion

This research demonstrated the impact of a novel electrode surface design with mushroom top morphology, which increases the effective surface area using simple photolithography and electroplating. The design can be applied to other patterning methods with various aspect ratios. The metal films were tested with cyclic voltammetry to gather voltammograms. Using a conventional and a newly proposed modified Butler-Volmer equation with limiting resistance, critical parameters were extracted and analyzed in relation to the change in surface area. The resistance of the system decreased below the resistance expected from the linear extrapolation of the sputtered Ni films with known opening areas. From SEM pictures of the mushroom top patterns, the estimated area advantage of the structure was about four times that of a straight sidewall. The average resistance was 20.4% lower than the linear approximation predicts, which translates into a 25.6% increase in efficiency.
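The link between the two figures quoted above is direct: at a fixed cell voltage, current scales inversely with resistance, so a 20.4% resistance reduction implies 1/(1 - 0.204) - 1 ≈ 25.6% more current. As a one-line check:

```python
# Resistance reduction relative to the linear extrapolation (from the text):
resistance_reduction = 0.204
# At a fixed cell voltage, I = V/R, so current (and hence hydrogen
# throughput) scales inversely with resistance:
current_gain = 1.0 / (1.0 - resistance_reduction) - 1.0   # ~0.256
```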

Future Works

In the future, experiments should be done to find the trend of electrolysis efficiency as a function of pattern size and spacing. To do so, photolithography techniques with higher controllability are required. It is also desirable to use smaller pattern sizes to increase the additional surface area from the plated metal sidewall, so the effective surface area of this new structure can be increased further. As investigated in this study, higher aspect ratios give a greater increase in effective surface area; applying the mushroom top surface morphology to high aspect ratio materials such as nanowires and nanotubes would further increase the efficiency. Another focus of future work could be other non-noble metals that may provide a greater catalytic effect than Ni, including cobalt and manganese, which have previously been tested and characterized in electrolytic systems as suitable replacements for noble metals. High efficiency electrolysis may well become one of the greatest sources of fuel for the future.

Acknowledgements

I’d like to acknowledge my mentor, Dr. Lee, for the reading materials and lab assistance he provided throughout the project, as well as his motivation to try a new idea outside his field. Thanks also to my mother and sisters, who cheered me on before going to bed and whose spirits kept me going till 4. Special thanks to the NCSSM Research in Chemistry program and Dr. Halpin for providing the bulk of the funding for my research and the resources to have my materials accepted and presented at many conventions throughout the year.

References

[1] Henry Hub Gulf Coast Natural Gas Spot Price, http://www.eia.gov/dnav/ng/hist/rngwhhdm.htm, last visited 09/27/2012.

[2] M. I. Godinho, M. A. Catarino, M. I. da S. Pereira, M. H. Mendonca, and F. M. Costa. "Effect of partial replacement of Fe by Ni and/or Mn on the electrocatalytic activity for oxygen evolution of the CoFe2O4 spinel oxide electrode." Electrochimica Acta, 47:4307-4314, 2002.

[3] C. C. Hu and Y. R. Wu, “Bipolar performance of the electroplated iron-Ni deposits for water electrolysis.”, Materials Chemistry and Physics, 82, pp588-596, 2003.

[4] J. Ponce, J. L. Rehspringer, G. Poillerat, and J. L. Gautier, "Electrochemical study of Ni-aluminum-manganese spinel NixAl1-xMn2O4. Electrocatalytical properties for the oxygen evolution reaction and oxygen reduction reaction in alkaline media", Electrochimica Acta, 46:3373-3380, 2001.

[5] Chebotareva, Natalia and Nyokong, Tebello. “First-row transition metal phthalocyanines as catalysts for water electrolysis: a comparative study” Electrochimica Acta, 42:3519 – 3524, 1997.

[6] V. Rashkova, S. Kitova, I. Konstantinov, and T. Vitanov. "Vacuum evaporated thin films of mixed cobalt and Ni oxides as electrocatalysts for oxygen evolution and reduction." Electrochimica Acta, 47:1555-1560, 2002.

[7] R. N. Singh, N. K. Singh, and J. P. Singh. "Electrocatalytic properties of new active ternary ferrite film anodes for O2 evolution in alkaline medium". Electrochimica Acta, 47:3873-3879, 2002.

[8] F. I. Mattos-Costa, P. de Lima-Neto, S. A. S. Machado, and L. A. Avaca. "Characterization of surfaces modified by sol-gel derived RuxIr1-xO2 coatings for oxygen evolution in acidic medium." Electrochimica Acta, 44:1515-1523, 1998.

[9] G. L. Elizarova, G. M. Zhidomirov, and V. N. Parmon. "Hydroxides of transition metals as artificial catalysts for oxidation of water to dioxygen." Catalysis Today, 58:71-88, 2000.

[10] C.-T. Hsieh, W.-Y. Chen, I.-L. Chen, and A. K. Roy, “Deposition and activity stability of Pt–Co catalysts on carbon nanotube-based electrodes prepared by microwaveassisted synthesis,” Journal of Power Sources, vol. 199, pp. 94–102, Feb. 2012.

[11] Z. Hu, D. M. Zhou, R. Greenberg, and T. Thundat, “Nanopowder molding method for creating implantable high-aspect-ratio electrodes on thin flexible substrates.,” Biomaterials, vol. 27, no. 9, pp. 2009–17, Mar. 2006.

[12] C.-J. Huang, P.-H. Chiu, Y.-H. Wang, W.-R. Chen, T.H. Meen, and C.-F. Yang, “Preparation and characterization of gold nanodumbbells,” Nanotechnology, vol. 17, no. 21, pp. 5355–5362, Nov. 2006.

[13] Y. Lei, W. Cai, and G. Wilde, “Highly ordered nanostructures with tunable size, shape and properties: A new way to surface nano-patterning using ultra-thin alumina masks,” Progress in Materials Science, vol. 52, no. 4, pp. 465–539, May 2007.

[14] S.-C. Lin, Y.-F. Chiu, P.-W. Wu, Y.-F. Hsieh, and C.-Y. Wu, “Templated fabrication of nanostructured Ni brush for hydrogen evolution reaction,” Journal of Materials Research, vol. 25, no. 10, pp. 2001–2007, Jan. 2011.

[15] C. Microanalytics, L. Maltings, and P. Row, “Nanoelectrodes , nanoelectrode arrays and their applications,” pp. 1157–1165, 2004.

[16] T. N. Nanowires and S. H. Magnetization, “Nanotubes to Electrodes.”

[17] L. F. Petrik, Z. G. Godongwana, and E. I. Iwuoha, “Platinum nanophase electro catalysts and composite electrodes for hydrogen production,” Journal of Power Sources, vol. 185, no. 2, pp. 838–845, Dec. 2008.

[18] M.-S. Wu and P.-C. J. Chiang, “Electrochemically deposited nanowires of manganese oxide as an anode material for lithium-ion batteries,” Electrochemistry Communications, vol. 8, no. 3, pp. 383–388, Mar. 2006.

[19] Ghosh, S.K; Grover, A.K; Dey, G.K;Totlani, M.K, “Nanocrystalline Ni–Cu alloy plating by pulse electrolysis” Surface & Coatings Technology (0257-8972), 2000, Volume 126, Issue 1, pp. 48 - 63.

[20] Ranganathan, David; Zamponi, Silvia;Berrettoni, Mario; Layla Mehdi, B; Cox, James A; “Oxidation and flow-injection amperometric determination of 5-hydroxytryptophan at an electrode modified by electrochemically assisted deposition of a sol-gel film with templated nanoscale pores” Talanta (0039-9140), 09/2010, Volume82, Issue 4, pp. 1149 - 1155.

[21] Shibli, S M. A and Dilimon, V S. “Development of nano IrO 2 composite-reinforced nickel–phosphorous electrodes for hydrogen evolution reaction” Journal of Solid State Electrochemistry(1432-8488), 08/2007, Volume 11, Issue8, pp. 1119 - 1126.

[22] Gao, Feng; Yang, Yifu; Liu, Jun; Shao, Huixia. “Method for preparing a novel type of Pt–carbon fiber disk ultramicroelectrode” Ionics (0947-7047), 02/2010, Volume 16,Issue 1, pp. 45 - 50.

[23] Brown, I.J and Sotiropoulos, S. “Preparation and characterization of microporous Ni coatings as hydrogen evolving cathodes” Journal of Applied Electrochemistry(0021-891X), 01/2000, Volume 30, Issue1, pp. 107 - 111.

[24] Sanchez, Pablo Lozano and Elliott, Joanne M. “Underpotential deposition and anodic stripping voltammetry at mesoporous microelectrodes” The Analyst (0003-2654), 05/2005, Volume 130, Issue 5, p. 715.

[25] Łosiewicz, Bożena. “Experimental design in the electrodeposition process of porous composite Ni–P+TiO2 coatings” Materials Chemistry and Physics (0254-0584), 08/2011, Volume 128, Issue 3, pp. 442 - 448.

[26] Nikolić, Nebojša D; Branković, Goran;Popov, Konstantin I. “Optimization of electrolytic process of formation of open and porous copper electrodes by the pulsating current (PC) regime” Materials Chemistry and Physics (0254-0584), 2011, Volume 125, Issue 3, pp. 587 - 594.

[27] Chen, Shun-Tong and Luo, Tsu-Sheng. "Fabrication of micro-hole arrays using precision filled wax metal deposition" Journal of Materials Processing Tech (0924-0136), 2010, Volume 210, Issue 3, pp. 504 - 509.

[28] Tang, Shaochun; Tang, Yuefeng; Gao, Feng; Liu, Zhiguo; Meng, Xiangkang. “Ultrasonic electrodeposition of silver nanoparticles on dielectric silica spheres” Nanotechnology (0957-4484), 07/2007,Volume 18, Issue 29, p. 295607.

[29] Brown, I.J; Clift, D; Sotiropoulos, S. “Preparation of microporous nickel electrodeposits using a polymer matrix” Materials Research Bulletin (0025-5408), 1999, Volume 34, Issue 7, pp. 1055 - 1064.

[30] Sotiropoulos, S; Brown, I.J; Akay, G;Lester, E. “Nickel incorporation into a hollow fibre microporous polymer: a preparation route for novel high surface area nickel structures” Materials Letters (0167-577X), 1998,Volume 35, Issue 5, pp. 383 - 391.

[31] El-Sherik, A.M; Erb, U; Page, J. “Microstructural evolution in pulse plated nickel electrodeposits” Surface & Coatings Technology (0257-8972), 1997, Volume 88, Issue 1, pp. 70 - 78.

[32] Mukai, Kohki; Kitayama, Shinya;Kawajiri, Yasunobu; Maruo, Shoji. “Micromolding for three-dimensional metal microstructures using stereolithography of photopolymerized resin” Microelectronic Engineering (0167-9317), 2009, Volume 86, Issue 4, pp. 1169 - 1172.

[33] Walsh, F.C; Ponce de León, C; Kerr, C; Court, S; Barker, B.D. “Electrochemical characterisation of the porosity and corrosion resistance of electrochemically deposited metal coatings” Surface & Coatings Technology (0257-8972), 2008, Volume 202, Issue 21, pp. 5092 - 5102.

[34] Wang, Jian; Wei, Liangming; Zhang, Liying; Zhang, Yafei; Jiang, Chuanhai. “Electrolytic approach towards the controllable synthesis of symmetric, hierarchical, and highly ordered nickel dendritic crystals” CrystEngComm (14668033), 02/2012,Volume 14, Issue 5, pp. 1629 - 1636

[35] Sode, A; Ingle, N.J.C; McCormick, M;Bizzotto, D; Gyenge, E; Ye, et. al. “Controlling the deposition of Pt nanoparticles within the surface region of Nafion” Journal of Membrane Science (0376-7388), 2011, Volume 376, Issue 1, pp. 162 - 169.

[36] Mohanty, U S. “Electrodeposition: a versatile and inexpensive tool for the synthesis of nanoparticles, nanorods, nanowires, and nanoclusters of metals” Journal of Applied Electrochemistry(0021-891X), 03/2011, Volume 41, Issue3, pp. 257 - 270.

[37] Domínguez-Crespo, M.A; Ramírez-Meneses, E; Torres-Huerta, A.M; Garibay-Febles, V; Philippot, K. “Kinetics of hydrogen evolution reaction on stabilized Ni, Pt and Ni–Pt nanoparticles obtained by an organometallic approach” International Journal of Hydrogen Energy(0360-3199), 03/2012, Volume 37, Issue6, pp. 4798 - 4811.

[38] R.K. Shervedani and A. Lasia. “Studies of the hydrogen evolution reaction on Ni–P electrodes” J. Electrochem. Soc. 144, 511 (1997).

[39] D.R. Kim, K.W. Cho, Y.I. Choi, and C.J. Park. "Fabrication of porous Co–Ni–P catalysts by electrodeposition and their catalytic characteristics for the generation of hydrogen from an alkaline NaBH4 solution." Int. J. Hydrogen Energy 34, 2622 (2009).

[40] S.I. Tanaka, N. Hirose, and T. Tanaki. “Evaluation of raney-nickel cathodes prepared with aluminum powder and titanium hydride powder” J. Electrochem. Soc. 146, 2477 (1999).

[41] Seonyul Kim, Nikhil Koratkar, Tansel Karabacak, and Toh-Ming Lu, Applied Physics Letters, 26, 263106, 2006.

[42] Ibrahim M. Sadiek, Ahmad M. Mohammad, Mohamed E. El-Shakre, M. Ismail Awad, Mohamed S. El-Deab, and Bahgat E. El-Anadouli, "Electrocatalytic Evolution of Oxygen Gas at Cobalt Oxide Nanoparticles Modified Electrodes," Int. J. Electrochem. Sci., 7 (2012) 3350 - 3361.

[43] Robert B. Dopp, “Hydrogen Generation via water electrolysis using highly efficient nanometal electrode”, announced in a website, http://www.qsinano.com, last visited 9/27/12.

[44] Matthew D. Merill, "Water Electrolysis at thermodynamic limit", Ph. D. Thesis, Florida State Univ., 2007, http://etd.lib.fsu.edu/theses/available/etd-09092007-185842/unrestricted/MerrillMFall2007.pdf, last visited at 9/1/2012.

[45] Debmalya Roy, P. K. Basu and S. V. Eswaran, “Photoresists for microlithography”, Resonance, Vol. 7 Num. 7 , 44-53, July 2012.

[46] Andrew P. Schuetze, Wayne Lewis, Chris Brown, and Wilhelmus J. Geerts, “A laboratory on the four-point probe technique”, American Journal of Physics, Volume 72, Issue 2, 149, 2004

[47] S. P. S. Badwal, F. T. Ciacchi and D. V. Ho, “A fully automated four-probe d.c. conductivity technique for investigating solid electrolytes”, Journal of Applied Electrochemistry, Vol.21:721-728 (1991)

[48] Gaston A East and M A del Valle, “Easy-to-make Ag/ AgCl reference electrode” Journal of Chemical Education, Volume 77, Issue 1, p. 97 (2000)

[49] Maksimov, Igor; Ohata, Masaki; Asakai, Toshiaki; Suzuki, Toshihiro; Miura, Tsutomu; Hioki, Akiharu; Chiba, Koichi, “Temporal stability of standard potentials of silver–silver chloride reference electrodes” Accreditation and Quality Assurance, Volume 17, Issue 5, pp. 529 – 533 (2012)

Multilevel Distance Labeling - A Wireless Network Problem

ABSTRACT

Multilevel distance labeling is a graph-theoretical solution to the problem of frequency assignment on wireless networks. An optimal labeling reduces the range of radio frequencies assigned to radio stations and eliminates network interference. Given the ubiquity of wireless networks, more effective frequency assignment is an important area of study for increasing the efficiency and quality of communication. We model the problem by representing broadcasting stations as vertices on a graph. A radio labeling of a connected graph G is a mapping F:V(G)→{0,1,2,…} such that |F(u)-F(v)|+d(u,v)≥diam(G)+1 for each pair of distinct vertices u,v∈V(G), where diam(G) is the diameter of G and d(u,v) is the distance between u and v. The span of F, denoted span(F), is defined as the maximum of |F(u)-F(v)| over all u,v∈V(G). The radio number of G, denoted rn(G), is the minimum span over all radio labelings of G.

In this paper, we introduce a general method to compute the lower bound for rn(G), introduce a method to characterize solutions F on G, and prove a closed-form formula of rn(G) for the path and triangle lollipop graphs.

Introduction

Background

Wireless communication pervades modern society. Wireless internet, mobile phones, radio, and GPS are just a few of the common applications of wireless technology. An efficient and reliable wireless network must overcome a number of technical challenges; among these is the allocation of broadcast frequencies to minimize interference. An effective method of frequency coordination, a regulatory process for the mitigation of frequency interference, increases the efficiency of wireless communication. In this paper, we focus on the problem of frequency coordination in cellular networks.

Cellular systems are designed to minimize both interference and range of channel assignment through frequency reuse. In this system, the coverage area is partitioned into many cells with assigned frequencies. Since signal power is effective within a certain radius from the transmitter, reuse of similar frequency spectra becomes possible at certain distances [4]. This reuse allows cellular system designers to minimize the frequency range used for the whole system. The distance among cells that use similar frequency spectra should be minimized to increase spectral efficiency. However, if the distance is too small, users will receive frequencies from both channels, causing intercell interference [4]. Thus, a balance between spectral efficiency and inter-cell interference should be achieved. In this paper, we present a graph-theoretical model of a solution which eliminates inter-cell interference while maximizing spectral efficiency.

Multilevel Distance Labelling

We represent cellular stations with vertices on a graph G, and draw edges between vertices if the stations are geographically close. Interference among stations can occur at multiple levels, ranging from interference between the closest stations, at distance one, to the furthest stations, at distance diam(G). Given a connected graph G, for two vertices u,v∈V(G) let d(u,v) be the distance between u and v. Then a radio labeling of G is a function F:V(G)→{0,1,2,…} such that for all distinct u,v∈V(G), |F(u)-F(v)|+d(u,v) ≥ diam(G)+1.

The span of F is span(F) = max over u,v∈V(G) of |F(u)-F(v)|. The radio number of G, denoted rn(G), is the minimum of span(F) over all radio labelings F of G. The solutions of G are all labelings F such that span(F)=rn(G).
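These definitions translate directly into code. The following is a hypothetical sketch (not the Appendix B program): it indexes the vertices of Pn as 0,…,n-1 and checks a candidate labeling against the radio condition.

```python
from itertools import combinations

def path_distances(n):
    """All-pairs distances on the path graph P_n with vertices 0..n-1."""
    return {(u, v): abs(u - v) for u in range(n) for v in range(n)}

def is_radio_labeling(F, d, diam):
    """Check |F(u)-F(v)| + d(u,v) >= diam(G)+1 for every pair of distinct vertices."""
    return all(abs(F[u] - F[v]) + d[(u, v)] >= diam + 1
               for u, v in combinations(range(len(F)), 2))

def span(F):
    """span(F) = max |F(u)-F(v)| over all pairs of vertices."""
    return max(F) - min(F)
```

For example, on P4 (diameter 3) the labeling F = [2, 5, 0, 3] satisfies every pairwise constraint and has span 5, while the naive F = [0, 1, 2, 3] fails already on adjacent vertices.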

In past work on the problem [1], the usual method has been to find an upper bound of rn(G) equal to the lower bound of rn(G). However, neither the upper nor lower bounds are easy to establish.

This paper addresses some of the challenges and insights encountered when searching for rn(G). In Section 2, some preliminary investigations of the distance labeling problem are presented, and in Section 3 we establish a general methodology for finding the lower bound for a graph G. In Sections 5 and 6, we find the upper and lower bounds of the radio numbers of two specific classes of graphs, paths and triangle lollipops, which yields a proof of the closed-form expression of rn(G) for these two graph types. In Section 7, we introduce tightness graphs as a way to classify solutions, and in Section 8, several areas of further research are discussed.

Preliminaries

Definitions

1. Wiggle Room: For two vertices x and y, define the wiggle room wr(x,y)=|F(x)-F(y)|+d(x,y)-(D+1). Notice that in a valid distance labeling, the wiggle room is nonnegative.

2. Tight: Two vertices x and y are called tight if wr(x,y)=0.

3. Tightness Graphs: The tightness graph GT of a labeled graph G has vertex set V(GT)=V(G) and edge set E(GT )={(x,y)| x,y ∈V(G), wr(x,y)=0}. This idea is further explored in Section 7.

4. Hopping: We define a hopping, or hopping sequence H(G) to be a permutation of V(G) such that H(G)={h1, h2,…,hn} and F(hi) < F(h(i+1)) for 1 ≤ i ≤ n-1.

5. Tight Hopping: A special case of hopping is one where wr(hi, h(i+1)) = 0 for all 1 ≤ i ≤ n-1. These are referred to as tight hoppings.
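The wiggle room and tightness graph definitions above can be sketched in the same hypothetical style (wiggle_room and tightness_edges are illustrative names, with distances supplied as a dictionary):

```python
from itertools import combinations

def wiggle_room(F, d, diam, x, y):
    """wr(x, y) = |F(x)-F(y)| + d(x, y) - (D+1); nonnegative in a valid labeling."""
    return abs(F[x] - F[y]) + d[(x, y)] - (diam + 1)

def tightness_edges(F, d, diam):
    """Edge set of the tightness graph G_T: vertex pairs with zero wiggle room."""
    return {(x, y) for x, y in combinations(range(len(F)), 2)
            if wiggle_room(F, d, diam, x, y) == 0}
```

On P4 with F = [2, 5, 0, 3], the pair (1, 2) has wiggle room 2 and so is not an edge of G_T, while (0, 1) is tight.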

Terminology

1. A graph G is defined as G=(V(G),E(G)), where V(G) is the vertex set of G, E(G) is the edge set of G, and an edge e∈E(G) is a two-element subset of V(G).

2. The distance d(x,y) for x,y∈V(G) is the length of a shortest path between x and y.

3. The diameter diam(G), or D, of a graph is the maximum distance between any two vertices of V(G).

4. The path graph Pn is a graph with vertices V(Pn)={v1, v2,…, vn} and edges E(Pn)={(v1, v2),(v2, v3),…, (v(n-1), vn)}.

5. A triangle lollipop graph TLn is a graph with vertices V(TLn)=V(Pn) and edges E(TLn)=E(Pn)∪{(vn, v(n-2))}. For the sake of convenience, we call vn the “lollipopped” vertex.

Observations

1. Do Not Repeat Labels:

Claim: No two vertices can have the same labeling.

Proof by Contradiction: Assume that there exist two vertices x,y such that F(x)=F(y). By definition, d(x,y)+|F(x)-F(y)| ≥ D+1, so d(x,y) ≥ D+1 > D. However, this is a contradiction, as d(x,y) ≤ D.

2. An Obvious Upper Bound:

Claim: rn(G)≤(n-1)∙D.

Proof by Construction: Let our labeling be F:V(G)→{0, D, 2D,…,(n-1)D}. This is a distance labeling, because |F(x)-F(y)| ≥ D and d(x,y) ≥ 1, which satisfies the distance labeling condition d(x,y)+|F(x)-F(y)| ≥ D+1.

3. The Inverse Solution:

Claim: Given a solution F of G, there exists a corresponding solution F’ of G.

Proof by Construction: Let F’(vi)=rn(G)-F(vi). The assignment F’ is also valid, as d(x,y)+|(rn(G)-F(x))-(rn(G)-F(y))|=d(x,y)+|F(x)-F(y)| ≥ diam(G)+1. Furthermore, we have span(F)=span(F’). We call this equivalent solution F’ the inverse solution of F. Figure 2.2.1 shows an example of a pair of inverse solutions S1 and S2.

4. A Flawed Labeling:

In initial investigations of multidistance labeling on Pn, the following labeling algorithm was conceived: Let us define a permutation P of V(G) such that

and Pi is the ith element of the permutation. Let F(P0)=0. Then, for all i, label F(P(i+1)) such that wr(Pi, P(i+1))=0 (see Definition 2.1.1), with F(Pi)<F(P(i+1)) for 1 ≤ i ≤ [(n+1)/2] and F(Pi)>F(P(i+1)) for [(n+1)/2] ≤ i ≤ n. (S1 in Figure 2.2.1 is an example of this algorithm.)

Figure 2.2.1. Inverse Solutions on P5
Figure 2.1.1. Example Path Graph
Figure 2.1.2. Example Triangle Lollipop Graph

It is easily verified that this labeling is valid. We also see that span(F)=F(v([(n+1)/2]+1) ). After some calculation, we see that:

Although this algorithm gives rn(G) for all paths with at most 6 vertices, it fails for 7 or more vertices, as shown by computer search (see Appendix B). However, this labeling is insightful in that it recognizes that rn(G) grows approximately as n^2, indicating that rn(G) is likely a quadratic function of n.
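The computer check mentioned above can be reproduced with a small brute-force search. The sketch below is illustrative (not the Appendix B code): for each ordering of the vertices by increasing label, the smallest label consistent with all previously labeled vertices is forced, so minimizing over all n! orderings yields rn(G) for small n.

```python
from itertools import permutations

def radio_number(d, n, diam):
    """Brute-force rn(G): for each ordering of the vertices by increasing
    label, greedily assign each vertex the least label consistent with all
    vertices labeled so far, then take the minimum span over orderings."""
    best = None
    for order in permutations(range(n)):
        F = {order[0]: 0}
        for v in order[1:]:
            # labels must be distinct and satisfy F[v] - F[u] >= D+1 - d(u,v)
            F[v] = max(F[u] + max(1, diam + 1 - d[(u, v)]) for u in F)
        s = max(F.values())
        best = s if best is None else min(best, s)
    return best

def rn_path(n):
    """Radio number of P_n by exhaustive search (practical for n <= 8 or so)."""
    d = {(u, v): abs(u - v) for u in range(n) for v in range(n)}
    return radio_number(d, n, n - 1)
```

This search gives rn(P4)=5, rn(P5)=10, and rn(P6)=13, consistent with the quadratic growth noted above.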

With these observations and definitions, we have some interesting tools to approach the distance labeling problem. Of particular interest is the notion of tight hopping. Some experimentation will reveal that not all tight hopping sequences lead to valid distance labelings. Below, we establish a lemma that places proper restrictions on hopping to ensure that it generates a valid labeling for the case of a path graph Pn and a triangle lollipop graph TLn.

Distance Labeling on Paths and Triangle Lollipops

In this section, we establish some rules and restrictions on tight hoppings for Pn and TLn to ensure that the resultant labeling is a valid distance labeling.

Tight Hopping Rules on Paths and Lollipops

The following restrictions are necessary and sufficient to ensure that a tight hopping on a path or lollipop is a valid distance labeling:

1. If our hopping sequence contains hi→h(i+1)→h(i+2) such that d(hi, h(i+1)) > d(h(i+1), h(i+2)) and d(hi, h(i+2))=|d(hi, h(i+1))-d(h(i+1), h(i+2))|, then d(h(i+1), h(i+2)) ≤ (D+1)/2.

2. If our hopping sequence contains hi→h(i+1)→h(i+2) such that d(hi, h(i+1)) < d(h(i+1), h(i+2)) and d(hi, h(i+2))=|d(hi, h(i+1))-d(h(i+1), h(i+2))|, then d(hi, h(i+1)) ≤ (D+1)/2.

Notice that if our hopping sequence is hi→h(i+1)→h(i+2) with d(hi, h(i+2))=d(hi, h(i+1))+d(h(i+1), h(i+2)), then min(d(hi, h(i+1)), d(h(i+1), h(i+2))) ≤ (D+1)/2, as we cannot partition a path of length D+1 into two parts each with length greater than (D+1)/2. With this, we notice that the restrictions placed on tight hoppings are equivalent to the single restriction min(d(hi, h(i+1)), d(h(i+1), h(i+2))) ≤ (D+1)/2 for 1 ≤ i ≤ n-2.

We prove that this rule makes a tight hopping on a path a valid distance labeling by showing that the labeling satisfies the definition |F(u)-F(v)| ≥ diam(G)-d(u,v)+1 for any two vertices u and v. First, we note that any two vertices hi and hj with |i-j|=1 satisfy the relationship, as wr(hi, hj)=0. Next, we show that any two vertices hi and hj with |i-j|=2 satisfy the definition. Let d(hi, h(i+1))=d1 and d(h(i+1), h(i+2))=d2. Now, we may express d(hi, h(i+2))=d3 in terms of d1 and d2. First, we establish the value of F(h(i+2)) compared to F(hi). Since consecutive hops are tight, F(h(i+1))-F(hi)=D+1-d1 and F(h(i+2))-F(h(i+1))=D+1-d2.

From these equations we get F(h(i+2))-F(hi)=2D+2-d1-d2.

Now there are two cases:

Case 1: d3 = d1 + d2

Clearly, if the two hops are in the same direction, we have d3=d1+d2. Then, by the distance labeling definition, F(h(i+2))-F(hi) ≥ D+1-d3. Since we already know F(h(i+2))-F(hi)=2D+2-d1-d2, we have 2D+2-d1-d2 ≥ D+1-(d1+d2), which simplifies to D+1 ≥ 0.

Since D+1>0, when hopping twice on a path in the same direction, there are no restrictions on the values of d1 and d2, other than d1+d2 ≤ D+1 → min(d1, d2) ≤ (D+1)/2 as desired.

Case 2: d3 = |d1-d2|

In this case, the two hops are in opposite directions. Without loss of generality, we may let d1>d2. Then, by definition we have F(h(i+2))-F(hi) ≥ D+1-d3. Again, since F(h(i+2))-F(hi)=2D+2-d1-d2, we have 2D+2-d1-d2 ≥ D+1-(d1-d2), which simplifies to D+1 ≥ 2d2. Since we let d1>d2, this gives d2=min(d1, d2) ≤ (D+1)/2, as desired.

For all pairs of vertices hi and hj with |i-j| ≥ 3, we prove that our restrictions on tight hopping satisfy the definition through induction. We already have two base cases, |i-j|=1 and |i-j|=2, from above. Now, assume that all pairs of vertices hi, hj with |i-j| ≤ k have wr(hi, hj) ≥ 0. Then, if we consider the hopping sequence hi→h(i+1)→∙∙∙→h(i+k)→h(i+k+1), we see that wr(hi, h(i+k)) ≥ 0 and wr(h(i+k), h(i+k+1)) ≥ 0. Thus, the vertices hi, h(i+k), h(i+k+1) satisfy our above restriction on hopping, and we have wr(hi, h(i+k+1)) ≥ 0. As this proves the induction step, it follows that a tight hopping with the restriction min(d(hi, h(i+1)), d(h(i+1), h(i+2))) ≤ (D+1)/2 generates a valid distance labeling on Pn.

Verifying Path Labelings

Above, we showed that any tight hopping on a path or lollipop satisfying min(d(hi, h(i+1)), d(h(i+1), h(i+2))) ≤ (D+1)/2 for 1 ≤ i ≤ n-2 will be a valid distance labeling.

Thus, one way to show that a given labeling F is indeed a valid distance labeling on Pn or TLn is by showing that: 1. the labeling is a tight hopping; and 2. for 1 ≤ i ≤ n-2, we have min(d(hi, h(i+1)), d(h(i+1), h(i+2))) ≤ (D+1)/2.
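This verification procedure is mechanical. The sketch below (hypothetical name, vertices of Pn indexed 0,…,n-1) takes the hopping sequence, i.e. the vertices listed by increasing label, assumes the labels are then assigned tightly, and checks the two conditions:

```python
def is_valid_tight_hopping_path(hops, n):
    """Check that hops is a permutation of V(P_n) and that every three
    consecutive vertices satisfy
    min(d(h_i, h_(i+1)), d(h_(i+1), h_(i+2))) <= (D+1)/2, where D = n-1."""
    D = n - 1
    if sorted(hops) != list(range(n)):
        return False  # not a hopping sequence at all
    return all(
        min(abs(hops[i] - hops[i + 1]), abs(hops[i + 1] - hops[i + 2])) <= (D + 1) / 2
        for i in range(n - 2))
```

On P7 the sequence [0, 3, 6, 2, 5, 1, 4] passes (every triple contains a hop of length 3 ≤ (D+1)/2 = 3.5), while [0, 6, 1, 5, 2, 4, 3] fails at its very first triple.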

The Lower Bound

We now describe a method to establish a lower bound for the radio number of a graph G.

Total Hopping Distance

Let us consider the vertices of G in order of increasing label. Then, if G has vertices V(G)={v1,v2,…,vn}, let {x1, x2,…, xn}, be a permutation of V(G) such that F(x(i+1))>F(xi) for all 1 ≤ i ≤ n-1.

For convenience, let F(x(i+1))-F(xi)=fi and d(x(i+1), xi) = di.

By definition we have F(x1) ≥ 0 and fi ≥ D+1-di. We also define the contribution of a vertex cb(xi)=fi+di-D-1. We see that fi is minimized when cb(xi)=0 for 1 ≤ i ≤ n-1.

Now, we note that the maximum labeled vertex satisfies F(xn)=∑_{i=1}^{n-1} fi. There exists an assignment of di’s such that F is a valid distance labeling, so we have F(xn) ≥ (n-1)(D+1)-∑_{i=1}^{n-1} di.

Since rn(G) ≥ min(F(xn)), if we can maximize ∑_{i=1}^{n-1} di on a graph G, we will have a lower bound for rn(G). We call ∑_{i=1}^{n-1} di the total hopping distance, as it is the sum of all the distances as we hop over a sequence of the n vertices of G. This lower bound makes sense, as increasing the total hopping distance decreases the amount by which we must increment the labels. The total hopping distance can be maximized exactly for the path graph and the triangle lollipop graph.

However, for a general graph G, finding the maximum total hopping distance may be NP-complete due to a reduction from L(2,1) labelings [3].
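For small graphs, however, the maximum total hopping distance can be found exhaustively. A hypothetical sketch of the resulting bound rn(G) ≥ (n-1)(D+1) - max ∑ di:

```python
from itertools import permutations

def max_total_hopping_distance(d, n):
    """Maximize the sum of d(x_i, x_(i+1)) over all orderings of the n vertices."""
    return max(sum(d[(p[i], p[i + 1])] for i in range(n - 1))
               for p in permutations(range(n)))

def hopping_lower_bound(d, n, diam):
    """The bound rn(G) >= (n-1)(D+1) - (maximum total hopping distance)."""
    return (n - 1) * (diam + 1) - max_total_hopping_distance(d, n)
```

On P5 the maximum total hopping distance is 11, giving rn(P5) ≥ 4∙5 - 11 = 9; the contribution argument of Section 5 then sharpens this to the true value 10.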

Proof for Paths

There are certain classes of graphs for which we may compute the lower bound and construct an upper bound matching it. This ability is tied to the ability to find the maximum total hopping distance. Paths are a special case in which the maximum hopping distance is tractable, as distances are found simply by subtracting vertex indices.

Theorem: For any n ≥ 4, rn(Pn)=(k+1)^2+(k-1)^2 if n=2k+1, and rn(Pn)=(k-1)^2+k^2 if n=2k.

We prove the result by sandwiching the value of rn(G) between a coinciding upper bound and lower bound.

Lower Bound

From Section 3, we know that F(xn) ≥ (n-1)(D+1)-∑_{i=1}^{n-1} di.

It happens that the maximum hopping distance is different for even length and odd length paths.

Odd Length Paths

Note: if we have ∑_{i=1}^{2k} di ≤ 2k^2+2k-2, then we are done, as F(x(2k+1)) ≥ 2k(2k+1)-(2k^2+2k-2)=2k^2+2=(k+1)^2+(k-1)^2.

Claim: if ∑_{i=1}^{2k} di > 2k^2+2k-2, then ∑_{i=1}^{2k} di=2k^2+2k-1, and there exists a vertex xi such that cb(xi)=1.

Proof of Claim: Since d(xi, x(i+1))=|j-j’| where xi=vj and x(i+1)=vj’, the sum ∑_{i=1}^{2k} di is a sum of 4k values of j, half of which enter positively and half negatively, with 1 ≤ j ≤ 2k+1. Furthermore, 2k-1 of the terms appear twice, and 2 terms appear once. (The terms appearing once correspond to the minimum and maximum labeled vertices.)

Figure 5.1.1. Assigning values of j on P5

Then, to maximize ∑_{i=1}^{2k} di, we need to minimize the absolute values of the negative terms and maximize the values of the positive terms. There are two cases achieving this maximum summation:

Case 1: We have positive values of j belonging to {k+2, k+3,…, 2k+1}, each of which appears twice (note that this is 2k terms), negative values of j belonging to {1, 2,…, k-1}, each of which appears twice, and negative values for k and k+1 both appearing once.

Case 2: We have positive values of j belonging to {k+3, k+4,…, 2k+1}, each of which appears twice, positive values for k+1 and k+2 each appearing once, and negative values of j belonging to {1, 2,…, k}, each of which appears twice.

In both cases, we get ∑_{i=1}^{2k} di=2k^2+2k-1.

In Case 1, we have v(k+1)=x1 and vk=x(2k+1). (It does not matter if the positions of x1 and x(2k+1) are switched, due to the inverse solution.) Since each di is composed of a positive component and a negative component, we know that if xi=vj with j ≥ k+2 (in region B below), then x(i+1)=vj’ with j’ ≤ k+1.

Now, consider xi=v1. Then x(i-1) and x(i+1) are both vertices vj with j ≥ k+2. However, this contradicts our tight hopping criterion from Section 3.1: since both distances d(i-1), di > (D+1)/2, we know that either cb(x(i-1))=1 or cb(x(i+1))=1, which forces a contribution of at least 1.

In Case 2, we have v(k+1)=x1 and v(k+2)=x(2k+1). As in the previous case, we know that if xi=vj with j ≥ k+1, then x(i+1)=vj’ with j’ ≤ k. Now, consider xi=v(2k+1). Then x(i-1) and x(i+1) are both vertices vj with j ≤ k. This again contradicts our hopping criterion above: as both distances d(i-1), di > (D+1)/2, we know that either cb(x(i-1))=1 or cb(x(i+1))=1, which forces a contribution of at least 1.

By taking the sum of all inequalities and accounting for the contribution of 1, we have F(x(2k+1)) ≥ 2k(2k+1)-(2k^2+2k-1)+1=2k^2+2=(k+1)^2+(k-1)^2, and we have shown the lower bound for odd paths.

Even Length Paths

Finding this lower bound is simpler than for odd length paths, as there is only one way to maximize the hopping distance. This is due to the even split between positive and negative terms of j.

Claim: ∑_{i=1}^{2k-1} di ≤ 2k^2-1.

Proof of Claim: Using the above logic, we know that we have 4k-2 terms with 1 ≤ j ≤ 2k, 2k-2 of which occur twice and 2 of which occur once. Again, half of these terms are positive and the other half are negative. The maximization of this sum occurs when we have positive values of j belonging to {k+2, k+3,…, 2k}, each of which appears twice, a positive k+1 appearing once, negative values of j belonging to {1, 2,…, k-1}, each of which appears twice, and a negative k appearing once.

From this, we get ∑_{i=1}^{2k-1} di=2k^2-1. It follows that F(x(2k)) ≥ (2k-1)(2k)-(2k^2-1)=2k^2-2k+1=(k-1)^2+k^2, as desired [1].

Upper Bound

Odd Length Paths

We label the vertices of G as follows:

Table 5.2.1. Labeling Algorithm for P2k+1

where x1=0, and each xi is tight with x(i+1) for 1 ≤ i ≤ n-1. We proceed to show that this tight hopping is a distance labeling by checking it against the restrictions placed in Section 3.1.

Note: This labeling only works for odd paths with n ≥ 7. In the case of n=5, the label x(2k+1) produced by the above labeling exceeds rn(P5). However, the case n=5 does follow the formula given above for rn(P5). Furthermore, all solutions generated with the above labeling have tightness graphs GT which are also path graphs. (See Section 7.2 for examples.)

Proof of Upper Bound

Using our distance labeling verification method for paths from Section 3.1, we need min(di, d(i+1)) ≤ (D+1)/2. Checking the labeling above, for every three consecutive vertices, at least one adjacent pair is at distance k. As k < (D+1)/2, we see that the above labeling method is valid.

Now, we need to show that the above method achieves the upper bound x(2k+1)=(k+1)^2+(k-1)^2. Since this is a tight hopping, there are no contributions from any vertices. Therefore, x(2k+1)=∑_{i=1}^{2k} (D+1-di).

Then we get x(2k+1)=2k(2k+1)-∑_{i=1}^{2k} di, and the problem is reduced to finding the total hopping distance of this specific labeling scheme.

Finding the hopping distance of a labeling algorithm is not difficult, and algebra verifies that ∑_{i=1}^{2k} di=2k^2+2k-2, giving x(2k+1)=(k+1)^2+(k-1)^2.

Even Length Paths

We label the vertices of G as follows:

Table 5.2.2. Labeling Algorithm for P2k

where x1=0, and each xi is tight with x(i+1) for 1 ≤ i ≤ n-1. We again check against the restrictions placed in Section 3.1.

Proof of Upper Bound

Once again, we need min(di, d(i+1)) ≤ (D+1)/2. Checking the labeling above, for every three consecutive vertices, at least one adjacent pair is at distance k, and k ≤ (D+1)/2, so the above labeling method is valid.

Now, we need to show that the above method achieves the upper bound x(2k)=(k-1)^2+k^2. Since this is a tight hopping, there are no contributions from any vertices. Therefore, x(2k)=∑_{i=1}^{2k-1} (D+1-di).

Then we get x(2k)=(2k-1)(2k)-∑_{i=1}^{2k-1} di. It is easy to verify that ∑_{i=1}^{2k-1} di=2k^2-1, giving x(2k)=(k-1)^2+k^2.

In Section 5.1 we established the lower bounds of rn(Pn) for even and odd n, and we have now constructed labelings achieving these bounds, completing the proof for path graphs.
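The closed form just proved can be cross-checked against brute force for small n. Both functions below are illustrative sketches (the search rescans all orderings, so it is practical only up to n ≈ 8):

```python
from itertools import permutations

def rn_path_formula(n):
    """Closed form proved above (n >= 4): rn(P_(2k+1)) = (k+1)^2 + (k-1)^2
    and rn(P_(2k)) = (k-1)^2 + k^2."""
    k = n // 2
    return (k + 1) ** 2 + (k - 1) ** 2 if n % 2 else (k - 1) ** 2 + k ** 2

def rn_path_bruteforce(n):
    """Minimum span over all label orderings, labeling greedily (D+1 = n)."""
    d = {(u, v): abs(u - v) for u in range(n) for v in range(n)}
    best = None
    for order in permutations(range(n)):
        F = {order[0]: 0}
        for v in order[1:]:
            F[v] = max(F[u] + max(1, n - d[(u, v)]) for u in F)
        s = max(F.values())
        best = s if best is None else min(best, s)
    return best
```

The two agree for n = 4, 5, 6, 7, giving 5, 10, 13, and 20 respectively.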

Proof for Triangle Lollipops

Theorem: rn(TL(2k+1))=2k^2-2k+3 and rn(TL(2k))=2k^2-4k+6.

Lower Bound

We continue to use the maximum hopping distance technique. Once again, there is a different maximum hopping distance for even and odd lollipops.

Odd Length Lollipops

Note that if ∑_{i=1}^{2k} di ≤ 2k^2+2k-3, then we have the desired lower bound, as F(x(2k+1)) ≥ 2k(2k)-(2k^2+2k-3)=2k^2-2k+3.

Claim: ∑_{i=1}^{2k} di ≤ 2k^2+2k-3.

Proof of Claim: We use the property that d(xi, x(i+1))=|j-j’| where xi=vj and x(i+1)=vj’ (and when xi=v(2k+1), we take j=±2k) for all distances except d(v(2k+1), v2k)=1, as shown in Figure 6.1.1. However, we may show that a maximum ∑_{i=1}^{2k} di does not include d(v(2k+1), v2k).

Figure 6.1.1. Assigning Values of j on TL5

Suppose that ∑_{i=1}^{2k} di does include d(v(2k+1), v2k). The remaining distances of the triangle lollipop are then equivalent to those of P2k. Since the maximum hopping distance of P2k was 2k^2-1, we have ∑_{i=1}^{2k} di ≤ (2k^2-1)+1=2k^2. We proceed to construct a hopping with a greater total distance.

Since we have 2k+1 vertices, and 2 vertices are designated to be x1 and x(2k+1), our sum has (2k+1-2)∙2+2=4k terms of j, with 2k positive and 2k negative.

To maximize ∑_{i=1}^{2k} di, we minimize the absolute values of the negative terms and maximize the values of the positive terms. There are two cases achieving this maximum summation:

Case 1: We have positive values of j belonging to {k+2, k+3,…, 2k-1, 2k, 2k}, each of which appears twice (note that this is 2k terms), negative values of j belonging to {1, 2,…, k-1}, each of which appears twice, and negative values for k and k+1 both appearing once.

Case 2: We have positive values of j belonging to {k+3, k+4,…, 2k-1, 2k, 2k}, each of which appears twice, positive values for k+1 and k+2 each appearing once, and negative values of j belonging to {1, 2,…, k}, each of which appears twice.

In both cases, we get ∑_{i=1}^{2k} di=2k^2+2k-3, as desired.

Even Length Lollipops

This case is more difficult than the odd length lollipop, an observation that becomes intuitive after completing the proofs for the path graphs.

Claim: ∑_{i=1}^{2k-1} di ≤ 2k^2-3, with 2 distinct values of i such that cb(xi)=1.

Proof of Claim: Using the above logic, we know that we have 4k-2 terms with 1 ≤ j ≤ 2k, with k-1 positive terms appearing twice, k-1 negative terms appearing twice, and 1 positive and 1 negative term appearing once. The maximization of this sum occurs when we have positive values of j belonging to {k+1, k+2,…, 2k, 2k-1, 2k-1}, each of which appears twice, a positive k appearing once, negative values of j belonging to {1, 2,…, k-1}, each of which appears twice, and a negative k+1 appearing once.

From this, we get ∑_{i=1}^{2k-1} di=2k^2-3.

Now, we prove that the contribution to the maximum value is 2. Let the positive terms of j in our summation occur in region B, where j ≥ k+1, and the negative terms appear in region A, where j ≤ k.

Figure 6.1.2. Calculating Contributions on TL2k

Without loss of generality, let x1=vk. Now, if we consider xi=v1, then x(i-1) and x(i+1) both occur in region B. This gives min(di, d(i+1)) > (D+1)/2, so cb(x(i+1)) ≥ 1. In particular, cb(x(i+1))=min(di, d(i+1))-(D+1)/2 for min(di, d(i+1)) ≥ (D+1)/2.

Thus, if min(di, d(i+1)) ≥ k+1, then cb(x(i+1)) ≥ 2 and we are guaranteed the desired contribution of 2 to the maximum value. However, if min(di, d(i+1))=k, then x(i+1)=v(k+1) and cb(x(i+1))=1. In this case, we need to find another pair of distances where both are at least k. Consider xj=v2. Then, to minimize cb(x(j+1)), we have x(j+1)=v(k+2). This again gives us cb(x(j+1))=1, so we have the two contributions to the maximum.

Our lower bound is therefore rn(G) ≥ (2k-1)(2k-2+1)-(2k^2-3)+2=2k^2-4k+6.

Upper Bound

Odd Length Lollipops

We label the vertices of G as follows:

Table 6.2.1. Labeling Algorithm for TL2k+1

where x1=0, and each xi is tight with x(i+1) for 1 ≤ i ≤ n-1. Similarly, we check against the restrictions placed in Section 3.1.

Note: This labeling only works for odd lollipops with n ≥ 7. In the case of n=5, the algorithm given above fails; however, the case n=5 does follow the formula given above (this can be checked with the code in Appendix B). Furthermore, all solutions generated by the algorithm have a consistent structure in GT.

Proof of Upper Bound

Using our distance labeling verification method from Section 3.1, we need min(di, d(i+1)) ≤ (D+1)/2. Checking the labeling above, for every three consecutive vertices, at least one adjacent pair is at distance k-1. As k-1 < (D+1)/2, we see that the above labeling method is valid.


Now, we need to show that the above method achieves the upper bound x(2k+1)=2k^2-2k+3. Since this is a tight hopping, there are no contributions from any vertices. Therefore, x(2k+1)=∑_{i=1}^{2k} (D+1-di). Then we get x(2k+1)=(2k+1-1)(2k-1+1)-∑_{i=1}^{2k} di, and once again it comes down to finding the hopping distance of this specific labeling scheme.

Finding the hopping distance of a labeling algorithm is not difficult, and some algebra verifies that ∑_{i=1}^{2k} di=2k^2+2k-3, giving x(2k+1)=2k^2-2k+3.

Even Length Lollipops

We label the vertices of G as follows:

Table 6.2.2. Labeling Algorithm for TL2k

where x1=0, and each xi is tight with x(i+1) for 1 ≤ i ≤ n-1. Again, we check against the restrictions placed in Section 3.1.

Note: This labeling algorithm only works for n ≥ 8. All cases TL2k with k < 4 were computed directly by computer.

Proof of Upper Bound

We need min(di, d(i+1)) ≤ (D+1)/2. Checking the labeling above, for every three consecutive vertices, at least one adjacent pair is k-1 apart, except in those distance pairs where cb(x(i+1)) > 0. As indicated in the section about the lower bound, it is precisely vertices v1 and v2 for which cb(vi)=1.

Now, we need to show that the above method achieves the upper bound x(2k)=2k^2-4k+6. Since we have ∑_{i=1}^{2k-1} di=2k^2-3, we get x(2k)=(2k-1)(2k-2+1)-(2k^2-3)+2. Some algebra reveals that x(2k)=2k^2-4k+6, as desired. So we have found rn(G) for the triangle lollipop graph.
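The same brute-force idea extends to triangle lollipops once distances are computed by breadth-first search. The sketch below is illustrative (hypothetical names, vertices 0,…,n-1 with the lollipop edge (v(n-1), v(n-3))); for TL5 it returns 7, matching 2k^2-2k+3 with k=2, in line with the machine-checked small cases mentioned above.

```python
from collections import deque
from itertools import permutations

def lollipop_distances(n):
    """BFS all-pairs distances on TL_n: the path 0-1-...-(n-1) plus edge (n-1, n-3)."""
    adj = {v: set() for v in range(n)}
    for v in range(n - 1):
        adj[v].add(v + 1)
        adj[v + 1].add(v)
    adj[n - 1].add(n - 3)
    adj[n - 3].add(n - 1)
    d = {}
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, dv in dist.items():
            d[(s, v)] = dv
    return d

def rn_bruteforce(d, n):
    """Minimum span over all label orderings, labeling each vertex greedily."""
    diam = max(d.values())
    best = None
    for order in permutations(range(n)):
        F = {order[0]: 0}
        for v in order[1:]:
            F[v] = max(F[u] + max(1, diam + 1 - d[(u, v)]) for u in F)
        s = max(F.values())
        best = s if best is None else min(best, s)
    return best
```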

Tightness Graphs

Introduction

Tightness graphs, GT, are an interesting way to categorize solutions, as they reveal the underlying structure of a solution. One nice application of the tightness graph is its use in generating a labeling algorithm for new graph types, which could help generate an upper bound for the graph type in general. Using the Python code in Appendix B, we may view GT for several Pn. Then, we construct a labeling algorithm by comparing solutions with similar GT. The next subsection features several structures and examples of solutions and their corresponding GT.

Structures and Examples

Solutions of P2k+1 with graphs GT that are also paths:

Figure 7.2.1. Graphs of solutions to P2k+1 and associated GT

Observe that all of these solutions follow the labeling algorithm in Section 5.2.1.

Figure 7.2.2. Solutions of P2k with graphs GT with a ladder structure

Figure 7.2.3. Graphs of solutions to P2k and associated graphs GT

Notice that all of these solutions follow the labeling algorithm in Section 5.2.2. However, not all solutions to Pn have similar GT structures. Figure 7.2.4 shows two examples of alternative structures for even and odd length paths. Notice that they have different labeling algorithms from the examples in Figures 7.2.1 and 7.2.2; solutions with tightness graphs following the structures in Figure 7.2.4 have separate labeling algorithms.

Figure 7.2.4. Graphs of solutions to P2k and associated graphs GT

From this section, we conclude that inspecting the structures of GT is a quick way to identify and organize solutions on G.

Conclusion and Future Work

We have established a general method for finding the lower bound of any graph G using the idea of the maximum hopping distance. In Sections 5 and 6, we found these lower bounds for Pn and TLn and constructed upper bounds that matched them. The key contribution of this work is a method by which rn(G) can be found and proven for a given class of graphs. Additionally, we were able to use computer search to find solutions for new types of graphs and to understand characteristics of these solutions. Further work is underway to find rn(G) of the triangular lattice graph. These graphs are of particular interest because cellular systems generally have broadcasters located in the pattern of a triangular lattice; thus, determining the optimal frequency coordination of such a graph will be of greater application than for path or lollipop graphs.

Acknowledgements

I would like to thank Dr. Teague for his help and support of my work during this project. I was able to bounce a lot of ideas off of him, and he really helped me stay hopeful when it seemed as though my ideas had dried up.

References

[1] D. Liu and X. Zhu, Multi-level distance labelings for paths and cycles, SIAM J. Discrete Math., in press, 2006.

[2] J. A. Gallian, A dynamic survey of graph labeling, Electronic J. of Combinatorics, DS NO. 06, 16(2009).

[3] J. R. Griggs, R.K. Yeh, Labeling graphs with a condition at distance two, SIAM J. Discrete Math. 5 (1992), 586–595.

[4] A. Molisch, Wireless Communications, Wiley-IEEE, New York, 2005.

The Effect of Substrate Density on the Rate of Migration of NIH-3T3 Fibroblasts

Elizabeth Tsui

ABSTRACT

Previous studies have suggested connections between the migration structures in normal cells known as podosomes and the migration machinery of cancer cells. Furthermore, recent studies contain evidence supporting a relationship between tissue density and metastatic cancer risk. Given that an increase in the risk for metastatic cancer is directly related to the rate of cell migration, this experiment explored the possible relationships between metastatic cancer risk (determined by cell migration rate) and collagen concentration (tissue density’s determining factor) through the use of NIH-3T3 fibroblasts. Fibroblast cells were seeded on top of hydrogels with collagen concentrations corresponding to the elastic moduli of normal and cancerous tissue (1.0 mg/mL and 4.0 mg/mL, respectively). They were subsequently observed migrating into the hydrogels over a 5 to 6 hour period, and average cell counts from the surface of each gel were recorded at three time points separated by 1 or 2 hour incubation intervals. ANOVA revealed that: 1) collagen concentration does induce a significant difference in the number of surface cells present over time; but 2) the slopes of the linear fits (i.e. rates of migration) were not shown to differ significantly between collagen concentrations. These results suggest that the density of a substrate may have some effect on cell migration without affecting migration rate.

Introduction

Certain cells in the body have a natural ability to form specialized structures that enable cellular migration. For example, white blood cells (leukocytes) must degrade extracellular matrices (ECMs) and migrate through multiple tissue barriers in order to fight off infections and foreign pathogens. Another example is seen in a newly proposed model for cell invasion, C. elegans; in order for the organism to complete normal development, a cell must migrate through an extracellular matrix known as the basement membrane [1,2].

Originally named rosettes because of their appearances in interference reference microscopy, these migratory structures are now commonly known as podosomes [3]. Podosomes typically consist of an F-actin core surrounded by various adhesion proteins such as talin, vinculin, and paxilin, as well as integrins that allow the structures to bind to the underlying substrate [4,5]. As is commonly known, actin is a major class of microfilaments, the components of the cytoskeleton responsible for cell motility.

Podosomes are classified according to their associated integrins, molecules that bind elements of the ECM and regulate ECM attachment [6]. They allow adhesive structures to form a bridge between the ECM and the cell cytoskeleton [7]. β1 and β2 integrins are associated with podosomes and play critical roles in macrophage fusion [8]. However, when podosome-like protrusions were examined in a 3 dimensional environment, only β1 integrins were shown to associate with the protrusions [5]. Similar results noting differences in podosome structure or morphology due to environmental factors have led to speculations suggesting that podosomes may play roles in helping cells sense their surrounding environments [9,10].

In order to migrate, cancer cells must detach from their original tumor sites and degrade the surrounding ECM. Interestingly, cancer cells form and extend similar F-actin rich protrusions known as invadopodia as the first step in metastasis [11]. Invadopodia found in cancer cells are analogous to podosomes and are implicated in cancer’s deadly ability to metastasize. In contrast to the shallow extension of podosomes, however, invadopodia are usually found clustered together as large actin and cortactin dots burrowing deep into the ECM [12]. They tend to be larger than podosomes, reaching measurements of 40 µm² (as compared to 0.4 µm²) [7].

Invadopodia tend to penetrate their surrounding substrates very deeply, and thus are associated with a significantly more focused and higher rate of degradation than podosomes are. Like podosomes, they recruit metalloproteinases to degrade matrices; however, the invadopodia’s more aggressive migration tendencies have been attributed to its additional recruitment of serine proteinases [13].

Invadopodia have also been thought to have a role in helping the cell sense its environment. Studies have previously shown that tissue density may be related to the likelihood of developing cancer. For instance, a study in 2003 comparing a metastatic and nonmetastatic cancer found that an increase in collagen content was associated with tumor development [14]. Furthermore, research done by Provenzano et al. (2009) showed that an increase in collagen concentration caused an increase in matrix density, a condition which promoted a malignant phenotype. Changes in microenvironment corresponded with changes in density, i.e. an increase in density created a more fibrous microenvironment with fewer matrix pores, as well as an increase in matrix density and rigidity which, finally, promoted an invasive phenotype [15,16].

Matrix rigidity is measured through the Young’s modulus, also known as the elastic modulus [16]. The Young’s modulus measures the stiffness of an object by measuring a substance’s resistance to deformation when a force is applied, e.g. objects with high stiffness such as glass and diamonds have high elastic moduli. A higher elastic modulus is also associated with an increased density due to the more fibrous microenvironment. Normal mammary tissue has an elastic modulus of 167 ± 31 Pa, while the tumor itself has a much higher elastic modulus of about 4049 ± 938 Pa [16].

However, the relationships of podosomes and invadopodia with their environments remain poorly characterized. As was shown by Van Goethem et al. (2011), the structure of podosomes varies widely with micro-environmental shifts. An example of this is the phenomenon of podosome group arrangement. In src-transformed cells, where the src tyrosine kinase is used to change the expression of a gene that codes for a component of podosomes, the podosomes form ring shaped structures known as rosettes. By comparison, in other cells, such as osteoclasts, podosomes tend to be arranged in clusters, showing the diversity of arrangement that podosomes possess in response to cell environment. Another study highlighting this phenomenon was done by Van Goethem et al. (2011) using Matrigel, a gel that mimics a migratory cell’s typical environment. This study found that multiple podosomes are produced during migration, perhaps indicating that podosomes not only degrade matrices, but also seek out areas of lesser density in order to perform the most efficient matrix degradation. Expanding on this finding, Carman et al. (2007) found that during lateral migration of leukocytes, dozens of podosomes formed quickly along the endothelium to probe the surrounding environment. Over nuclei, the podosomes were quickly retracted without fully migrating into the substrate, leaving shallow “podoprints.” Thus, a commonly purported hypothesis is that leukocytes use podosomes to locate areas of relatively low surface resistance in order to complete migration [9].

Despite recent progress, a number of questions about podosomes remain. For instance, what is the effect of substrate density on the migration behavior and structure of podosomes? Do cells migrate faster or slower on substrates of differing densities? The answers to these questions could have important implications for our understanding of cancer metastasis: if invadosomes do migrate preferentially because of density, variations in substrate density between tissues could be used to predict likely sites of metastasis [16].

To answer these questions, I observed NIH-3T3 mouse fibroblast cells seeded onto substrates of two collagen concentrations (4.0 mg/mL and 1.0 mg/mL), representing two substrate densities, over a five- to six-hour period to determine whether substrate density has any effect on the rate of migration of the fibroblasts. Since previous papers have suggested a change in migratory response based on the cellular microenvironment, I expected that a difference in substrate density would: 1) create a difference in the mean number of cells present on the surface of the hydrogels over time; and 2) affect the rate of migration of cells from the surface into the hydrogels for both treatments, causing low-density hydrogels to have rates of migration significantly different from those of high-density hydrogels.

Materials and Methods

My experiment consisted of two treatments: collagen concentrations of 4.0 mg/mL and 1.0 mg/mL, corresponding to high and low substrate densities, or cancerous and normal tissues, respectively. The concentrations were chosen based on previous literature (Paszek et al., 2005; Provenzano et al., 2009). To ensure consistency in data collection, glass slides were marked with uniform grids the size of 18 mm x 18 mm coverslips and labeled by hydrogel number and collagen concentration. To ensure uniformity in hydrogel size and shape, gel molds were made by wrapping glass coverslips with Carolina Observation Gel. Molds were then mounted onto the gridded glass slides and placed in Nunc cell culture dishes for gel formation (Figure 1).

Gels for replicates 1 and 2 were made using Hystem cell culture scaffold kits (Sigma-Aldrich) according to the manufacturer’s instructions. Collagen concentrations for these replicates were prepared by adding 110 µL of 4.0 mg/mL or 1.0 mg/mL collagen solution to 15 mL centrifuge tubes; 250 µL of each gel solution were then pipetted into the appropriate gel molds. For replicate three, Hystem-C cell scaffolds obtained from Glycosan Biosystems were formed from a 7.5 mL kit according to the manufacturer’s instructions. To form the hydrogels with the low collagen concentration of 1.0 mg/mL, 250 µL of Gelin-S (reconstituted collagen concentration of 4.0 mg/mL) were added to 15 mL centrifuge tubes. (Gelin-S is simply a powdered version of the collagen solution used in previous trials, suggested by the manufacturer as an alternative collagen source.) 750 µL of DG water was then added to the centrifuge tube to achieve a final concentration of 1.0 mg/mL; the tube was mixed until a slightly viscous, clear solution was obtained, and 1 mL of the solution was added to a ready centrifuge tube. 500 µL of the completed hydrogel was added to prepared gel molds. To form the hydrogels with the high collagen concentration (4.0 mg/mL), 1 mL of Gelin-S (collagen concentration of 4.0 mg/mL) was mixed with 1 mL of Hystem and 500 µL of Extralink. 500 µL of the complete hydrogel solution were pipetted into the appropriate gel molds and allowed to solidify.

Figure 1. Hydrogel molds in Nunc cell culture dishes mounted on top of gridded microscope slides.
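The low-concentration gel preparation above is a simple dilution. As a sanity check (a minimal sketch using only the volumes and concentrations stated in the text), the C1V1 = C2V2 relationship can be verified in a few lines:

```python
def diluted_concentration(c_stock, v_stock, v_diluent):
    """Final concentration after diluting a stock solution (C1*V1 = C2*V2)."""
    return c_stock * v_stock / (v_stock + v_diluent)

# 250 uL of 4.0 mg/mL reconstituted Gelin-S plus 750 uL water -> 1.0 mg/mL
low = diluted_concentration(4.0, 250, 750)
print(low)  # 1.0
```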

NIH-3T3 fibroblasts obtained from the Soderling Lab at Duke University were cultured in a 37°C CO2 incubator until the cells reached about 80% confluence. Cell density was then determined with a hemocytometer, and 500 µL of cell slurry at a density of approximately 5000 cells/mL were added to the solidified hydrogels; cells were allowed to attach for one hour (replicates one and two) or two hours (replicate three) before the initial cell count was taken. As expected, starting cell counts for hydrogels at both concentrations were not statistically different from each other. The attachment period was lengthened from one hour to two hours after initial incubation because cells from replicates one and two seemed to require more time to acclimate and attach to the hydrogels before the initial count, as shown in Figure 2 below.

Figure 2. Cells on surface of hydrogels at 1 hour after initial incubation (left) and two hours after incubation (right).

After one to two hours, the gels were observed and photographed at 100x with a Nikon inverted microscope and an attached Nikon D5100 DSLR. Surface cells were counted in four randomly chosen 20.25 mm2 boxes at three two-hour time intervals, given in Table 1. Each hydrogel had four subsamples within the treatment, and all data analysis was performed with JMP Student Edition 8 (SAS).

Table 1. Description of observation time points by replicate (times are given in hours after initial incubation).

Clumps of cells were counted as single cells to prevent bias toward higher cell counts. The mean cell counts of the four randomly chosen boxes were used to calculate the means for each type of substrate, which were then plotted against time; the rate of migration was represented by the slope of a linear fit. The mean cell numbers and slopes of the linear fits were then compared in JMP Student Edition 8 to determine statistical significance by ANOVA.
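The averaging and slope-fitting steps above can be sketched as follows. The counts are illustrative placeholders, not the study’s data, and the ordinary least-squares slope stands in for the linear fit that JMP would produce:

```python
# Hypothetical counts: four subsample boxes at each observation time (hours).
times = [2, 4, 6]
boxes = [[44, 46, 43, 45], [31, 33, 30, 32], [17, 19, 18, 18]]

# Mean surface cell count per time point.
means = [sum(b) / len(b) for b in boxes]

# Ordinary least-squares slope: rate of surface-cell decrease (cells/hour).
n = len(times)
tbar = sum(times) / n
ybar = sum(means) / n
slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, means)) \
        / sum((t - tbar) ** 2 for t in times)
print(means, round(slope, 3))
```

A negative slope corresponds to cells leaving the surface, i.e. migrating into the hydrogel.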

Results

ANOVAs of mean surface cell counts from each hydrogel were performed to determine statistical significance. The mean number of surface cells in replicate one differed significantly over time, indicating viability of fibroblast migration (i.e., the cells were able to form the structures necessary to initiate and continue migration into the hydrogels); this result was reproduced in all replicates. In replicate one, collagen concentration (representing substrate density) had no significant influence on mean cell counts over time (p = 0.367), indicating that the mean number of surface cells is not affected by substrate density, which is inconsistent with the original hypothesis. However, replicates two and three offer opposing results.

In replicate two, the ANOVA indicated a significant difference in the mean number of surface cells between high and low collagen concentrations (p = 0.0175), which supports my original hypothesis that manipulation of density would manifest a change between high and low collagen concentration substrates. Another interesting result lies in the shape of the graphed data: for both low and high collagen concentrations, the number of surface cells seemed to decrease linearly over time, with R2 values of 0.9958 (low) and 0.9941 (high). This suggests that while substrate density affects the mean number of surface cells, it does not alter the linear character of their decline.

Table 1. Description of observation time points by rep-

Figure 3. Time (hrs after initial incubation) plotted against the average surface cell counts obtained from each hydrogel by collagen concentration. A linear fit for hydrogels with a collagen concentration of 1.0 mg/mL (low) yielded the model y = -6.6667x + 44 (R2 = 0.9958). A linear fit for hydrogels with a 4.0 mg/mL collagen concentration (high) yielded the model y = -8.4583x + 57.375 (R2 = 0.9941). Error bars represent ±1 standard error.

Similar relationships were seen in replicate three of the experiment. ANOVA showed significant differences in the number of surface cells with respect to both time and collagen concentration (p < 0.0001 and p = 0.0081, respectively). Not only did the mean cell number decrease over time; the cell counts on the low-concentration hydrogels were significantly lower than the number of surface cells present on the high-concentration substrates, again supporting the original hypothesis that substrate density would affect the mean number of cells present on the surface of the gels over time. Linear regression lines again fit the decrease in mean surface cells over time well, supporting the conclusion that the mean number of surface cells decreases linearly over time.

Figure 4. Time (hrs after initial incubation) plotted against the average surface cell counts obtained from each hydrogel by collagen concentration. A linear regression for hydrogels with a collagen concentration of 1.0 mg/mL (low) yielded the model y = -6.4125x + 45.067 (R2 = 0.9977). A linear fit for hydrogels with a 4.0 mg/mL collagen concentration (high) yielded the model y = -7.65x + 54.75 (R2 = 0.9897). Error bars represent ±1 standard error.

The slopes of the linear regression lines for high and low collagen concentrations in replicates two and three were compared in JMP using a t-test. For both replicates, the slopes were not significantly different between treatments. Previous work speculated that a change in substrate density would correspond to some change in migration; here, however, while mean surface cell counts differed between collagen concentrations, the rate of decrease did not differ significantly between treatments. I therefore conclude that mean surface cell count, but not rate of migration, was affected by substrate density.
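A slope-comparison t-test of the kind run in JMP can be sketched as below. The counts are hypothetical placeholders, and the combined-standard-error formula for two independent regressions (df = n1 + n2 - 4) is one standard way to compare slopes, not necessarily the exact procedure JMP applies:

```python
import math

def slope_and_se(x, y):
    """OLS slope and its standard error for simple linear regression."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return b, math.sqrt(sse / (n - 2) / sxx)

# Illustrative mean surface-cell counts (not the study's raw data).
t_obs = [2, 4, 6]
low = [45, 31, 18]    # 1.0 mg/mL hydrogels
high = [57, 41, 23]   # 4.0 mg/mL hydrogels

b1, se1 = slope_and_se(t_obs, low)
b2, se2 = slope_and_se(t_obs, high)

# t statistic for H0: equal slopes; df = n1 + n2 - 4.
t_stat = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
print(b1, b2, round(t_stat, 3))
```

The resulting t statistic would then be compared against a t distribution with the stated degrees of freedom to judge significance.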

Conclusion

This experiment sought to address two questions: 1) what is the effect of substrate density on the rate of migration of NIH-3T3 fibroblasts; and 2) does manipulating the density of the cell’s environment (shown to affect morphology) also affect how quickly cells move into a substrate? Three main conclusions can be drawn. First, the mean surface cell number decreased approximately linearly over time for both high and low collagen concentrations, meaning that substrate density had no obvious effect on the shape of the decline in cell number. Second, substrate density affected the mean number of cells found on the surface of the hydrogel in two of three replicates, largely supporting my original expectation. Interestingly, the data do not support the hypothesis that a change in substrate density would change the rate of decrease of surface cells (i.e., the slopes of the linear fits compared between concentrations); rate of decrease was not affected by substrate density. However, three possible circumstances may have introduced error into this result. First, the low sample size within each replicate, decreased further within replicate two, may have increased overall variance. Second, while collagen is the largest component of the ECM, it is supported by a number of other substances, such as laminin, that may play a larger role in determining migration. Third, the collagen concentrations tested in this experiment may not have presented enough contrast to detectably alter the rates of cellular migration: as seen in Provenzano et al. (2009), the collagen concentration and elastic modulus of a substrate vary widely, with uncertainties of 31 Pa and 938 Pa for normal and cancerous tissues, respectively. The collagen concentrations used in this experiment may have fallen within that error range, resulting in an insufficient difference in tissue densities and a concomitantly undetectable change in cellular migration rates.

Future experimentation would include a larger sample size to decrease variance in the data. Experimental systems for cell culture suspension could also be improved: as shown in the left half of Figure 2, cells may have been transferred in clumps in certain areas, which would decrease the cell count, since each clump was counted as one cell, and because the boxes chosen for cell counts were random, there was no way to exclude those regions from data collection. To see whether another substance plays a larger role in determining migration rate, a different ECM component could be tested in place of collagen. If collagen were used again, more collagen concentrations could be tested to see whether a threshold collagen concentration must be reached before migration behavior changes. The observation interval could also be extended to see whether the effects of substrate density require a longer time to produce changes in migration behavior. Future experiments sandwiching the cells between substrates, instead of seeding them on top, could assess the cells’ density preferences; by giving the cells a choice, regions more conducive to metastasis could be identified. If rate of migration is affected by a change in substrate density, then a literature search could be conducted to determine tissue densities throughout the body, and metastasis rates could subsequently be compared to see if the invadopodia of cancer cells are similarly affected. This could be the link tying podosomes, the migration machinery of normal cells, to invadopodia, the structures that equip cancer cells with the ability to metastasize. While this explanation may not encompass the whole picture, identification of factors determining the direction of cell migration would represent a key advance in the fight against cancer.

Ideally, these podosomes could be the key to detecting potential sites of metastasis. However, applications of this knowledge can only be made possible with further research. As of now, the steps leading to fully developed invadopodia are unclear. For instance, does the formation of an F-actin core spark the development of the invasive protrusion by gathering surrounding metalloproteinases, or does the gathering of the proteins lead to the development of an F-actin core [8]? Other questions could address the implications of differences between podosomes in different cell types, i.e. are the same properties universally affected in all cell types? Also, it is still unknown whether invadopodia are truly related to podosomes. As a concrete definition of both is lacking, it is difficult to say whether the relationship between these two cellular components is truly homologous, or only superficial.

Even with all of the remaining questions, possible experimental models for learning more about podosomes, invadopodia, and their roles in tissue invasion have already been proposed. The Invadosome Consortium, a group of scientists committed to learning more about these structures, is making great strides in the elucidation of these organelles. Together, these and other initiatives are slowly but surely increasing what is known about the structures that have the potential to be incredible targets in the fight against cancer.

Acknowledgements

I would like to thank the Soderling and Blobe Labs at Duke University for providing NIH-3T3 fibroblasts; Dr. Amy Sheck and Korah Wiley of the North Carolina School of Science and Mathematics for invaluable assistance and advice; my Research in Biology peers, Ian Maynor, William Ge, Jordan Harrison, Chelsey Lin, Ashwin Monian, Jackson Mower, Aakash Gandhi, Hun Wong, Mark Kirollos, and Natalia Von Windheim, for their advice and suggestions; and Nathaniel Doty of Glycosan Biosystems for assistance with hydrogels. Finally, I give my thanks to the Glaxo Endowment to NCSSM for research funding.

References

[1] Carman, C.V. 2009. Mechanism for transcellular diapedesis: probing and pathfinding by ‘invadosome-like protrusions’. Journal of Cell Science 122: 3025-3035.

[2] Hagedorn, E. J. and D. R. Sherwood. 2011. Cell invasion through basement membrane: the anchor cell breaches the barrier. Current Opinion in Cell Biology 23:1-8.

[3] Pfaff, M. and P. Jurdic. 2001. Podosomes in osteoclast-like cells: structural analysis and cooperative roles of paxillin, proline-rich tyrosine kinase 2 (Pyk2) and integrin αVβ3. Journal of Cell Science 114: 2775-2786.

[4] Gavazzi, I., M. V. Nermut, and P. C. Marchisio. 1989. Ultrastructure and gold-immunolabeling of cell-substratum adhesions (podosomes) in RSV-transformed BHK cells. Journal of Cell Science 94: 85-99.

[5] Van Goethem, E., R. Guiet, S. Balor, G. M. Charriere, R. Poincloux, A. Labrousse, I. Maridonneau-Parini, and V. Le Cabec. 2011. Macrophage podosomes go 3D. European Journal of Cell Biology 90:224-236.

[6] Hynes, R. 2002. Integrins: bidirectional, allosteric signaling machines. Cell 110: 673-687.

[7] Linder, S. 2009. Invadosomes at a glance. Journal of Cell Science 122: 3009-3013.

[8] McNally, A. K. and J. M. Anderson. 2002. β1 and β2 integrins mediate adhesion during macrophage fusion and multinucleated foreign body giant cell formation. American Journal of Pathology 160: 621-630.

[9] Carman, C.V, P. T. Sage, T. E. Sciuto, M. A. de la Fuente, R. S. Geha, H. D. Ochs, H. F. Dvorak, A. M. Dvorak, and T. A. Springer. 2007. Transcellular diapedesis is initiated by invasive podosomes. Immunity 26: 784-797.

[10] Carman, C.V. and T. A. Springer. 2008. Trans-cellular migration: cell-cell contacts get intimate. Current Opinion in Cell Biology 20: 533-540.

[11] Condeelis, J., and J. E. Segall. 2003. Intravital imaging of cell movement in tumours. Nature Reviews Cancer 3: 921-930.

[12] Linder, S. 2007. The matrix corroded: podosomes and invadopodia in extracellular matrix degradation. TRENDS in Cell Biology 17: 107-117.

[13] Artym, V.V, Y. Zhang, F. Seillier-Moiseiwitsch, K. M. Yamada, and S. C. Mueller. 2006. Dynamic interactions of cortactin and membrane type 1 matrix metalloproteinase at invadopodia: defining the stages of invadopodia formation and function. Cancer Research 66: 3034-3043.

[14] Akiri, G., E. Sabo, H. Dafni, Z. Vadasz, Y. Kartvelishvily, N. Gan, O. Kessler, T. Cohen, M. Resnick, M. Neeman, and G. Neufeld. 2003. Lysyl oxidase-related protein-1 promotes tumor fibrosis and tumor progression in vivo. Cancer Research 63: 1657-1666.

[15] Paszek, M. J., N. Zahir, K. R. Johnson, J. N. Lakins, G. I. Rozenberg, A. Gefen, C. A. Reinhart-King, S. S. Margulies, M. Dembo, D. Boettiger, D. A. Hammer, and V. M. Weaver. 2005. Tensional homeostasis and the malignant phenotype. Cancer Cell 8: 241-253.

[16] Gillette, B.M., N. S. Rossen, N. Das, D. Leong, M. Wang, A. Dugar, S. K. Sia. 2011. Engineering extracellular matrix structures in 3D multiphase tissues. Biomaterials: doi:10.1016/j.biomaterials.2011.05.043. Accessed: April 19, 2012.

Chitosan-modified Cellulose as Adsorbent to Collect and Reuse Nitrate from Groundwater

ABSTRACT

Nitrate pollution of water systems in the United States continues to increase, presenting hazards to humans and the environment. To remove this extremely soluble ion, contributed largely by synthetic agricultural fertilizers, adsorption has great potential as a cost- and resource-efficient method compared to other options. In this study, chitosan, which becomes protonated in acidic solution, was combined with cellulose derived from cardboard; this combination of polymers yields a positively charged surface to attract nitrate. Batch studies revealed that chitosan-modified cellulose improved adsorption capacity from 0.3356 to an average of 3.124 milligrams of NO3- per gram of adsorbent. Linear regression on the Langmuir isotherm was used to describe the adsorption characteristics. Based on the fit, effective adsorption increases as more adsorbent is present, producing a relationship between adsorption site availability and resulting concentration due to increased aggregate charge and attraction to nitrate ions. Desorption was also evaluated, with chitosan-modified cellulose releasing 0.294-4.7% of the adsorbed amount, indicating the possibility of use as a slow-release fertilizer as the organic polymers decompose in soil. Compared to related materials, the investigated adsorbent was more environmentally friendly and adsorptive, as well as simpler to produce. Larger-scale studies and optimization of the cellulose-chitosan ratio will improve further upon this research.

Introduction

From 1988 to 2004, the proportion of wells in the United States exceeding the national limit for nitrate concentration increased from 16 to 21 percent. Wells account for about 15 percent of the public water supply, which translates to over 3 percent of the population drinking water containing excess nitrate [1]. Nitrates can cause medical complications if consumed, as well as undesirable environmental phenomena such as eutrophication [2]. The increasing presence of nitrates in freshwater wells is a growing concern; managing the nitrogen cycle has been named one of fourteen “Grand Challenges” by the National Academy of Engineering [3]. Recent comprehensive assessments of nitrate pollution of groundwater, shown to affect the quality of life of whole communities at a time, have also drawn media attention [4].

The United States Environmental Protection Agency (USEPA) has set the Maximum Contaminant Level (MCL) at 10 mg/L nitrate as nitrogen [5]. Recent trends indicate increasing nitrate levels. As part of the nitrogen cycle, nitrate emerges both from direct input to the soil and from conversion of other nitrogen-based compounds, such as ammonia. Nitrate becomes toxic when converted to nitrite in the human body, leading to medical problems such as methemoglobinemia (blue baby syndrome) as well as a higher risk of thyroid cancer. One study of a primarily farming-based community reported health complications that may have stemmed from unusually high nitrate concentrations in the local water; the same report concluded that 96 percent of nitrate pollution in the area had come from agricultural sources [4]. Concentrations in the rest of the nation show no signs of stabilizing as agricultural demands, along with fertilizer input, continue to rise (Figure 1).

Figure 1. Jagged line shows historical and sampled data concerning input from fertilizer. Plotted points indicate the increase of number of wells over the MCL, a trend set to increase along with nitrogen input [1].

Nitrate Removal Techniques

The complexity of soil and water processes and the variety of substances involved make it difficult to identify the most effective means of water remediation. Current large-scale methods of decontamination are typically incomplete in removal, unreliable, or dependent on large amounts of resources or power, and are therefore costly [2].

Nitrate removal presents an even more challenging problem: because nitrate is extremely soluble, basic precipitation and filtration techniques are ineffective. Current technologies for treating nitrate-contaminated water include ion exchange, reverse osmosis, and electrodialysis [6]. These and other techniques are generally expensive and not sufficiently effective, often complicating the processes and factors involved. Of those listed, ion exchange is the least costly and least technology-intensive option, but it releases another like-charged substance as the pollutant is taken up.

Consideration of Nitrate Adsorption

Recent studies suggest that an even simpler and more effective technique, adsorption (attraction to the surface of a material), offers great potential [6]. Because nitrates readily leach from soil and dissolve completely in water, extensive research has been and continues to be done to find adsorbents that offer an environmentally friendly, reusable, and cost-efficient way to lessen the problem of nitrate pollution. Despite these desirable qualities, no feasible nitrate adsorbent has yet been found.

Due to the promise of nitrate adsorption as a future primary method of decontamination, a variety of materials have been investigated. Though engineered substances, such as activated materials and altered clays, could no doubt be very effective nitrate adsorbents, introducing new materials into the environment could create additional problems to be addressed. A review detailing advances in phosphorus removal suggests that, ideally, the pollutants removed would serve as raw materials for fertilizer [7]. If this were accomplished for nitrates, a sustainable cycle would be achieved, as it would not be necessary to exploit new resources.

Natural adsorbent possibilities include carbon-based sorbents, natural sorbents, biosorbents, waste materials, and miscellaneous others [6]. This wide range indicates widespread uncertainty as to what an ideal adsorbent would entail. One very common waste product not listed as having been researched is paper, which is available in significant volumes and accessible for study.

Waste paper comes in many forms and in large volumes, making it an attractive possibility. After saturation, it could be repurposed for agricultural use, and it offers potential for both physical and chemical modification. Compared with the variety of other adsorbents that have been investigated, paper could be a viable option, having comparable surface area, texture, and a chemically unreactive composition. Paper is relevant to the previously mentioned materials not only in being environmentally friendly but also in composition and properties. The characteristics of a desirable adsorbent are significant surface area and volume on and in which the target substance can collect. Cellulose, the main component of paper, has been experimentally shown to have high values for these characteristics compared to other similar fibers: using Brunauer-Emmett-Teller (BET) theory, which describes physical adsorption on a surface, and related isotherms, a surface area of 0.45 square meters per gram and a total micropore volume of 0.50 cubic millimeters per gram of cellulose were determined [8]. Although these values are not comparable to those of, for example, activated carbon or organoclays, which are greater by several orders of magnitude, they are sufficient to indicate potential for improvement and use as an adsorbent.

Cellulose has a slightly negative surface charge due to its outer hydroxyl groups (Figure 2a), which would repel the like-charged negative nitrate ions, and as a polymer it is not strong enough to induce ion exchange. Modification of the surface charge is therefore necessary. Chitosan, a natural polymer derived from dried crab and shrimp shells, becomes positively charged when its outer amino groups (Figure 2b) are protonated, which suggests potential for nitrate removal; it has been reviewed for effective removal of the negatively charged chromate ion [9].


Figure 2. (a) Structure of cellulose unit with hydroxyl groups, (b) structure of chitosan unit with hydroxyl groups and amino groups [10].

In summary, a need for effective and green nitrate removal technology is evident, and the proposed nitrate adsorption method may hold great potential. This study aimed to develop an effective nitrate adsorbent from cellulose and chitosan, based on the principles of simplicity and sustainability, and to evaluate the material’s properties and potential for repurposing.

Materials and Methods

Preparation of Cellulose Adsorbent

A plain used corrugated cardboard box was chosen as the source of the cellulose base for the adsorbent. The wide availability of cardboard and its often minimal ink coverage made it an ideal candidate for the study.

The cardboard was cut into approximately one-centimeter-square pieces and broken down to a pulp by soaking in water. Before proceeding, precipitation tests were done to check for chemicals already present in the cardboard: small drops of AgNO3; HNO3 and (NH4)2MoO4; BaCl2; and NaOH were added to test for chlorides, phosphates, sulfates, and ammonium, respectively. Precipitation was minimal in all tests, indicating little concern for interfering or polluting ions.

After soaking, the cellulose samples were then processed in a blender into pulp. The pulp was dried on a fiberglass screen, and the final particle size after drying was reduced to around 40 mm3 in volume and 68 mm2 in surface area.

Practical-grade chitosan from Sigma-Aldrich was used to make a solution of 1% chitosan by mass and 1% acetic acid by volume [11]. Protonation of the chitosan amino groups (from -NH2 to -NH3+) in the presence of acid made the chitosan both soluble and positively charged. The mixture was stirred at 80°C until the chitosan dissolved completely, forming a viscous solution. After cooling, the same soaking and pulping procedure was used as for the water-pulped cellulose. Dried chitosan-modified cellulose formed a thinner and more brittle sheet, so the final particle size had the same surface area of approximately 68 mm2 but a volume of only 24 mm3.

Preparation and Measurement of Nitrate Adsorbent

Because potassium is another common chemical used in conjunction with nitrates in fertilizer, solid crystalline KNO3 was used throughout the experiment to make the standard nitrate solutions, keeping them relatively realistic with respect to actual environmental situations. Concentrations were expressed in units of nitrate as nitrogen. A 50 mg/L NO3-as-N solution was prepared and used for the batch studies.

A Vernier Nitrate Ion-Selective Electrode (ISE) was used to measure nitrate-as-nitrogen concentration; it was calibrated using the voltages of 100 and 1 mg/L NO3-as-N standards. All concentration and mass values of nitrate are given as NO3 as N. LoggerPro software was used to collect data from the probe.
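A two-point ISE calibration of this kind typically assumes the electrode potential is linear in the logarithm of concentration (a Nernstian response). The sketch below illustrates the arithmetic with hypothetical voltages; the actual calibration was performed by the Vernier probe and LoggerPro:

```python
import math

# Two-point calibration: potential assumed linear in log10(concentration).
# Standards are the 100 and 1 mg/L NO3-as-N solutions; voltages are hypothetical.
E1, C1 = 120.0, 100.0   # mV measured in the 100 mg/L standard
E2, C2 = 230.0, 1.0     # mV measured in the 1 mg/L standard

slope = (E1 - E2) / (math.log10(C1) - math.log10(C2))   # mV per decade

def concentration(E):
    """Concentration (mg/L NO3 as N) implied by a measured potential E (mV)."""
    logC = math.log10(C2) + (E - E2) / slope
    return 10 ** logC

print(concentration(175.0))  # 10.0 (a reading halfway between the standards, in log space)
```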

Batch Studies

Adsorption experiments were primarily carried out as batch studies. To characterize the plain pulped cellulose, approximately 0.2 grams of adsorbent were added per 50 milliliters of equal-concentration nitrate solution in a beaker for each sample. A magnetic stirrer for each beaker was set to 600 rpm, and samples were stirred for one hour. Samples from each beaker were then centrifuged and left for at least 48 hours of additional contact time, then centrifuged once again. Before data collection, ammonium sulfate ionic strength adjuster (ISA), 2 M (NH4)2SO4, was added at a ratio of 2:100 to the sample volume to reduce possible measurement interference from other contents of the sample. During collection, the ISE was held in place in each centrifuge tube for at least one minute to allow values to equilibrate.

Samples from batch studies needed to be centrifuged before measurement so that the ISE would come in contact only with solution, not with pieces of adsorbent. Overall, the contact time was maintained at approximately 48 hours as described above, but variations of the centrifuge procedure were tested for effectiveness. Three methods, denoted CB1, CB2, and CB3 (where CB stands for cardboard, indicating use of plain cardboard), were compared: CB1 samples were centrifuged immediately after mixing but left alone for the remainder of the time; CB2 samples were not centrifuged until after the 48 hours; and CB3 samples were centrifuged both immediately and after sitting, as done originally.

To compare unmodified and modified cellulose, batch studies were run with all nitrate samples originating from one 500-milliliter flask to ensure standardization. 0.2 grams of each kind of adsorbent were used per 50 milliliters of solution. As before, all samples were stirred, centrifuged, left to sit, and centrifuged again; ISA was added before measurement.

Adsorption Isotherms

To characterize the adsorptive behavior of cardboard cellulose with chitosan, the adsorbent-to-adsorbate ratio was varied by running batch studies again with varying adsorbent dosage. Adsorbent masses of 0.1002, 0.1992, 0.3007, 0.4992, and 0.6994 g were used. The relationship of dosage and adsorptive capacity to final concentration was analyzed by applying two major adsorption models, as done in published adsorption analyses [12].

The Freundlich isotherm is the most basic adsorption isotherm and describes the amount of adsorbate per adsorbent as a function of the resulting solution concentration. The isotherm is defined as

x = K C^(1/n)     (Equation 1.1)

and the linear form as

log(x) = log(K) + (1/n) log(C)     (Equation 1.2)

where x is mass of adsorbate adsorbed per mass of adsorbent, C is solution concentration after adsorption, and K and 1/n are constants.

The Langmuir isotherm takes more specific assumptions into consideration. In particular, the Langmuir isotherm hypothesizes a uniform monolayer capacity for the adsorbent, that is, an equal capability of all sites to adsorb. Also included is the assumption that adsorbed molecules do not interact with or deposit on each other. The isotherm is defined as

x = xm K C / (1 + K C)     (Equation 2.1)

and the linear form as

1/x = 1/xm + 1/(xm K C)     (Equation 2.2)

where x and C are the same as in the Freundlich isotherm, and xm and K are constants. In particular, xm denotes the maximum x in a monolayer of adsorbate on adsorbent.

The R2 value for the linear fit of each model was considered, and the better fit was used to evaluate the adsorptive behavior of the cellulose with chitosan.
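As a sketch of this model comparison, both linearized isotherms can be fit by least squares and ranked by R2; the concentration and uptake arrays below are hypothetical placeholders, not the study's data.

```python
import numpy as np

def linear_r2(x, y):
    """Least-squares slope, intercept, and R^2 for y = m*x + b."""
    m, b = np.polyfit(x, y, 1)
    residuals = y - (m * x + b)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return m, b, 1.0 - ss_res / ss_tot

# Hypothetical equilibrium concentrations C (mg/L) and uptakes x (mg/g)
C = np.array([5.0, 10.0, 20.0, 40.0])
x = np.array([0.8, 1.3, 1.8, 2.1])

# Freundlich linear form (Equation 1.2): log(x) = log(K) + (1/n) log(C)
inv_n, logK, r2_f = linear_r2(np.log10(C), np.log10(x))

# Langmuir linear form (Equation 2.2): 1/x = 1/xm + (1/(xm*K)) * (1/C)
slope, inv_xm, r2_l = linear_r2(1.0 / C, 1.0 / x)
xm = 1.0 / inv_xm  # monolayer capacity implied by the fit

best = "Langmuir" if r2_l > r2_f else "Freundlich"
print(f"Freundlich R^2 = {r2_f:.4f}, Langmuir R^2 = {r2_l:.4f} -> {best}")
```

The same two regressions, run on the study's measured data, produce the R-squared values compared in Table 5.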

Desorption Experiment for Reuse

Adsorbent pieces contained in samples from the comparison of unmodified and modified cellulose were dried on a fiberglass screen. If the pieces were to be added back into the ground as fertilizer, the extent to which nitrate leaches back out would be important. In order to examine possible significance in reusability, the adsorbents were placed in separate beakers of 6 mL of water. After three days of soaking, the nitrate as nitrogen concentration of the water was measured as a representation of desorption, to be compared to the original amounts adsorbed by the particles and considered for effectiveness of nitrate reuse.
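The desorption accounting described here can be sketched as follows, with invented numbers standing in for the measured values.

```python
def desorption_percent(c_desorbed, soak_volume_l, adsorbed_mg):
    """Percent of previously adsorbed nitrate-N that leached back
    into the soaking water (concentration in mg/L, volume in L)."""
    desorbed_mg = c_desorbed * soak_volume_l
    return 100.0 * desorbed_mg / adsorbed_mg

# Hypothetical: the 6 mL of soak water reads 8 mg/L after three days,
# and 0.25 mg of nitrate-N had originally been adsorbed
print(desorption_percent(8.0, 0.006, 0.25))  # 19.2 (%)
```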

Results and Discussion

Effect of Centrifuge Methods


The data collected directly from the nitrate probe produce plots of concentration versus time (Fig. 3), which cannot easily be used directly. For the study of the effect of centrifuge methods, as in subsequent studies, several steps were taken to obtain a more useful form. The mean concentration was calculated for each sample, considering only the latter 75% of the data; that is, the values measured during the first 25% of the total time for each sample were disregarded to allow the nitrate ISE to reach stability. In particular, measurements related to the previously mentioned CB1, CB2, and CB3 methods were analyzed.
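The stabilization step just described, discarding the first quarter of each probe trace before averaging, can be sketched as follows; the readings are invented for illustration.

```python
import numpy as np

def stabilized_mean(readings, discard_frac=0.25):
    """Mean of an ISE time series after dropping the initial fraction,
    giving the electrode time to equilibrate."""
    readings = np.asarray(readings, dtype=float)
    start = int(len(readings) * discard_frac)
    return readings[start:].mean()

# Hypothetical probe trace: drifts at first, then settles near 42 mg/L
trace = [55.0, 50.0, 46.0, 44.0, 42.2, 42.1, 41.9, 42.0]
print(stabilized_mean(trace))  # averages the last 6 of the 8 readings
```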

Figure 4. Average concentrations of solutions using different centrifuge methods.

Table 1. Comparison data for centrifuge methods.

Once a single data point had been determined for each sample within each data set, these points were used in their respective analyses.

Figure 3. Format of raw data when measuring concentrations (mg/L NO3) for centrifuge methods CB1, CB2, and CB3 using ISE.

Comparison

The average concentrations for samples run with no CB, CB1, CB2, and CB3 are shown in Fig. 4. To appropriately compare the methods, the concentrations were converted to mass of nitrate as nitrogen to then evaluate milligrams adsorbed per gram of adsorbent.

The data indicate CB3 as the most effective procedure: centrifuging samples multiple times may improve adsorption because the adsorbent, along with the nitrate already collected on it, is physically separated from the solution, decreasing desorption back into the solution while still allowing extended contact time. CB3 was used for all subsequent batch studies.

Though CB3 produced the greatest adsorptive result of these methods, a ratio of approximately 1 milligram adsorbed per gram of adsorbent leaves much room for improvement.
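The conversion from concentrations to milligrams adsorbed per gram of adsorbent can be sketched as follows; the concentrations below are illustrative, not measured values.

```python
def uptake_mg_per_g(c_initial, c_final, volume_l, adsorbent_g):
    """Mass of nitrate-N removed from solution (mg) per gram of
    adsorbent, from initial/final concentrations in mg/L."""
    removed_mg = (c_initial - c_final) * volume_l
    return removed_mg / adsorbent_g

# Illustrative batch: 50 mL of 50 mg/L solution, 0.2 g adsorbent,
# final concentration 45 mg/L after the contact time
print(uptake_mg_per_g(50.0, 45.0, 0.050, 0.2))  # 1.25 mg/g
```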

Figure 5. Average concentrations of solutions with unmodified and modified CB. Error bar indicates standard deviation of concentrations with chitosan-modified CB.

Table 2. Specific data for unmodified and modified cellulose comparison.

Table 3. Data for isotherm analysis, using various CB dosages.

Table 4. Modified values for linear fit. * Outlier not used in isotherm analysis.

Table 5. Adsorption isotherm values.

Figure 6. (a) Linear Freundlich fit, with Log(C) along x-axis, Log(x) along y-axis, and R-squared = .7917; (b) linear Langmuir fit, with 1/C along x-axis, 1/x along y-axis, and R-squared = .9596, indicating a better fit statistically.

Effect of Chitosan Modification

Multiple samples of both unmodified and modified cellulose were run using initial concentration taken from one flask of 50 mg/L NO3 solution. The results (Figure 5) indicate that the addition of chitosan greatly improves nitrate uptake. A typical, standard sample of unmodified cardboard is compared to samples with chitosan content, showing that a reduction in concentration of over 10% is possible.

The adsorptive strength of cellulose with chitosan is greater than that of cellulose alone. The attraction between the positively charged chitosan surface and the negatively charged nitrate ions is the likely explanation for the improvement.

The standard deviation of 3.48 mg/L NO3 in post-adsorption concentrations is due to the non-uniformity of samples. The final sample of cardboard with chitosan produced a significantly higher concentration reading even though it was taken from the same batch in the same beaker as the other samples. As samples were transferred from beakers to centrifuge tubes, the distribution of cardboard pieces throughout the solution was not homogeneous, causing the adsorbent-to-adsorbate ratio to vary during the extended contact time. Even with such variations in the amount of adsorbent, a significantly higher proportion of nitrate was adsorbed in the presence of chitosan, ranging from 13.495% to 32.087%, compared to plain cardboard, which in this case adsorbed 2.713% of total nitrate.

Adsorption Isotherms

Tables 3 and 4 provide data produced for five different amounts of cellulose and one standard sample. The dosage of 0.1002 g produced values that were ultimately not used in the isotherms due to the uncertainty of transferring batch solutions to centrifuge tubes, where the low mass of adsorbent made homogeneity of the mixture less certain. The data reflected this uncertainty, as the value was an extreme outlier.

Linear fits were performed for the Freundlich (Equation 1.2) and Langmuir (Equation 2.2) isotherms, depicted in Figures 6.a and 6.b, respectively. The linear regression values are given in Table 5, with corresponding notation. The Langmuir fit suited the data better, as indicated by its R-squared value of .9596 as opposed to the Freundlich fit's .7917.

The Freundlich isotherm is empirical, involving only the concentration and two constants, and therefore too simple to appropriately model the data. Interrelated constants and more parameters likely made the Langmuir isotherm more mathematically appropriate for the data.

Because the Langmuir regression exhibited a better linear fit, the values were taken and placed into the original equation (2.1), giving

which has a modified form and is shown graphically in Figure 7.

However, the value of xm, the maximum adsorbate per mass of adsorbent, is lower than the plotted x values, so it appears that, in this case, the data do not follow the Langmuir assumptions.

From the raw data, it was clear that concentration generally decreased as more adsorbent was added. However, the isotherms compare sites filled per adsorbent against resulting concentration, not just amount of adsorbent against concentration. In an ideal Langmuir fit, increasing concentration would indicate an increased proportion of adsorbate per adsorbent, as more sites on the adsorbent would be available for use; the data presented here suggest an inversely proportional relationship instead. For the chitosan-modified cellulose, surface area and volume did not necessarily increase proportionately with mass, as the pieces were flat rather than rough particles. It would therefore be inaccurate to suppose that the amount adsorbed per gram would change with increasing adsorbent mass simply because more adsorption sites were open.

Instead, the fit seems to suggest that the cellulose with chitosan becomes more and more apt to adsorb as more adsorbent is available and attractive (with greater aggregate charge) to the nitrate, rather than having a set number of sites that compete with each other.

Figure 7. Inversely proportional relationship given by the Langmuir isotherm between C, the resulting concentration after adsorption (x-axis), and x, the amount adsorbed per mass of adsorbent (y-axis).

Desorption for Reuse

To evaluate the possibility of re-releasing the adsorbed nitrate into the ground by reusing adsorbent as fertilizer material, desorption was measured for used samples of plain and chitosan-modified cardboard cellulose. The plain adsorbent exhibited much higher desorption proportions than the modified adsorbent, in part due to the much less fibrous and more rigid structure of the latter.

Total adsorbed mass and adsorbent mass were used from Table 3 to calculate desorption values. A standard concentration was measured from the water used to soak the samples to calculate amount desorbed.

Table 6. Desorption of used samples.

The extreme polarity in desorption rates indicates a significant difference in the properties of the cellulose after modification.

Though this experiment was originally run to identify promising desorption rates for nitrate re-release in soil, the low desorption rates in fact indicate another desirable quality. The behavior of plain cardboard would suggest only very temporary adsorption, because if left in solution for extended periods of time, the nitrate would leach back out. Cardboard with chitosan, on the other hand, would attract and keep adsorbed nitrate. Once added to soil, decomposition of the adsorbent as a whole would slowly release nitrate, due to the organic nature of both cellulose and chitosan, allowing for various practical applications.

Comparison to Other Adsorbents

The maximum mass of nitrate adsorbed per mass of adsorbent ("x"), compared to existing data for other adsorbents, indicates the significance of this study in the context of nitrate adsorption experiments. The maximum "x" was not found through the Freundlich or Langmuir models because the chitosan-modified cellulose adsorbent did not follow the basic assumptions of the isotherms. Thus, considering the highest "x" values across multiple batch studies, including the adsorption isotherm study, it appears that 0.200 g adsorbent per 50 mg/L NO3 as N solution produced the most efficient adsorption and gives the closest estimate of xm.

Published papers cover a multitude of techniques and materials tested for nitrate adsorption. But many of these involve synthesis (chitosan beads, carbon cloth, organoclays) or the addition of chemicals (HCl activation, dimethylamine), some of which are reactive (epichlorohydrin, which forms a carcinogen in water). Such adsorbents cannot be fairly compared to the adsorbent in this research, which uses existing waste and adds only a natural polymer. Therefore, only xm values for adsorbents comparable in either principle or materials were considered for comparison.

Table 7. Adsorptive capacity xm of published adsorbents versus that of this study. *Average of all x values given batch of 0.200 g CB per standard solution.

Untreated natural waste products exhibit lower adsorption potential, while synthesized chitosan beads show immense adsorptive strength. Combining these two situations gives the slightly improved capacity of cellulose with chitosan. Desorption data is unknown for these materials, but it is unlikely that chitosan hydrobeads, which are less cost-efficient, more complex to form, and involve no recycled materials, would be reused in a fertilizer. Therefore, compared with the possibly reusable adsorbents mentioned in published work, the adsorbent produced in this research has considerable adsorptive strength, as well as further implications for environmental sustainability.

Conclusion

Cellulose waste in the form of cardboard was successfully characterized as an adsorbent for aqueous nitrate. Modification with chitosan improved cellulose adsorption in a standardized experiment from adsorbing 2.713% to an average of 24.97% of nitrate in solution. An adsorption isotherm study was then performed, but offered inconclusive results for describing the mechanism of the adsorbent, though a trend was identified. Low rates of desorption were determined for the cellulose with chitosan, which suggests the possibility of slow-release fertilizer use in application. Minimal desorption in solution is also promising in that adsorbed material will remain on the adsorbent even if exposed to water for extended amounts of time.

The maximum determinable adsorption capacity from the study was compared to values in published work. Only research involving natural materials, such as bamboo and straw, was considered, as adsorbents with added chemicals would increase cost and environmental complications. The cardboard and chitosan adsorbent made in this study exhibited more efficient adsorptive behavior than such published work. The high adsorptive capacity of synthesized chitosan beads was also considered, as it suggests that improved chitosan and cellulose integration could improve adsorption as well.

Positive implications for the use of chitosan-modified cellulose include decreasing paper waste, minimizing the addition of environmental hazards, reuse as fertilizer, and appreciable adsorptive ability. Column studies are planned to scale up the research. The results of such research would then indicate the possibility of using the cellulose pieces in water filtration systems, groundwater and well treatment, or integration into other processes. A wide range of applications exists, especially because the chitosan-modified cellulose shows considerable aptitude for adsorbing nitrate.

Acknowledgements

I would first like to thank Dr. Myra Halpin for guidance and inspiration through the Research in Chemistry program at the North Carolina School of Science and Mathematics (NCSSM), which provided me with lab space. Dr. Halpin sparked my interest in environmental science, specifically nitrate pollution, and helped me develop the project. I would also like to thank Dr. Monique Williams from NCSSM for supervising the project for several weeks. Next, I would like to thank Dr. Martin Hubbe and Dr. David Genereux from North Carolina State University for answering questions I had for them on adsorption studies, materials, and groundwater treatment. I also have many thanks for my peers in the Research in Chemistry program, as they provided invaluable encouragement and input throughout the project. Last but not least, I would like to thank my family for support throughout this venture as I approached the project largely independently.

References

[1] Dubrovsky, N.M., & Hamilton, P.A. (2010). Nutrients in the Nation's Streams and Groundwater: National Findings and Implications. U.S. Geological Survey Fact Sheet 2010-3078, 6.

[2] Sparks, D. L. (1995). Environmental Soil Chemistry. San Diego, CA: Academic Press.

[3] National Academy of Engineering (2008). Manage the nitrogen cycle. NAE Grand Challenges for Engineering. Retrieved September 25, 2012 from http://www.engineeringchallenges.org/cms/8996/9132.aspx.

[4] Holbrook, S. (2012). Farming Communities Facing Crisis Over Nitrate Pollution, Study Says. Food & Environment Reporting Network. Retrieved September 25, 2012 from http://thefern.org/2012/03/farming-communities-facing-crisis-over-nitrate-pollution-study-says/.

[5] United States Environmental Protection Agency (2012). Drinking Water Contaminants. Retrieved September 25, 2012 from http://water.epa.gov/drink/contaminants/index.cfm.

[6] Bhatnagar, A., & Sillanpää, M. (2011). A review of emerging adsorbents for nitrate removal from water. Chemical Engineering Journal, 168, 2, 493-504.

[7] de-Bashan, L.E., & Bashan, Y. (2004). Recent advances in removing phosphorus from wastewater and its future use as fertilizer (1997–2003). Water Research, 38, 19, 4222-4246.

[8] Bismarck, A., Aranberri-Askargorta, I., Springer, J., Lampke, T., Wielage, B., Stamboulis, A., Shenderovich, I., & Limbach, H. (2002). Surface Characterization of Flax, Hemp and Cellulose Fibers; Surface Properties and the Water Uptake Behavior. Polymer Composites, 23, 5.

[9] Hubbe, M.A., Hasan, S.H., & Ducoste, J.J. (2011). Cellulosic Substrates for Removal of Pollutants from Aqueous Systems: A Review. 1. Metals. BioResources, 6, 2161-2287.

[10] Royal Society of Chemistry. Cellulose and Chitosan chemical structures. ChemSpider. Retrieved September 25, 2012 from http://www.chemspider.com/Chemical-Structure.26943876.html and http://www.chemspider.com/Chemical-Structure.2342878.html?rid=b656acee-9c8e-4951-9c69-480347c7db87

[11] Urreaga, J.M., & de la Orden, M.U. (2006). Chemical interactions and yellowing in chitosan-treated cellulose. European Polymer Journal, 42, 10, 2606-2616.

[12] Okeola, O.F., & Odebunmi, E.O. (2010). Comparison of Freundlich and Langmuir Isotherms for Adsorption of Methylene Blue by Agrowaste Derived Activated Carbon. Advances in Environmental Biology, 4, 329-335.

[13] Mizuta, K., Matsumoto, T., Hatate, Y., Nishihara, K., & Nakanishi, T. (2004). Removal of nitrate-nitrogen from drinking water using bamboo powder charcoal. Bioresource Technology, 95, 3, 255-257.

Generation of Electricity from the Wind Draft of Cars

ABSTRACT

We developed a theoretical turbine power output model dependent on automobile speed and turbine distance from cars. Analysis of the data from experimental field tests with rush hour traffic, controlled single-car testing, and CFD modeling showed that our turbines generated electricity, but did not support our theoretical model, which assumed laminar flow and spherical cars. Our study represents a creative implementation of wind power that may have significant economic/environmental implications for the future of renewable energy.

Introduction

Motivation

In the current era, energy is primarily obtained from coal and oil [1]. Yet as these sources run thin, the world must look toward more sustainable options, such as renewable energy. One major form of renewable energy, and the topic of this research project, is wind energy. Wind energy is not yet widely used [1], but it is a very viable source of energy for the future, especially if it is harnessed in innovative and efficient ways. Wind power has proven to be a growing industry, especially in the past seven years [2].

The Physics of Wind Turbines

Kinetic energy from moving air can be converted to usable electrical energy [3]. This conversion involves the rotation of blades on a turbine and the use of a generator. Lift forces from the moving air rotate the blades, spinning the rotor; the resulting circular motion changes the magnetic flux within the generator, inducing an electrical current.

Measurable factors dictate the power of a wind turbine [4], as expressed in the following function [5]:

P = (1/2) ρ C A v³

Equation 1.

In the above formula, P denotes power, ρ represents air density, C is the coefficient of performance (efficiency), A is rotor swept area, and v signifies wind speed. It is important to note that the wind speed component of the power formula is the wind speed that goes through the wind turbine. In the scenario proposed for this research project, that speed is not actually the same as that of the moving vehicle that is causing the blades of the wind turbine to spin.

The speed of the wind through the wind turbine caused by a moving object can be calculated using the velocity potential (φ) of the fluid field surrounding the moving object [6]. In order to employ a relatively simple version of this method, a few assumptions must be made: the flow is irrotational, the flow is axisymmetric, the flow is laminar, and the object that is causing the flow is a sphere moving in the fluid field. According to these assumptions, the formula for velocity potential (as expressed through polar coordinates) is the following:

φ = −(U a³ / 2r²) cos θ

In this equation, φ represents velocity potential, U is the velocity of the moving sphere, a is the radius of the sphere, and (r, θ) defines a polar coordinate determined by the location being studied. The following diagram depicts the scenario for this model:

Figure 1. The above diagram illustrates that the model suits a scenario in which both the turbine AND the sphere's center fall on the same plane and the sphere moves at some velocity U.

In order to determine the fluid/wind velocity at the given polar coordinate, the gradient of the velocity potential function must be taken. Taking this gradient gives the wind velocity at the given polar coordinate:

v = ∇φ, with radial component (U a³ / r³) cos θ and angular component (U a³ / 2r³) sin θ

Equation 2.

In this equation, v represents the fluid velocity vector at the given polar coordinate. This formula can be used to determine the wind velocity at any given coordinate relative to the moving automobile, given the accompanying assumptions.

For a spherical body with radius a moving to the left in a fluid field at a certain speed U, the fluid speed at a point with coordinate (r, π/2) is therefore the following:

v = U a³ / (2r³)

Equation 3.

The following is the result when typical values for the scenario being tested are substituted into this equation:

The following is the result when the above value is substituted into Equation 1, in which typical values are also substituted:

The above value of 0.45 W is the theoretically predicted power output for an individual turbine in this research scenario.

Combining (Equation 3) and (Equation 1) results in the complete theoretical power model, which is as follows:

P = (1/2) ρ C A (U a³ / 2r³)³

Equation 4.
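As a numerical sketch, Equation 3 and Equation 1 can be chained to estimate a single turbine's output. Every parameter value below is an illustrative assumption, not one of the paper's "typical values".

```python
import math

def roadside_wind_speed(U, a, r):
    """Induced wind speed at (r, pi/2) beside a sphere of radius a
    moving at speed U (Equation 3): v = U * a**3 / (2 * r**3)."""
    return U * a ** 3 / (2.0 * r ** 3)

def turbine_power(rho, Cp, A, v):
    """Turbine power from Equation 1: P = 0.5 * rho * Cp * A * v**3."""
    return 0.5 * rho * Cp * A * v ** 3

# Illustrative assumed values: a car at roughly 45 mph modeled as a
# 1 m sphere, turbine 1.5 m from its center, 10 cm rotor radius,
# coefficient of performance 0.3
U, a, r = 20.0, 1.0, 1.5          # m/s, m, m
rho, Cp = 1.2, 0.3                # air density kg/m^3, efficiency
A = math.pi * 0.10 ** 2           # rotor swept area, m^2

v = roadside_wind_speed(U, a, r)
P = turbine_power(rho, Cp, A, v)
print(f"v = {v:.2f} m/s, P = {P:.3f} W")
```

Because v scales as r^-3, the predicted power falls off as r^-9, which is the steep distance dependence embodied in Equation 4.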

Based on our research, it is hypothesized that if a miniature windmill is placed along the side of a road, then it will generate electricity from the wind draft of passing automobiles AND do so in accordance with the theoretical power output model derived above.

Procedures

To test the hypotheses presented, experimental roadside testing, a controlled single-car experiment, two calibration tests, and CFD tests were conducted.

The primary materials required for the three non-CFD experimentation processes were four AL Turbine Complete Wind Turbine Kits (Model Number: A0012), 100-ohm resistors, LabPros/LabQuests, alligator clip wires, and differential voltage probes. The apparatuses were arranged on plywood bases and were staked into the ground. The following is a diagram of a single wind turbine apparatus:

Figure 2. This diagram demonstrates an individual apparatus from an aerial view. This apparatus allows for real time measurement of voltage data using Logger Pro measuring devices.

One calibration test was conducted before and after the process of field experimentation, to assure consistency among the wind turbine apparatuses. For the initial calibration test, each turbine apparatus was placed at a known distance in front of a box fan at two different speeds and voltage data was collected and analyzed to confirm consistency among the turbines. For the final calibration test, the same procedure was followed except only one speed was used. Both times, the voltage output of all the turbines was found to be within 5-7% of the mean, thus showing that the turbines were well calibrated with respect to each other.

For the experimental roadside testing, a location was found at which the average speed of cars was between 40 and 50 miles per hour and there was sufficient space to place the turbines at the side of the road. At this location, the three experimental turbine apparatuses (deemed E1, E2, and E3) were staked into a grassy path at the side of the road, with E1 closest to the road, the center of E2 thirty centimeters from the center of E1 (in the direction perpendicular to the road), and the center of E3 thirty centimeters from the center of E2. The control wind turbine (deemed C) was placed in a location that, in theory, would not be affected by traffic wind. The experimental turbines were placed 2.8 meters apart from each other in the direction of traffic. Each rotor was angled thirty degrees toward the road from the parallel. An anemometer was staked into the ground thirty centimeters in front of E1, at the same elevation and distance from the road as E1. This experimental setup was run for an hour, during which voltage data and anemometer data were taken automatically. Traffic data, consisting of traffic pulse starting times and each pulse's number of cars in the lane closest to the turbines, were recorded by hand and eye. Three such trials were conducted. The following diagrams depict the location and a single turbine:

Figures 3 and 4. These diagrams depict an overhead schematic of the testing location and specs on an individual wind turbine, respectively.

The purpose of the controlled single-car testing was to generate a roadside turbine power output model based only on car speed and turbine distance. Five different car speeds were chosen for testing. The three experimental turbines were then tested at five different distances for every speed (each speed was tested twice) and voltage data were taken for every run. A control turbine was also placed in an area so as to be affected only by ambient wind. The following tables show the sets of distance and speed values that were tested:

Tables 1 and 2. The above tables display the five distances (from the lane) and the five speeds (of the car) that were tested in controlled single-car testing. Note that the intervals between test values are constant.

Computational Fluid Dynamics (CFD) software can be used to model real-world scenarios involving fluid dynamics. This is done by solving the Navier-Stokes equations for 3D object models with specific parameters. AutoDesk CFD has a preloaded average-size car-in-a-fluid-field object model to perform tests on, so this was used to run various computational experiments. The following is an image of this object model:

Figure 5. In this diagram, one can see the image of a car in the fluid field, where the car is oriented in the picture such that its front is facing left.

The above object model involves a stationary car in a fluid field. In order to accurately model the research scenario, the fluid was made to move at given speeds toward the car and boundary conditions were not set on either side of the car. This characterization of the research scenario accurately reflects the experimental behavior because it merely involves a change of reference frame from the road to the moving car.

There were three primary tests that were conducted on the car object model: a roadside wind speed model determination, an optimal velocity disturbance (roadside wind speed) determination, and a comparison of experimental setup to optimized setup.

Data

The first of the following graphs (Graph 1) presents the power data obtained from each experimental turbine for each trial in the rush-hour roadside tests. The control data were omitted for reasons discussed later in this paper. The second graph (Graph 2) displays anemometer data and traffic data for all three trials. The third graph (Graph 3) shows the results of the controlled single-car testing.

Graph 1. This is a graph that plots power of each experimental wind turbine vs. time. Note that the power spikes for each turbine occur at approximately the same times.

Graph 2. This is a graph that plots wind speed vs. time AND marks times that traffic pulses occurred. Note that wind speed spikes tend to correspond with traffic pulses.

Graph 3. This graph plots the power outputs of all of the turbines used in the controlled single-car testing vs. time. All of the data taken throughout the range of test values are displayed on this single graph.

Data Analysis

The presented roadside testing data were analyzed in four different ways to get a multidimensional view of the results: correlation analysis, characteristic analysis, power model analysis, and error analysis. Each of these analysis methods will be discussed in detail in this section of the paper.

Correlation Analysis

The method of defining the correlation between road traffic and increased turbine rotation followed from graphical comparisons between the traffic pulse data taken during experimentation and the other forms of data (voltage and anemometer). The hand-written traffic data were transposed to a graph and overlaid on the voltage and anemometer graphs to compare traffic pulse timings to wind speed and voltage spikes, which would indicate a connection between the passing of automobiles and the increased generation of electricity. Graph 2 provides an example of this graphical comparison across all of the trials; it can be seen that the spikes in anemometer data can be directly attributed to traffic pulses. The following is a narrowed view of the E1 voltage data from the first trial as compared to the traffic data:

Graph 4. The above graph displays a short interval of voltage and traffic data. It is clear that there is a strong association between traffic pulses and voltage spikes.

This graph also supports the correlation between traffic and electricity generation within a range of timing uncertainty. This method was applied across all three trials and the same results were found, thus supporting the initial hypothesis of correlation.

Characteristic Analysis

The characteristic analysis involved the statistical description of characteristic power spikes (CPS). CPS power values were determined by correlating the traffic pulse timings to power spikes that were created by traffic (same time as the traffic pulses) and invoking the mean value theorem to determine average values for power for each CPS. These power spikes were analyzed to directly study the effects of car-induced wind effects through the wind turbines. This method resulted in sets of data across the wind turbines and trials that represented power output during traffic pulses. These values were plotted in a bar-graph fashion as such:

Graph 5. The above graph displays one of many CPS data graphs. Each bar represents the average power value of its corresponding spike on the equivalent power data graph (Graph 1). All of the non-traffic-pulse power information was removed before the creation of this CPS data graph.

These graphs were generated for all experimental wind turbines for all three trials, independently. The statistics operation available in LoggerPro was then used to determine some characteristics of these sets of data.

The data displayed in the CPS power graphs were then used to produce frequency distributions. These distributions were compared to the corresponding Gaussian distributions using the characteristic data. The following is a graph that displays a frequency distribution:

Graph 6. The above graph displays a frequency distribution of one CPS data set. This graph and its associated Gaussian fit show that the nature of traffic patterns is highly unpredictable.

These analyses were conducted for all experimental data sets. Due to the unpredictable nature of the traffic flow, power outputs were expected to vary widely. This expectation is supported by the relatively large standard deviation values. Yet it was found that, on average, the frequency distributions showed moderately good fits with the Gaussian prediction, indicating that traffic flow was quasi-random. Table 3 presents statistical results from the characteristic analysis that support these observations.
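As a sketch of this characteristic analysis, a hypothetical set of CPS values can be summarized by mean and standard deviation, and its empirical frequency distribution set against the corresponding Gaussian prediction. The CPS values below are invented for illustration.

```python
import numpy as np

# Hypothetical characteristic-power-spike (CPS) values, in mW
cps = np.array([4.1, 7.8, 3.2, 9.5, 5.0, 6.7, 2.9, 8.3, 5.6, 4.8])

mu, sigma = cps.mean(), cps.std(ddof=1)

# Empirical frequency distribution vs. Gaussian prediction per bin
counts, edges = np.histogram(cps, bins=4)
centers = 0.5 * (edges[:-1] + edges[1:])
gauss = (len(cps) * np.diff(edges)
         * np.exp(-((centers - mu) ** 2) / (2 * sigma ** 2))
         / (sigma * np.sqrt(2 * np.pi)))

print(f"mean = {mu:.2f} mW, std = {sigma:.2f} mW")
for c, obs, exp in zip(centers, counts, gauss):
    print(f"{c:5.2f} mW: observed {obs}, Gaussian {exp:.1f}")
```

A large standard deviation relative to the mean, as seen in the study's data, reflects the highly fluctuating nature of traffic flow.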

Power Model Analysis

In order to compare the CPS data across the traffic pulses, the E1 CPS power data were normalized to 1 and the other turbines' data were scaled to follow the scaling of the E1 data. This produced a set of normalized power values that could then be plotted versus distance to analyze the distance relation of the power model. The data were then plotted in a log-log fashion, with one plot per trial, to easily deduce the power relation between power and distance and the constants associated with the power model. The following is an example of such a log-log plot:

Graph 7. The above plot displays the logarithm of normalized power values vs. the logarithm of distance values. The presence of a power-relation is quite apparent from the results of this plot.

In the above graph, log(N) represents the logarithm of the normalized power values discussed previously and log(r) represents the logarithm of the distance values.
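A minimal sketch of this log-log regression, using made-up normalized power values chosen to be consistent with an exponent near −1.3:

```python
import numpy as np

r = np.array([1.0, 2.0, 3.0])    # turbine distances (m), illustrative only
N = np.array([1.0, 0.41, 0.24])  # normalized CPS power (E1 = 1), illustrative

# fitting log(N) = n*log(r) + log(C) recovers the power law N = C * r**n:
# the slope is the exponent n and the intercept gives the model constant
slope, intercept = np.polyfit(np.log10(r), np.log10(N), 1)
```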

Using this plot and its accompanying linear regression, the power vs. distance relationship could be determined along with important constants in the power and roadside wind speed models. These values were found for all three trials, and the averaged result is a power relationship of −1.3 ± 0.3, with a predicted optimal power fit of −1.5. The predicted optimal power fit, together with the y-intercept information from the log-log plots, was then used to construct the accompanying models. The following are the two proposed optimal models in terms of theorized parameters:

Equation 5.

Equation 6.

Table 3. The above table displays all of the important statistical characteristics of the experimental roadside testing data. It is clear that the highly fluctuating nature of traffic flow contributes to the large standard deviation values.

The roadside wind speed model was used to scale down the anemometer data for E1 to the other two experimental turbines and thus overlay the theoretical power model values over the experimental values. An example of such an overlay over a small interval is the following:

Graph 8. The above overlay displays the raw data in blue and the power-model-predicted data in red. There is a visible time offset due to timing differences between the anemometer data collection and the E2 turbine data collection, but the moderately accurate predictive power of Equations 5 and 6 is clearly apparent in the above graph, especially from the 28th minute to the 30th minute (once the offset is ignored).

Error Analysis

Error analysis entailed two major components: voltage uncertainty and power uncertainty. In order to obtain these uncertainties, several other quantities had to be known.

The formula for uncertainty in power was determined through the partial derivative method for absolute uncertainties. After utilizing this tool, the uncertainty in power was found to be:

The following is a table depicting pertinent uncertainties:

Table 4. The above table depicts the important uncertainties present in this study. It is clear that the uncertainties are fairly minimal, thus promoting confidence in the raw data values.
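The exact power formula and its uncertainty expression are not reproduced in this excerpt, but the partial-derivative method it refers to works as in the following sketch, which assumes (hypothetically) that power was computed as P = V²/R from a measured voltage across a known load resistance; the numbers are illustrative, not the study's measurements.

```python
def power_uncertainty(V, dV, R, dR):
    """Absolute uncertainty in P = V**2 / R by the partial-derivative method:
    dP = |dP/dV| * dV + |dP/dR| * dR = (2*V/R) * dV + (V**2 / R**2) * dR
    """
    return (2.0 * V / R) * dV + (V ** 2 / R ** 2) * dR

# illustrative numbers only
dP = power_uncertainty(V=1.5, dV=0.01, R=10.0, dR=0.1)
```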

It is apparent that the data taken from the control turbine were omitted from this study. This is because, whenever large packets of traffic passed the testing location, the wind pulses could affect even the control turbine. As a result of these unforeseen circumstances, the control data had to be omitted from the study. Nevertheless, the correlation analysis still provides sufficient evidence of the influence of automobiles on the wind turbines.

The controlled single-car testing data was analyzed in two separate ways to generate a complete turbine power output model for the single-car scenario. The data were analyzed for a power-distance relation and a power-speed relation, as will be discussed in the following subsections.

Controlled Single-Car Testing Analysis: Power-Distance Relation Analysis

The way by which power relationships were determined for the controlled single-car testing data was virtually identical to the way by which they were determined in the power model analysis portion of the experimental roadside testing data analysis. The only difference was that each turbine was analyzed separately at a given speed and their results were compared to determine result validity. The following is a sample log-log plot for power-distance relation:

Graph 9. The above graph plots the logarithm of controlled single-car testing power values vs. three of the test distances. The strong linear relationship between the plotted variables is quite apparent in the above graph.

The regression in the above graph had a slope of approximately −3, suggesting a power-distance relation of −3. This relationship was confirmed by the data from the other speeds and turbines as well. Thus, this section of the controlled single-car testing analysis produced the following result:

Equation 7.

Controlled Single-Car Testing Analysis: Power-Speed Relation Analysis

The power-speed relation analysis was conducted in the same manner as the power-distance relation analysis, except this time car speed was varied and the distance was held constant. Just as with the power-distance relation analysis, the turbines were analyzed separately at a given distance and then the results were compared. The following is a sample log-log plot for power-speed relation:

Graph 10. The above graph plots the logarithm of controlled single-car testing power values vs. three of the test car speeds. The strong linear relationship between the plotted variables is quite apparent in the above graph.

The regression in the above graph had a slope of approximately 5, suggesting a power-speed relation of 5. This relationship was confirmed by the data from the other distances and turbines as well. Thus, this section of the controlled single-car testing analysis produced the following result:

Equation 8.

The two resulting power relations were combined to synthesize an experimental single-car turbine power output model. The model is as follows:

Equation 9.

It can be noted that this model varies drastically from the experimental roadside testing power output model. This discrepancy along with the associated explanations and conjectures will be addressed in the Discussion section.
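Equation 9 itself is not reproduced in this excerpt, but combining the fitted relations of Equations 7 and 8 gives a model of the form P = k·v⁵/r³. The following sketch uses a purely hypothetical constant k to show what the two exponents imply:

```python
def single_car_power(v, r, k=1.0e-6):
    """Single-car power model combining the fitted relations P ~ r**-3 and
    P ~ v**5; k is a hypothetical constant (the study's fitted value is
    not given in this excerpt)."""
    return k * v ** 5 * r ** -3

# the exponents imply: doubling distance divides power by 8,
# while doubling car speed multiplies it by 32
p_base = single_car_power(v=10.0, r=2.0)
p_far = single_car_power(v=10.0, r=4.0)
p_fast = single_car_power(v=20.0, r=2.0)
```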

CFD Analyses: Roadside Wind Speed Model Determination

The purpose of this portion of the CFD analysis was to determine a model for the roadside wind speed induced by the moving car, as a function of car speed and distance from the car. The method of testing for this computational experiment was very similar to the controlled single-car testing procedures. An array of probe points was generated in the CFD model across the Z and X dimensions. The first portion of the testing involved tests based on distance from the car: at a given speed of 15 meters per second, the simulation was initiated and the steady-state speeds at every point were determined. These values were then averaged across the axis in the direction of the car. The second portion of the testing involved tests based on car speed: one constant distance of 2.55 meters from the car was chosen and the simulation was run at a variety of speeds. The results for both tests were then plotted as follows:

Graph 11. This graph shows a plot of the logarithm of wind speeds vs. the logarithm of the distance from the car. As can be seen, some of the distance values lie within the boundary layer of the car (where the speed values increase), but these were omitted from the linear analysis.

Graph 12. This graph shows a plot of the logarithm of the speed of interest (SOI, another term for roadside wind speed) vs. the logarithm of the car speed. As can be seen, there is a strong positive linear association between the logarithms of the two plotted variables.
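The probe-point averaging described above can be sketched with a hypothetical grid of steady-state speeds (rows along Z, the direction of travel; columns at increasing X distances from the car):

```python
import numpy as np

# hypothetical steady-state CFD probe speeds (m/s)
speeds = np.array([
    [3.1, 1.9, 1.2],   # probes at Z = z1
    [3.3, 2.0, 1.1],   # probes at Z = z2
    [2.9, 2.1, 1.3],   # probes at Z = z3
])

# averaging along the Z axis gives one roadside wind speed per distance,
# which is what gets plotted against distance (as in Graph 11)
mean_by_distance = speeds.mean(axis=0)
```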

The following velocity dissipation model, expressed in terms of a constant, the car speed, and the distance from the car, was generated from the linear regressions on the above scatterplots:

Equation 10.

The above model is very similar to the roadside wind speed model generated from the experimental roadside testing, in which the real exponents have uncertainties of approximately 0.1.

The close similarity between the power relations in these two models indicates that roadside wind speed could be approximated from traffic flow.

To check the validity of the roadside wind speed model, the r-relations for the model were tested across a range of Y values, from 0.5 meter to 3 meters, for a range of X values and a given Z value of 12 meters. To do this, an array of points on an XY plane was generated, and the roadside wind speeds were probed and plotted for each point tested. The same model-determination procedures as before were then followed for each set of data, grouped into the different Y-value sectors. For each of these blocked data sets, the r-relations were determined, and the exponent values were found to vary by plus or minus 0.1 around a center (approximately 1 for the Z value of 12, which does not reflect the results obtained when using a range of Z values, as was done for the true model determination). This shows that there is an inherent uncertainty of plus or minus 0.1 within the r-relation for the determined roadside wind speed model, which further supports the similarity between the experimental and CFD models due to their strong overlap.

CFD Analyses: Optimal Velocity Disturbance Determination

The purpose of this portion of the CFD analysis was to determine the points of maximum wind speed, which would then indicate the optimal dimensions for a roadside wind turbine. In order to do this, wind speeds were measured at numerous points in every dimension after the simulation was run and then plotted to determine the maximum location in the X and Y dimensions. This is depicted in graphs 13 and 14.

CFD Analyses: Comparison of Experimental Setup to Optimized Setup

In the previous analysis, the location of maximum value was found to be (2.1 m, 1.7 m). The purpose of this analysis was to compare the power output of an optimized turbine, with dimensions encompassing this optimal location, to the power output of the E2 turbine from the experimental roadside testing. The similarity of the velocity dissipation models between the CFD and roadside analyses shows that the CFD could approximate roadside events well, but yet another sub-test was conducted to further establish the legitimacy and validity of the CFD and so allow a comparison of the experimental setup to the optimized setup. To do this, the simulation was run with the exact conditions of the roadside scenario. Table 5 summarizes these conditions.

Graph 13. This graph shows a plot of the Z-velocity with respect to X-distance from the center of the car. The exact relationship between the two variables is difficult to determine from this plot, but the maximum values are easily identifiable.

Graph 14. This graph shows a plot of the Z-velocity with respect to Y-distance from the bottom of the car. The relationship here can be seen to be quite smooth and predictable, and the maximum values are easily identifiable.

Table 5. The above table displays the experimental roadside testing conditions. Several of these parameter values are small and non-ideal, and thus have room for improvement.

Table 6. The above table displays the optimal turbine/environment conditions. It can be seen here that the largest differences between these parameter specifications and the experimental ones lie in the area value, the car speed value, and the Y coordinate value.

The coordinate and car speed conditions were imposed upon the simulation, and the steady-state values across a range of Z values (chosen to approximate the car-effect time interval) were recorded. These values were used to calculate power outputs at every point, which were then averaged along the Z-axis. The resulting simulated E2 power output was 0.044 W. The actual average E2 power output (during traffic pulses) was found to be 0.042 W in the experimental roadside tests. The closeness of these two values furthers the case for using the CFD model as an accurate approximation of roadside conditions. As such, the comparison of the experimental to the optimized setup could then be made, and the results could carry significant value.

Following the simulation of E2 conditions, the CFD power output was known. The only remaining value needed to compare the experimental and optimized setups was the power output of the optimized setup. The process of finding this was exactly the same as for the E2 simulation, except with the parameters outlined in Table 6.

The imposition of these parameters generated a Z-averaged power output of 122 W. The ratio of this power output (optimized setup) to that of the experimental setup is 2782.

As such, the mere implementation of a new turbine design to achieve the optimal parameters, which are within reasonable bounds, could theoretically increase the power output observed through the experimental roadside tests almost 3000-fold.
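As a quick arithmetic check on the reported ratio, using the rounded power values quoted above (the study's figure of 2782 presumably comes from the unrounded values):

```python
optimized_W = 122.0      # Z-averaged power output of the optimized setup
experimental_W = 0.044   # simulated E2 power output (rounded)

# roughly 2.8e3, i.e. the near-3000-fold gain described above
ratio = optimized_W / experimental_W
```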

Discussion

The primary objectives of this research were to determine whether the proposed idea of using miniature wind turbines on the sides of roads would generate electricity from the wind draft of cars and to test the theoretical power output model generated for this scenario. It was hypothesized that the idea would generate electricity and the power output model would fit the data, but the data and its following analysis showed results of a different nature.

The first prong of the hypothesis was undoubtedly supported by the data as was shown in the correlation analysis. The second prong of the hypothesis was not supported, but elements of the power model associated with it followed through to the final proposed models. The initial power model was the following:

There is definitely a high degree of uncertainty in the distance relation due to the turbulent nature of the scenario and random nature of traffic flow, but this optimal proposed model fits experimental power data very accurately and the power relations are supported by the CFD testing. The primary reason for the invalidity of the theoretical model is probably the inapplicable assumption of laminar flow in the theoretical roadside wind speed model.

Although efforts were made to produce data of as high quality as possible, there are certainly areas of potential improvement. The turbine design could have been more aerodynamic, the generators more efficient, the data set timings better synchronized, and the data collection tools more precise. Furthermore, there were certainly sources of error within the process of experimentation, such as the natural imprecision of the testing equipment, slight inconsistencies in turbine yaw, and potentially imprecise distance measurements. These errors, compounded with the unpredictable nature of turbulent wind and traffic, manifest as higher uncertainties and lower confidence levels in the results.

The controlled single-car testing analyses produced a power output model that disagreed with both the theoretical model and the roadside model. The following is the controlled single-car testing power output model:

where k is a constant.

This discrepancy between roadside large-scale traffic power output and single-car power output led us to the conjecture that there must also be an additional traffic parameter involved in a holistic turbine power output model, which may affect the power-distance and power-speed relations. Research into this traffic-parameter-based holistic turbine power output model is an intriguing potential topic of future study.

Along with research into the potential traffic parameter component of a holistic turbine power output model, there are many other possible offshoot branches from this experiment that could be studied as future work. One such possibility is the study of the roadside “funneling effect.” This effect, observed during experimentation, is the compounding of wind into abnormally high-speed gusts at the side of the road whenever large packets of relatively fast-moving automobiles passed. Another possible topic of future interest is further Computational Fluid Dynamics modeling of the aerodynamic scenario we studied in this research project. Yet another possibility for future work could involve the study of turbine design and roadside turbine arrays to potentially achieve the theoretical optimal power values described in the CFD analysis section. All of these potential topics of future study are key components of the remaining steps in furthering this concept of roadside electricity generation to make it economically feasible. At this point, based on our research, the idea is certainly not feasible. Yet there is undoubtedly a possibility that improvements could be made that would allow this idea to become the next innovative foray into the field of renewable energy.

Acknowledgements

I would like to thank Dr. Bennett and Mr. Milbourne, my mentors. I would also like to thank the Research in Physics program and the NCSSM Board of Trustees for allowing me to pursue this fantastic opportunity.

References

[1] “Primary Energy Sources – Fuels at the Heart of the Matter.” Classroom Energy. N.p., n.d. Web. 08 Feb. 2012. <http://www.classroom-energy.org/energy_09/3.html>.

[2] “Wind Energy Companies.” Wind Energy Companies. Web. 09 Feb. 2012. <http://www.greenchipstocks.com/articles/wind-energy-companies/273>.

[3] “Wind Energy Basics.” Wind Energy Basics. N.p., n.d. Web. 12 Feb. 2012. <http://windeis.anl.gov/guid/basics/indes.cfm>.

[4] Constantino, D. “Winning with Wind.” Pit & Quarry (2008): 2. Web. 13 Jan. 2012.

[5] Kovarik, Thomas J., Charles Pipher, and John A. Hurst. Wind Energy. Northbrook, IL: Domus, 1979. Print.

[6] Batchelor, George K. “6.8.” An Introduction to Fluid Dynamics. Cambridge: Cambridge University Press, 2009. Print.

Shocking Discoveries: The Applications and Putative Mechanisms of the Effects of Electric and Magnetic Fields on Plants

“Life and death appeared to me ideal bounds, which I should first break through, and pour a torrent of light into our dark world.”

- Dr. Victor Frankenstein, Frankenstein by Mary Shelley

Introduction

Images of Frankenstein’s creation of life through the powerful force of electricity can be found everywhere throughout popular culture. This ubiquitous motif reflects a human fascination with the seemingly supernatural properties of electromagnetism, a power considered so great that we have often imagined it can achieve the impossible: even bring the dead back to life. The study of electromagnetic energy in living things, bioelectromagnetism, began in the late 1700s, when the Italian scientist Luigi Galvani discovered that applying static electricity to frog legs caused them to move. Since then, fiction has used the imagined power of electromagnetism to create villains such as Frankenstein’s monster and to endow superheroes such as Magneto and Storm with their powers.

Yet perhaps just as shocking are the real-life applications of bioelectromagnetism, specifically its potential use in plants and agriculture to improve plant growth, yield, germination rate, and nutrition. Even simply exposing plant seeds and irrigation water to different types of electromagnetic energy has proven to achieve such effects in a wide variety of plants and even in livestock [1]. These results suggest that the exposure of plants to magnetic and electrostatic fields could provide an ecologically friendly, affordable way to increase crop production without polluting the soil with chemical fertilizers [2]. There seem to be few, if any, drawbacks to these novel approaches; a review paper on the potential genotoxicity of electric and magnetic fields dismissed claims in 34 studies that extremely low frequency electric and magnetic fields could harm plant genomes [3]. There are still, however, many issues to resolve before we can approach a comprehensive understanding of the effects of various types of electromagnetic energy on crops and eventually apply this knowledge on a large scale: namely, which methods of applying electromagnetic energy best improve plant growth and yield, whether these methods can increase nutrition and productivity as well as growth, and, most importantly, what the causes behind these astonishing findings are.

Varying Methods of Applying Electric and Magnetic Fields to Plants

In 1930, the Russian researcher Savostin conducted one of the first studies on the effects of magnetic fields on seeds when he observed increased oxidation rates in wheat seedlings under magnetic conditions [4]. Over a decade later, researchers observed changes in seed germination due to magnetic fields [5]. Since then, researchers have tested a wide variety of methods of exposing plants to electric or magnetic fields to improve their germination and growth.

Between electric and magnetic fields, there is a wide array of different techniques of applying electromagnetic energy to plants in hopes of improving their growth and yield. Electric fields are forces formed by a difference between static charges, while magnetic fields are forces created by moving charges. Although these forces are different, researchers have observed strikingly similar results in plants whether using electric or magnetic fields.

Methods of Exposing Plants to Magnetic Fields

Some researchers have achieved improvements in plant growth and yield by exposing plants to magnetic fields while they grow. Naturally, all plants grow in the presence of magnetic fields, as the earth’s magnetism creates a global magnetic field. However, by applying magnetic fields beyond the local geomagnetic field, researchers have achieved impressive results in improving growth and yield. Novitsky et al. grew green and bulb onions under horizontal permanent magnetic fields of strengths around 43 A/m produced by Helmholtz coils (Figure 1) and observed increased bulb sprouting in both green and bulb onion [6]. The same study also found that the magnetic fields accelerated sprouting in both types of onions, which the study attributed to cell elongation; however, this stimulating effect of permanent magnetic fields lasted only through the sprouting stage of the onion’s growth [6]. The magnetic fields were also found to increase plant yield by increasing the number of sprouts in green onions and the number of sprout bunches in bulb onions; this focused growth suggested that permanent magnetic fields enhance the genetically determined growth patterns seen in untreated onions [6]. Permanent alternating magnetic fields, created by continuous, alternating electric currents, have also been shown to improve plant growth and yield. Eşitken and Turan grew strawberry plants beneath electric wires through which an alternating current was passed, creating an alternating magnetic field which magnetically treated the plants beneath the wires (Figure 3) [7]. They found that weaker magnetic fields (0.096 T) increased fruit yield, fruit number per plant, and average fruit weight in strawberries compared to the control and treatments with stronger fields [7], showing that continuous exposure to alternating magnetic fields can increase plant yield just as continuous exposure to nonalternating fields has [6].
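The field strengths above are quoted in two different units: A/m (the magnetizing field H) and tesla (the flux density B). In air the two are related by B = μ₀H, so, as the following quick calculation sketches, the 43 A/m fields used by Novitsky et al. are comparable in magnitude to the earth's own geomagnetic field (roughly 25–65 μT):

```python
import math

MU_0 = 4e-7 * math.pi  # permeability of free space, T*m/A

H = 43.0               # field strength from Novitsky et al., A/m
B = MU_0 * H           # equivalent flux density in air, tesla

# B is about 5.4e-5 T (0.054 mT), on the order of the geomagnetic field
```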

Similar results have been achieved in a number of studies on the effects of pre-sowing magnetic treatments of plant seeds in a variety of plants. In such studies, seeds are exposed to magnetic or electric fields before sowing, and the effects on growth and yield are observed. Some of these studies used Helmholtz coils (Figure 4) to expose seeds to magnetic fields ranging in strength from 0 to 10 mT [2, 8, 9]; others used different methods to magnetically treat seeds, such as placing the seeds between two bar electromagnets [10]. Researchers have observed that pre-sowing treatment with pulsed magnetic fields improved plant growth and yield [8], while pre-sowing magnetic treatments on tomato seeds have been found to significantly improve percent germination rates, plant growth, yield, and fruit size [11]. In contrast, electric fields with greater intensities and certain exposure times were found to inhibit germination in tomato seeds [10, 11]. Even more interesting, however, magnetic treatments ranging from 100 to 170 mT and 3 to 10 minutes were found to significantly delay the onset of geminivirus and early blight in tomatoes and were observed to cause a reduced infection rate of early blight [10]. Similar effects, such as increased germination and yield, have been repeated in the nonfood crop cotton as well [2]. However, as seen throughout all of these studies, the effects of magnetic fields on plants differ from one species to the next and even between varieties of the same species [2].

Stationary magnetic fields have also been demonstrated to increase the germination rate and growth of plants. The stationary magnetic fields used in such experiments are produced by the permanent magnets found in everyday life. Researchers have found that exposing corn seeds to stationary magnetic fields at various strengths increased their height and weight as seedlings, with significant improvements observed when the seeds were exposed for 24 hours or more at magnetic field strengths of 125 mT and 250 mT [12]. As a reference, kitchen magnets have a strength of around 5 mT, sunspots a strength of about 150 mT, and loudspeaker magnets a strength of around 1 T. Similar studies have shown that while magnetic fields improve plant germination rate, the fields do not affect the total amount of germination under laboratory conditions (as opposed to field conditions), as the increased germination rate only allowed the plants to reach the same saturated amount of germination at a faster pace than the control [11, 12]. However, magnetic field exposure has been shown to increase the total number of germinated seeds under field conditions [2]. Yet another approach to increasing germination in plants involves subjecting seeds to high voltage electric fields (HVEF) by placing them between metal plates charged with high voltages, producing electric field intensities of 450 kV per meter of space between the two plates. HVEF have been found to increase the germination rate and total germination of aged wet rice seeds and to significantly increase their vigor [13, 14].

Even more intriguing studies have observed that magnetic treatment of irrigation water can improve crop and livestock yield [1, 15]. Studies such as these run water through magnetic treatment devices, in which water is passed through a pipe positioned between two magnets [15].

Figure 1. Helmholtz coils.
Figure 2.
Figure 3. Helmholtz coils.
Figure 4.
Figure 5.
Figure 6.

Fascinatingly, similar effects are found in livestock as well. Lin and Yotvat tested the effects of magnetically treated irrigation and drinking water on cows, geese, sheep, turkeys, and melons. Treated cows were found to produce more milk, be more fertile, and grow faster than the control group, while treated geese and turkeys were heavier than the controls [1]. Treated sheep’s milk, meat, and wool yields were all increased [1]. Lin and Yotvat concluded that these data support the benefits of using magnetically treated water on an agricultural scale, including applications in fish farming, algae, produce, and livestock, using electromagnetic units for treating water that are already commercially available [1].


The magnetic treatment of water has also been suggested to cause treated plants to use water more efficiently. Maheshwari and Grewal found that celery irrigated with magnetically treated water experienced significant increases in productivity (weight produced per volume of water used) [15]. Although the total amount of water used for the treated plants was the same as the control, the increased yield induced by the magnetic fields accounted for the increase in water productivity [15]. However, few, if any, other studies have investigated the effects of magnetic fields on water productivity; further studies must be done in this area to either support or counter claims that magnetic fields may generally increase water productivity. If proven, increased water productivity in plants under magnetic treatments could add to the already sizable number of benefits magnetic fields lend to plants and would be especially useful for farmers growing crops in dry regions.

As can be seen, a large number of studies have used electromagnetic energy to increase germination, growth, and yield in plants through a variety of methods: continuous exposure to magnetic fields, seed exposure to magnetic fields, seed exposure to high voltage electrostatic fields, and exposure of irrigation and drinking water to magnetic fields. However, no studies have compared any of these methods side by side to see which is most effective in improving plant germination, growth, and yield. Little research has been done on how different plants are affected by the same method of exposure to a certain type of electric or magnetic field. While researchers have found many specific benefits of electric and magnetic fields on plants, broader studies should be conducted to find basic mechanisms which could then be applied to a larger number of crops and methods to pave a path for the potential large-scale agricultural use of bioelectromagnetics in the future.

Effects on Nutrition and Productivity

Some of the studies on the abilities of magnetic fields to improve crop yield have also found that magnetic fields increase certain nutritional values of plants. Novitsky et al. observed that magnetically treated onions contained larger amounts of chlorophyll and protein, but not carbohydrates, in comparison to the control [6]. Lin and Yotvat found that the magnetic treatment of water increased the sugar content of melons and made the meat of treated cows leaner compared to controls [1]. However, few studies have examined the effects of magnetic fields on plant nutrition beyond those focusing on nutrient uptake as a potential explanation for increased germination and growth in magnetically treated plants. Additional research is needed to explore the ability of magnetic fields to increase the nutritional values of various plants, which holds the promise of producing not only higher quantity but also higher quality plants.

The Mechanisms behind the Data

While many studies have tested the abilities of magnetic fields to improve plant growth and yield, few have investigated the causes of these strange phenomena. Many papers have noted increases in nutrient uptake in plants treated with magnetic fields [7, 8, 11]. Eşitken and Turan attributed magnetically treated plants’ selective uptake of positive ions over negative ions to the negative electric charge of plant cells [7, 16], which causes them to take up positively charged ions. These results suggest that magnetic fields may increase the negative charge of plant cells, thereby increasing their uptake of positive ions, many of which are nutritional to plants and may help improve a plant’s growth and yield. Moon and Chung observed that external electric and magnetic fields influence ion activation and dipole polarization in living cells, providing a possible explanation for the increase in the plants’ nutrient uptake due to magnetic field exposure [11]. Another study proposed that the magnetic treatment of plants electromagnetically induced a change in the electrostatic balance of plant systems at the cell membrane level, which is the main site of any inhibition or enhancement of plant growth [2]. It has also been proposed that magnetic fields could affect the transport of charged solutes into the cell through the activity of enzymes controlling the local extensions of plant cells [2]. The correlation between mineral uptake and magnetic fields’ positive effects on plants suggests that the former may also be one of the factors responsible for the unique effects of magnetic fields on plants. It has been proposed that magnetically treated plants release organic compounds into the rhizosphere, the soil surrounding the plant, which may increase P and K desorption, the release of phosphorus and potassium through or from a surface; these elements would therefore become more available to the plant, aiding plant growth [15]. Results from the same study suggested that magnetically treated water also improved the availability, uptake, assimilation, and mobilization of these nutrients within the plant system, providing a possible explanation for the treated plants’ increased water productivity compared to the control [15]. Other evidence suggests that the positive effects of magnetic treatments on plants may result from a reduced rather than increased accumulation of certain minerals; Maheshwari and Grewal proposed that the magnetic treatment of water may inhibit the plant’s uptake of Na, thereby decreasing Na toxicity [15], which other studies have found to limit plant growth [17, 18, 19].

Another possible explanation for the ability of magnetic fields to enhance plant germination, growth, and yield lies in the water relations within plants. One study found results suggesting that stationary magnetic fields change the mechanism of water uptake in lettuce seeds, allowing them to absorb more water [9]. Citing a paper in which increased osmotic pressure, and therefore increased water uptake, was suggested to stimulate cell growth rates [20], the researchers concluded that the correlation they found between water uptake and magnetic fields may be responsible for the increased, magnetically induced germination rates found in other studies [9]. Another study proposed that non-uniform magnetic fields may energetically excite one or more components of the cellular substratum, such as proteins and carbohydrates, or water within seeds; after these magnetically exposed seeds acquired water, the activation and production of enzymes and hormones would be enhanced by the initial stimulation from the magnetic fields, which could lead to improved plant germination, growth, and yield [10]. Other studies found that magnetic treatments increase protein synthesis and protein content in plant cells [6, 8]. However, since little research has been done on the relationships between magnetic fields and water uptake, cellular substratum stimulation, and photosynthesis, these plausible explanations must be more thoroughly researched before informed conclusions can be reached on the true causes of magnetic fields’ special effects on plants.

Several studies have also pointed to enzymes as possible factors in magnetic fields’ effects on plants by linking electric and magnetic field exposure to enzyme activity [8, 21]. Radhakrishnan and Kumari proposed that magnetic fields affect the subunits of the photosynthetic enzyme Rubisco, which is largely responsible for carbon fixation in plants, thereby enhancing carbon fixation and growth [8]. Pulsed magnetic fields were also found to increase the activity of catalase (an enzyme that catalyzes the decomposition of hydrogen peroxide to water and oxygen) in soybean seedlings, suggesting that magnetic field treatment leads to increased catalase activity and therefore greater decomposition of harmful reactive oxygen species; moreover, the formation of water may have in turn enhanced plant growth [8]. Nazar et al. concluded that further research should be conducted on whether the biological effects of electric and magnetic fields are field- or cell-specific [21]. Another study claims that increased growth and yield in plants exposed to electric or magnetic fields could be due to an increased activity of enzymes that decompose harmful reactive oxygen species [14]. Wang et al. additionally proposed that increased enzyme activity due to electrostatic field treatment reduces lipid peroxidation by harmful reactive oxygen species, resulting in less damage to seedlings and therefore more growth [14]. Furthermore, enzyme activity may be determined by a factor previously discussed, ion accumulation: one study found that metal ions can inhibit acid phosphatase to varying degrees in corn and soybean roots, suggesting a link between ion concentration and enzyme activity [22]. This could mean that the increased ion concentrations caused by electric or magnetic field exposure are responsible for increased or decreased enzyme activity.

Lin and Yotvat found that the effects of treating irrigation and drinking water depended on the type of water, water content, temperature, equipment, equipment location, and operational factors such as water volume, flow speed, installation, and maintenance [1], but provided no explanation for the increases in growth, yield, and nutrient content observed in the magnetically treated crops and livestock in their study. A possible explanation for these results is that magnetic fields alter some property of water that makes it more functional within the plant system and probably affects plant growth at the cellular level [15].

Recent studies have found that magnetic fields do, in fact, change some chemical and physical characteristics of water [23, 24], and these effects have been observed to last long after the magnetic field is removed [25]. Researchers have discovered that magnetic fields increase the size of water molecule clusters bound by hydrogen bonds in liquid water [24]. This relates to findings that magnetic fields increase the strength of hydrogen bonds in water [26]; stronger hydrogen bonding in turn accompanies weaker van der Waals forces, because of the delicate balance between competing hydrogen-bonding and non-hydrogen-bonding forces in water clusters [26]. Researchers have proposed that magnetic fields create damping forces that reduce the thermal motion of the charges inherent in water, strengthening the hydrogen bonding between water molecules [27]. They also found that magnetic fields increase the rate of evaporation, by decreasing the strength of van der Waals forces, and raise water’s boiling point, presumably due to increased hydrogen bonding [27]. By increasing the strength of certain bonds within water, magnetic fields could affect the transpiration and uptake of magnetically treated water in plants, offering a possible explanation for the ability of magnetically treated water to increase water productivity. Strong magnetic fields have been found to enhance salt mobility as well [28], which could account for the greater ion concentrations found by Maheshwari and Grewal that may be responsible for the effects of magnetic fields on plants [15]. Magnetic fields can also increase proton spin relaxation, which may quicken some proton-transfer-dependent reactions [29] and may help explain the increased enzyme activity proposed by Wang et al. to promote seedling growth [14].

Many studies concede that researchers still do not know enough about the mechanisms by which magnetic and electric fields increase germination, growth, and yield in plants. Comprehensive studies must test a variety of possible factors to explain this behavior at the cellular level so that broader generalizations about the mechanisms of these phenomena can be reached. Once these mechanisms are determined, we will be better able to understand if, when, and how electric and magnetic fields can improve plant germination, growth, and yield, and to determine ways to maximize these benefits.

Conclusion

Much like the observers of Galvani’s experiments, we are currently testing the effects of electromagnetic energy on a wide variety of organisms, seeing whether this powerful force can increase the germination, growth, and ultimately yield of both plants and animals. Yet, similar to the men and women of Galvani’s day, we are still unsure of the exact science behind the radical results we see. If we truly want to unlock the potential of electric and magnetic fields for maximizing plant and livestock production, to break the “ideal bounds” Dr. Frankenstein referred to, we must first definitively determine the causes of these strange phenomena.

In conclusion, there remains much to be probed and discovered in the study of electric and magnetic fields’ effects on plants and livestock. First, we must determine the general causes of the peculiar behavior of plants exposed to magnetic or electric fields, such as their tendency toward increased germination, growth, yield, productivity, and nutrition. After establishing the firm and broad conceptual basis we now lack, we can confirm or disprove claims that magnetic fields increase productivity and nutrition, and determine how to manipulate the factors behind this remarkable behavior to maximize the benefits of electric and magnetic fields on plants. From there, we can commercialize this process for wide-scale agricultural applications that could meet the constantly growing demand for food in an ever-expanding world.

References

[1] Lin, I.J., and J. Yotvat. 2002. Exposure of irrigation and drinking water to a magnetic field with controlled power and direction. Journal of Magnetism and Magnetic Materials 83: 525-526.

[2] Leelapriya, T., K.S. Dhilip, and P.V. Sanker Narayan. 2003. Effect of weak sinusoidal magnetic field on germination and yield of cotton (Gossypium spp.). Electromagnetic Biology and Medicine 22: 117-125.

[3] McCann, J., F. Dietrich, and C. Rafferty. 1998. The genotoxic potential of electric and magnetic fields: an update. Mutation Research/ Reviews in Mutation Research 411: 45-86.

[4] Savostin, P.W. 1930. Magnetic growth relations in plants. Planta 12, 327.

[5] Murphy, J.D. 1942. The influence of magnetic field on seed germination. Am. J. Bot. 29(Suppl.), 15.

[6] Novitsky, Y.I., G.V. Novitskaya, T.K. Kocheshkova, G.A. Nechiporenko, and M.V. Dobrovol’skii. 2000. Growth of green onions in a weak permanent magnetic field. Russian Journal of Plant Physiology 48: 709-715.

[7] Eşitken, A., and M. Turan. 2004. Alternating magnetic field effects on yield and plant nutrient element composition of strawberry (Fragaria × ananassa cv. Camarosa). Acta Agric. Scand., Sect. B, Soil and Plant Sci. 54: 134-139.

[8] Radhakrishnan, R. and B.D.R. Kumari. 2012. Pulsed magnetic field: A contemporary approach offers to enhance plant growth and yield of soybean. Plant Physiology and Biochemistry 51: 139-144.

[9] Reina, F.G., L.A. Pascual, and I.A. Fundora. 2001. Influence of a stationary magnetic field on water relations in lettuce seeds. Part II: Experimental results. Bioelectromagnetics 22: 596-602.

[10] De Souza, A., D. Garcia, L. Sueiro, F. Gilart, E. Porras, and L. Licea. 2006. Pre-sowing magnetic treatments of tomato seeds increase the growth and yield of plants. Bioelectromagnetics 27: 247-257.

[11] Moon, J.D., and H.W. Chung. 2000. Acceleration of germination of tomato seed by applying AC electric and magnetic fields. Journal of Electrostatics 48: 103-114.

[12] Flórez, M., M.V. Carbonell, and E. Martínez. 2007. Exposure of maize seeds to stationary magnetic fields: effects on germination and early growth. Environmental and Experimental Botany 49: 68-75.

[13] Wang, G., J. Huang, W. Gao, J. Li, R. Liao, and C.A. Jaleel. 2009a. Influence of high voltage electrostatic field (HVEF) on vigour of aged rice (Oryza sativa L.) seeds. Journal of Phytology 1: 397-403.

[14] Wang, G., J. Huang, W. Gao, J. Lu, J. Li, R. Liao, and C.A. Jaleel. 2009b. The effect of high-voltage electrostatic field (HVEF) on aged rice (Oryza sativa L.) seed vigor and lipid peroxidation of seedlings. Journal of Electrostatics 67: 749-764.

[15] Maheshwari, B.L., and H.S. Grewal. 2009. Magnetic treatment of irrigation water: its effects on vegetable crop yield and water productivity. Agricultural Water Management 96: 1229-1236.

[16] Marschner, H. 1995. Mineral nutrition of higher plants. Academic Press Limited, 24-28 Oval Road, London NW1 7DX, 889 pp.

[17] François, L.E., T.J. Donovan, E.V. Maas, and S.M. Lesch. 1994. Time of salt stress affects growth and yield components of irrigated wheat. Agron. J. 86: 100-107.

[18] Munns, R. 2002. Comparative physiology of salt and water stress. Plant Cell Environ. 25: 239-250.

[19] Muranaka, S., K. Shimizu, and M. Kato. 2002. Ionic and osmotic effects of salinity on single-leaf photosynthesis in two wheat cultivars with different drought tolerance. Photosynthetica 40: 201-207.

[20] Cosgrove, D. 1993. Water uptake by growing cells: an assessment of the controlling roles of wall relaxation, solute uptake, and hydraulic conductance. Int. J. Plant Sci. 154: 10-21.

[21] Nazar, A.S.M.I., A. Paul, and S.K. Dutta. 1996. Frequency-dependent alteration of enolase activity by ELF fields. Bioelectrochemistry and Bioenergetics 39: 259-262.

[22] Juma, N.G., and M.A. Tabatabai. 1988. Phosphatase activity in corn and soybean roots: conditions for assay and effects of metals. Plant and Soil 107: 39-47.

[23] Pang, X.F. and B. Deng. 2008. Investigation of changes in properties of water under the action of a magnetic field. Science in China Series G-Physics Mechanics Astron. 51: 1621-1632.

[24] Cai, R., H. Yang, J. He, W. Zhu. 2009. The effects of magnetic fields on water molecular hydrogen bonds. J. Mol. Struct. 938: 15-19.

[25] Pang, X. and B. Deng. 2010. Infrared absorption spectra of pure and magnetized water at elevated temperature. Europhys. Lett. 92: 65001.

[26] Hosoda, H., H. Mori, N. Sogoshi, A. Nagasawa, and S. Nakabayashi. 2004. Refractive indices of water and aqueous electrolyte solutions under high magnetic fields. J. Phys. Chem. A. 108: 1461-1464.

[27] Inaba, H., T. Saitou, K. Tozaki, and H. Hayashi. 2004. Effect of the magnetic field on the melting transition of H2O and D2O measured by a high resolution and supersensitive differential scanning calorimeter. J. Appl. Phys. 96: 6127-6132.

[28] Chang, K.-T. and C.-I. Weng. 2008. An investigation into the structure of aqueous NaCl electrolyte solutions under magnetic fields. Comput. Mat. Sci. 43: 1048-1055.

[29] Madsen, H.E.L. 2004. Crystallization of calcium carbonate in magnetic field in ordinary and heavy water. J. Cryst. Growth 267: 251-255.

Halobacterium: Mechanisms of Extreme Survival as a Solution to Waste

Introduction

Halobacteria are a class of archaebacteria that thrive in harsh environments, with a unique capability to survive in hypersaline conditions. Research conducted today shows that halobacteria display unique environmental response capabilities not only to high concentrations of salt, but also to desiccation, gamma irradiation, oxidative stress (the scarcity or overabundance of oxygen species such as O2 and H2O2), and microgravity [1, 2, 3]. In short, halobacteria are unique archaebacteria whose characteristic ability to survive in extreme environments is unlike that seen in most other microorganisms. This review aims to describe the physiological mechanisms that halobacteria utilize to survive and how these responses may provide solutions in fields such as waste management.

Environmental Stress Response Characterization

Halobacteria can survive in many hypersaline environments. However, when it comes to environments of oxidative stress, microgravity, or gamma irradiation, little is known about their phenotypic response. The following sections give insight into their characteristic responses when exposed to certain extreme conditions.

Desiccation

Given halobacteria’s known response to hypersaline systems, Kottemann et al. explored whether the wild-type halobacteria strain NRC-1 would respond in a similar way to desiccation, and the bacteria were, indeed, able to withstand high levels of desiccation. After twenty days of desiccation, 25% of cells remained viable, with almost full DNA recovery within two days [1]. Kottemann therefore concluded that this ability to survive high levels of dryness stems from the strain’s capability to quickly repair double-stranded DNA breaks.

These results confirm the conclusions of Malcolm Potts, who suggested that halobacteria must either adapt to desiccation or already possess the capability to resist its harmful effects. NRC-1 colonies took refuge within salt crystals in order to protect themselves against desiccation [1]. This behavior, typical of haloarchaea, has even been inferred from viable haloarchaeal DNA found in 60,000-year-old bones of deceased Aboriginal people, demonstrating the effectiveness of this response [4].

Kixmuller et al. offer an explanation for halobacteria’s survival within halite crystal deposits. Halobacteria thrive in environments with high concentrations of potassium (K+) ions. However, desiccated environments, such as deserts, do not maintain very high K+ ion concentrations. Survival is possible because of the kdpFABCQ operon, which encodes an ATP-driven potassium pump that allows the cell to accumulate K+ ions [5]. In non-desiccated regions, the function of this operon is not necessary. A knockout strain of the wild-type halobacteria, whose kdpFABCQ operon was nonfunctional, was exposed to a desiccated environment. The knockout strain yielded a viable cell count 110 times lower than that of the wild-type at the end of the desiccation period, confirming that for halobacteria to survive within halite crystals, the kdpFABCQ operon must be fully functional so that the cells can accumulate enough K+ ions for survival.

Gamma Irradiation

Kottemann et al. also exposed wild-type NRC-1 to high levels of gamma irradiation (up to 7.5 kGy, many times more than humans can withstand). As with desiccation exposure, high gamma radiation caused double-stranded breaks in the cells’ DNA, which were repaired within 48 hours of exposure. When subjected to the highest level of gamma radiation (7.5 kGy), NRC-1 did not yield a viable cell count. However, even at medium levels of gamma radiation (2.5-4 kGy), NRC-1 showed resilience and maintained a 25% viable cell count [1]. From these results, Kottemann concluded that desiccation resistance and gamma irradiation resistance are related, since NRC-1 can very efficiently repair the DNA breaks caused by both stressors. Furthermore, the natural pigmentation of NRC-1 and its habit of hiding within salt crystals, as seen previously, afford the bacteria extra protection from gamma irradiation. This experiment offers insight into the resistance of halobacteria to high levels of gamma irradiation.
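To put the 7.5 kGy figure in perspective, a quick unit conversion can be sketched as follows. Note that the ~5 Gy human whole-body lethal dose used here is an approximate literature value for illustration, not a figure from the study itself.

```python
# Illustrative scale comparison: the study's highest gamma dose vs. an
# approximate human whole-body lethal dose. The ~5 Gy LD50 is a rough
# literature value assumed for illustration only.
DOSE_NRC1_KGY = 7.5            # highest gamma dose applied to NRC-1 (kilogray)
HUMAN_LD50_GY = 5.0            # approximate human whole-body lethal dose (gray)

dose_gy = DOSE_NRC1_KGY * 1000.0    # 1 kGy = 1000 Gy
ratio = dose_gy / HUMAN_LD50_GY
print(f"7.5 kGy = {dose_gy:.0f} Gy, roughly {ratio:.0f}x an approximate human lethal dose")
```

Even the "medium" 2.5-4 kGy doses at which NRC-1 retained 25% viability are hundreds of times the dose a human could survive.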

Oxidative Stress

Oxidative stress, an overabundance of toxic oxygen species, is another extreme condition that halobacteria have been demonstrated to survive. Sharma et al. conducted an experiment on wild-type NRC-1 dealing with the transcription factor VNG0258H. As the concentration of reactive oxygen species in the surrounding environment increased, the expression of VNG0258H increased, allowing NRC-1 to survive oxidative stress. When the concentration of reactive oxygen species was decreased, the level of expression of VNG0258H decreased as well, as illustrated in Figure 1 [2].

Figure 1. This figure shows the relationship between oxygen species level over time and the expression of VNG0258H and aerobic and anaerobic genes.

In both cases of high oxygen levels, VNG0258H had the highest levels of regulatory gene expression. From this, Sharma concluded that there was a relationship between VNG0258H expression and NRC-1’s resistance to oxidative stress. For further confirmation, Sharma created a knockout strain of NRC-1 without functioning VNG0258H transcription factors and placed it in varying concentrations of reactive oxygen species. As the reactive oxygen species concentration increased, the survival of the knockout bacteria decreased, confirming the necessity of VNG0258H in the regulation of oxidative stress.

Microgravity

Dornmayr-Pfaffenhuemer et al. explored the response of halobacteria to simulated microgravity (a gravitational force 100 times weaker than Earth’s) [3]. The survival of the halobacteria species Haloferax mediterranei was tested in simulated microgravity with exposure to antibiotics. When placed in simulated microgravity, Haloferax mediterranei maintained a reasonable cell density after 6 days of exposure to the antibiotics. However, under normal Earth gravity, Haloferax mediterranei survived for a maximum of approximately 48 hours when exposed to the antibiotics. Dornmayr-Pfaffenhuemer therefore concluded that, when subjected to microgravity, the resistance of halobacteria to antibiotics and other environmental stresses increases.

Waste Management

In the future, humans could take advantage of the ability of halobacteria to survive extreme environments, particularly their waste removal capabilities. Amoozegar et al. explored the application of a particular haloarchaea strain’s response to toxic chromium waste [6]. Chromium creates very toxic saline waste, and since the waste has a high salt concentration, it is ideal for haloarchaeal growth. The researchers found that 1.5 M NaCl, 35 °C, and a pH of 8.0 were the optimal conditions for haloarchaeal removal of chromium waste, for it was at these levels that the maximum chromate removal was achieved, resulting in a final concentration of chromate ions well below 0.04 mM. Halophiles are therefore very applicable to the field of biohazard waste management, especially with regard to chromium waste.
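For context, the residual chromate ceiling can be converted into the mass-per-volume units regulators typically use. This is a hedged sketch assuming the concentration refers to chromate ions (CrO4^2-, one Cr atom each) and a chromium molar mass of ~52.0 g/mol:

```python
# Unit-conversion sketch: express the reported residual chromate ceiling
# (0.04 mM) as mg/L of chromium. Assumes one Cr atom per chromate ion
# (CrO4^2-) and a Cr molar mass of ~52.0 g/mol (illustrative values).
RESIDUAL_CHROMATE_MM = 0.04     # residual chromate concentration (mmol/L)
CR_MOLAR_MASS = 52.0            # g/mol, numerically equal to mg/mmol

cr_mg_per_l = RESIDUAL_CHROMATE_MM * CR_MOLAR_MASS
print(f"0.04 mM chromate is about {cr_mg_per_l:.2f} mg/L as Cr")
```

So the ceiling corresponds to roughly 2 mg/L of chromium; the actual residual concentrations reported were well below that.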

Conclusion

In conclusion, halobacteria are very complex organisms that are able to survive a wide range of environmental stressors, such as desiccation, high gamma irradiation, and microgravity. These versatile organisms have a variety of mechanisms with which they adapt to stressful biological environments. Not only can these organisms withstand extreme conditions, but their response mechanisms could also allow for their use in waste management. Further research could explore the application of halobacteria to desiccated environments characterized by subzero temperatures, as well as environments not present on Earth, such as those on other planets. It is likely that other unique biological responses of halobacteria will be discovered, and they may therefore provide an untapped natural resource that could be put to work to benefit our society and environment.

References

[1] Kottemann, M., Kish, A., Iloanusi, C., Bjork, S., & DiRuggiero, J. (2005). Physiological responses of the halophilic archaeon Halobacterium sp. strain NRC-1 to desiccation and gamma irradiation. Extremophiles, 9(3), 219-227.

[2] Sharma, K., Gillum, N., Boyd, J.L., & Schmid, A.K. (2012). The RosR transcription factor is required for gene expression dynamics in response to extreme oxidative stress in a hypersaline-adapted archaeon. BMC Genomics, 13, 351-367.

[3] Dornmayr-Pfaffenhuemer, M., Legat, A., Schwimbersky, K., Fendrihan, S., & Stan-Lotter, H. (2011). Responses of Haloarchaea to simulated microgravity. Astrobiology, 11(3), 199-205.

[4] Potts, M. (2001). Desiccation tolerance: a simple process? TRENDS in Microbiology, 9(11), 553-559.

[5] Kixmuller, D., & Greie, J.G. (2012). An ATP-driven potassium pump promotes long-term survival of Halobacterium salinarum within salt crystals. Environmental Microbiology Reports, 4(2), 234-241.

[6] Amoozegar, M.A., Ghasemi, A., & Razavi, M.R. (2007). Evaluation of hexavalent chromium reduction by a chromate-resistant moderately halophile.

Alzheimer’s Disease: Current Therapies and Emerging Research

Introduction

The greatest challenge in the development of effective treatments for the neurodegenerative disorder Alzheimer’s Disease (AD) is the lack of scientific consensus on its cause. Numerous hypotheses and mechanisms have been proposed, but no current hypothesis explains all observed symptoms of this disease. AD is clinically characterized by the rapid loss of cognitive ability and memory. Anatomically, AD begins with the appearance of extracellular deposits of insoluble amyloid-β protein (Aβ), known as senile plaques, and the formation of neurofibrillary tangles (NFTs). Traditionally, two clinically similar forms of AD have been described: Familial Alzheimer’s Disease (FAD), a hereditary form of AD, and Sporadic Alzheimer’s Disease (SAD), which can develop in individuals with no family history of AD. While numerous hypotheses have been posited, the mechanism by which Alzheimer’s Disease develops is still unknown [1]. It is also uncertain whether the initial causes of FAD and SAD are different. Furthermore, although Aβ plaques and NFTs are always found in the AD brain, they may represent only a final product of progressive neurodegeneration due to AD, rather than a cause [2]. Despite decades of research, there remains much to be discovered about this mysterious disease.

Two Classic Hypotheses

Traditionally, two main causal mechanisms for the development and progression of AD have been proposed. First, the Aβ “cascade” hypothesis proposes that excess production of amyloid-β, a small fibrillar peptide, leads to the accumulation of extracellular senile plaques in the spaces around synapses, which in turn leads to neurodegeneration and apoptosis [1]. Aβ is formed by the cleavage of amyloid precursor protein (APP), which is encoded by a gene located on the twenty-first chromosome. The released amyloid peptide then travels to the extracellular spaces and forms Aβ plaques. Current therapies for AD utilize a variety of mechanisms, but the majority of treatments aim to decrease amyloid production. However, the classical Aβ “cascade” hypothesis has come under scrutiny, as Aβ deposits do not correlate with clinical symptoms, and Aβ plaques have been found in the brains of individuals without AD [4]. Unlike insoluble Aβ deposition, soluble Aβ concentration does correlate with cognitive impairment. Recent research indicates that soluble Aβ oligomers, which comprise protofibril Aβ and Aβ-derived diffusible ligands (ADDLs), are also toxic [2]. Aβ oligomers are hypothesized to contribute to the suppression of long-term potentiation, the strengthening of synapses between cells in memory recall, and may be the major cause of synaptic dysfunction during the early stages of AD [1, 4]. ADDLs bind to receptors on neurons, thereby changing the structure of synapses and disrupting neural communication [3]. Protofibrils, soluble intermediates in the process of amyloid fibril formation, may contribute to neuronal death later in the progression of AD [3]. Moreover, evidence has shown that N-APP, a relative of the Aβ protein, may be more significant in neural degeneration than Aβ itself. N-APP, a fragment of APP from its N-terminus, is cleaved from APP by one of the same enzymes that cleave Aβ. N-APP triggers apoptosis by binding to a cell-surface receptor that induces cell death [3].

The other classical hypothesis centers on the hyperphosphorylation of Tau, a microtubule-associated protein that stabilizes nerve cells’ structures. Tau hyperphosphorylation is thought to cause it to dissociate from microtubules and accumulate in intracellular neurofibrillary tangles (NFTs) [5, 6]. When abnormally phosphorylated, Tau loses affinity for and dissociates from microtubules, accumulating in the neuronal perikarya and being processed into paired helical filaments (PHFs) [7]. The abnormal NFTs lead to a loss of dendritic microtubules and synapses, membrane degeneration, and ultimately cell death. In addition, hyperphosphorylated Tau sequesters normal Tau molecules into the aggregates, which in turn has a negative impact on normal microtubule function. Recent research supports the view that only the soluble, oligomeric forms of Tau are pathogenic, a result similar to that found for the Aβ hypothesis [5]. Mutations that increase the risk of early-onset AD and Tau hyperphosphorylation are colocalized with genes linked to Aβ plaque production. These genes encode APP and the membrane-spanning proteins presenilin-1 (PS-1) and PS-2 that process APP. It is therefore postulated that either a Tau gene mutation or the accumulation of Aβ plaques can trigger the accumulation of hyperphosphorylated Tau protein [6].

Alternative Pathways and Mechanisms

Though the two classical hypotheses described above have dominated AD research for the past quarter century, new research has revealed that a variety of biological mechanisms may be implicated in this disease.

Glycogen Synthase Kinase 3

Despite its name, the serine/threonine kinase known as glycogen synthase kinase 3 (GSK 3) has important functions outside of glycogen synthesis and is known to be crucial to mechanisms as varied as Wnt signaling, apoptosis, cell development and differentiation, metabolic homeostasis, inflammation, and cell polarity. GSK 3 has been linked to a remarkable variety of neurodegenerative diseases [8]. Jope et al. have proposed that inflammation is the causal link between neurodegenerative diseases and GSK 3, as GSK 3 promotes the migration of pro-inflammatory cells and the infiltration of inflammatory molecules into the brain.

Triggering Receptor Expressed on Myeloid Cells 2 (TREM2)

A rare missense mutation in the gene encoding TREM2 has been found to interfere with the brain’s ability to prevent the buildup of plaque and is linked to AD. Under normal conditions, the TREM2 gene allows white blood cells in the brain to eliminate the plaque-forming protein Aβ. However, the mutated TREM2 gene reduces these cells’ effectiveness in attacking Aβ. People with the mutated gene have five times the risk of developing AD as they age. In a study of genetic data from around the world, this mutation occurred in 0.5 to 1% of the general population, but in 1 to 2% of patients with AD [11]. This discovery draws renewed attention to the previously overlooked inflammation of the brain in AD patients and highlights the role of the immune system in the disease [11].

Translocase of the Outer Mitochondrial Membrane, 40 kD (TOMM40)

TOMM40, a recently identified risk gene for AD on the 19th chromosome, encodes the essential mitochondrial protein import translocase and is adjacent to, and in linkage disequilibrium with, the apolipoprotein E (APOE) gene. Of the three alleles (short, long, and very long), the very long allele is associated with impaired verbal memory recall. This same impairment is seen in APOE ε3/ε4 subjects with a family history of AD [12]. Although APOE ε4 and TOMM40 are associated with each other, recent research indicates that they influence age-related memory independently of each other. TOMM40 has a significant effect only before the age of 60, while APOE ε3/ε4 has a significant effect only after the age of 60 [13].

Current Therapeutic Approaches

As AD has no known cure, current treatment generally centers on maintaining quality of life. Treatments for AD often focus on symptoms; conventional treatments for depression, anxiety, and psychosis are used as they would be in non-AD patients. There are currently two major classes of medication that directly address AD: cholinesterase inhibitors and glutamate inhibitors [14, 15].

Cholinesterase Inhibitors

Cholinesterase inhibitors, the first class of AD medication developed, attempt to increase cognitive ability by increasing levels of the neurotransmitter acetylcholine (ACh) [15]. While the exact mechanism of each drug varies, all drugs in this class work by reducing the effectiveness of acetylcholinesterase (AChE), the enzyme responsible for the breakdown of ACh. Elevated ACh levels temporarily increase the ability of neurons to transfer signals to other neurons, thereby increasing cognition. This class of medication has long been considered a first line of action against mild to moderate AD. However, cholinesterase inhibitors have severe shortcomings. Their effects are short-term, and many patients do not respond to this therapy [16]. Furthermore, these drugs have no effect on the observed anatomy of the disease; Aβ plaques and NFTs remain unchanged.

Glutamate Inhibitors

The second category of drugs available for AD also attempts to modulate neurotransmission, but instead targets glutamate, another neurotransmitter [17]. Keltner and Williams note that sustained glutamate signaling has been linked to cognitive decline and neuronal death through excitotoxicity, a mechanism in which excessive receptor stimulation causes cell damage and death. Currently, only one drug in this class, memantine hydrochloride, has been FDA-approved for AD treatment. Memantine HCl has been approved for moderate to severe AD and reduces abnormal, sustained signaling of glutamate while leaving normal glutamate action unaffected. However, there is insufficient clinical data to confirm whether memantine hydrochloride will have long-term effects on neurodegeneration. Furthermore, like cholinesterase inhibitors, memantine HCl does not affect NFTs or Aβ plaques [18].

Medicines in Development

In addition to these existing classes of medication, Niikura et al. investigated proposed therapeutic options based on the Aβ hypothesis [3]. These approaches focus on the removal of Aβ plaques. Possible mechanisms of Aβ removal include suppression of the secretases responsible for the production of Aβ from APP, accelerating the rate of natural Aβ degradation by enzymes in the brain, and immunization against Aβ. However, methods to increase the rate of Aβ removal are still in their infancy, and immunization against Aβ in human trials resulted in a significant inflammatory response.

Niikura et al. reported an alternative method to combat AD-related neurodegeneration. Using Humanin, a novel neuroprotective compound, Niikura demonstrated that neurons can be successfully protected from the damaging effects of Aβ, and hypothesized that sufficient neuroprotection, combined with some level of Aβ removal, can avert neuronal death entirely.

Computational Models and Advancement in AD Research

Recently, developments in computational neuroscience have provided a way to integrate the many factors influencing the progression of AD, including the relative contribution of cell death, slowing of conduction velocities, and normal aging processes, into a complex system that mediates the interaction between the proposed mechanisms [19].

In 1994, Alvarez and Squire proposed a model of the role of key neural regions involved in AD, the hippocampus and neocortex [20]. They assumed that learning between the two occurs quickly, whereas forgetting occurs at a moderate rate; intra-neocortical learning and forgetting occur slowly. This model showed the hippocampus slowly teaching the neocortex, and when the hippocampus is lesioned, the model simulates the response of an AD brain. The advantage of this model is that it retains associations learned in early cycles, but a disadvantage is that it does not perform well on memories learned in later cycles. The model can also be used to determine how much information is lost between neocortical regions and between the neocortex and hippocampus [21].

More recently, Glaw and Skalak have developed a model to test the hypothesis that GSK-3 provides a possible link between Aβ buildup and NFT development [21]. This model found that GSK-3 had a large effect on NFT formation but very little effect on plaque formation, with no link found between Aβ plaques and NFTs [22].

Computational models continue to be improved and reused to further analyze data. For example, a 1995 model by Ruppin and Reggia showed how lesions in a neural network lead to memory loss, and how adding a local compensation factor produces a pattern of functional damage similar to that found in AD. Rowan enhanced this model with techniques more representative of current knowledge of the disease. In Rowan's model, the high density of local connections leads to synaptic redundancy and increased protection against damage [22]. By silencing the output of selected neurons to simulate axonal transmission blocked by NFTs, this model showed that at early stages of damage, retrieval of remote memories is more reliable than retrieval of recent memories. If the brain continues to exploit this effect and relies on the more readily available remote memories, the recently stored memories become still less reliable, and recall performance for recent patterns decreases. This result is similar to that seen in clinical studies [22].
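The lesioned-network idea behind these models can be illustrated with a minimal attractor-network sketch in Python. This is our own toy construction, not Ruppin and Reggia's or Rowan's actual code: memory patterns are stored Hebbian-style in a Hopfield-type network, synapses are randomly deleted to simulate damage, and recall quality is measured as the overlap between the retrieved state and the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 120, 8                      # neurons, stored memory patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix of a Hopfield-style attractor network
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0)

def recall(cue, W, steps=20):
    """Iterate the sign-threshold dynamics starting from a noisy cue."""
    s = cue.copy()
    for _ in range(steps):
        s_next = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

def overlap(s, p):
    """1.0 means perfect retrieval; near 0 means chance."""
    return float(s @ p) / len(p)

def noisy(p, frac=0.1):
    """Flip a fraction of bits to make a degraded retrieval cue."""
    q = p.copy()
    flip = rng.choice(len(p), int(frac * len(p)), replace=False)
    q[flip] *= -1
    return q

healthy = overlap(recall(noisy(patterns[0]), W), patterns[0])

# "Lesion" the network: randomly delete 75% of the synapses,
# loosely analogous to connections blocked by NFTs.
W_lesioned = W * (rng.random(W.shape) < 0.25)
lesioned = overlap(recall(noisy(patterns[0]), W_lesioned), patterns[0])

print(f"healthy overlap:  {healthy:.2f}")
print(f"lesioned overlap: {lesioned:.2f}")
```

The healthy network retrieves the stored pattern almost perfectly, while heavy synaptic deletion degrades recall; compensation schemes like Ruppin and Reggia's then rescale the surviving synapses to partially restore function.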

Conclusion

The true cause of Alzheimer's Disease remains an enigma, but recent research has elucidated many possible avenues for the development of new therapies. The classical amyloid-β and Tau hypotheses have been insufficient to explain the complexities of this disease; however, newer mechanisms, like GSK-3, and genetic targets, like TOMM40, provide new insights. Furthermore, new computational models may provide the key to developing effective treatments.

References

[1] Hooper C, Killick R, Lovestone S. The GSK3 hypothesis of Alzheimer’s disease. Journal of neurochemistry. 2008 Mar;104(6):1433–9.

[2] Hernández F, Avila J. The role of glycogen synthase kinase 3 in the early stages of Alzheimer's disease. FEBS letters. 2008 Nov 26;582(28):3848–54.

[3] Niikura T, Tajima H, Kita Y. Neuronal cell death in Alzheimer’s disease and a neuroprotective factor, humanin. Current neuropharmacology. 2006;139–47.

[4] Avila J, Medina M. The Role of Glycogen Synthase Kinase-3 (GSK-3) in Alzheimer's Disease. In: De La Monte S, editor. Alzheimer's Disease Pathogenesis-Core Concepts, Shifting Paradigms and Therapeutic Targets. INTECH; 2011. p. 197–210.

[5] Maccioni RB, Farías G, Morales I, Navarrete L. The revitalized tau hypothesis on Alzheimer’s disease. Archives of medical research. 2010 Apr;41(3):226–31.

[6] Brich J, Shie F-S, Howell BW, et al. Genetic modulation of tau phosphorylation in the mouse. The Journal of neuroscience : the official journal of the Society for Neuroscience. 2003 Jan 1;23(1):187–92.

[7] Takashima A. GSK-3 is essential in the pathogenesis of Alzheimer’s disease. Journal of Alzheimer’s Disease. 2006;9:309–17.

[8] Jope RS, Yuskaitis CJ, Beurel E. Glycogen synthase kinase-3 (GSK3): inflammation, diseases, and therapeutics. Neurochemical research. 2007;32(4-5):577–95.

[9] Hur E-M, Zhou F-Q. GSK3 signalling in neural development. Nature reviews Neuroscience. Nature Publishing Group; 2010 Aug;11(8):539–51.

[10] Zafra D, Corominola H, Domı J, Gomis R, Guinovart JJ. Sodium Tungstate Decreases the Phosphorylation of Tau Through GSK3 Inactivation. Journal of Neuroscience Research. 2006;273(October 2005):264–73.

[11] Jonsson T, Stefansson H, et al. Variant of TREM2 Associated with the Risk of Alzheimer's Disease. New England Journal of Medicine. 2012 Nov 14;107–16.

[12] De Strooper B. Loss-of-function presenilin mutations in Alzheimer disease. Talking Point on the role of presenilin mutations in Alzheimer disease. EMBO reports. 2007 Feb;8(2):141–6.

[13] Caselli RJ, Dueck AC, Huentelman MJ, et al. Longitudinal modeling of cognitive aging and the TOMM40 effect. Alzheimer’s & dementia : the journal of the Alzheimer’s Association. Elsevier Ltd; 2012 Nov;8(6):490–5.

[14] Birks J. Cholinesterase inhibitors for Alzheimer’s disease (Review). 2012.

[15] Thacker PD. Surprising discovery with Alzheimer’s Medication. Drug Discovery Today. 2003;8(9):379–80.

[16] Pepeu G, Giovannini MG. Cholinesterase inhibitors and memory. Chemico-biological interactions. Elsevier Ireland Ltd; 2010 Sep 6;187(1-3):403–8.

[17] Keltner NL, Williams B. Biological Perspectives: Memantine: A New Approach to Alzheimer's Disease. Perspectives in Psychiatric Care. 2003;10(3):4–5.

[18] Mark LP, Prost RW, Ulmer JL, et al. Pictorial review of glutamate excitotoxicity: fundamental concepts for neuroimaging. AJNR American journal of neuroradiology. 2001;22(10):1813–24.

[19] Jedynak BM, Lang A, Liu B, et al. A computational neurodegenerative disease progression score: Method and results with the Alzheimer’s disease neuroimaging initiative cohort. NeuroImage. Elsevier Inc.; 2012 Nov 15;63(3):1478–86.

[20] Alvarez P, Squire LR. Memory consolidation and the medial temporal lobe: a simple network model. Proceedings of the National Academy of Sciences of the United States of America. 1994 Jul 19;91(15):7041–5.

[21] Crystal H, Finkel L. Computational approaches to neurological disease. World Scientific. 1996.

[22] Rowan M. Effects of Compensation, Connectivity and Tau in a Computational Model of Alzheimer’s Disease. International Joint Conference on Neural Networks. 2011 Jun 30;1–8.

Intervertebral Discs and Their Interactions with Different Environments

Introduction

Back pain and joint pain from cartilage degeneration are major problems worldwide. More than 80 percent of the adult population suffers from back pain at some point in their lives [1]. The majority of this pain is due to degeneration of the cartilage that forms the intervertebral discs (IVDs). IVDs are located between vertebral bodies and serve three major functions: acting as a ligament to hold the vertebrae of the spine together, absorbing shock, and enabling the spine to rotate and bend [2]. They can wear down from overuse, injury, and aging. Because of the pain's debilitating effects, there have been many attempts to fix the problem, without much success.

One approach is to replace the damaged IVD, but IVDs have numerous functions and complex properties that are difficult to imitate. A replacement tissue must distribute load evenly, resist compression, have viscoelastic properties, and provide a smooth surface for pivoting. If the replacement tissue fails even one of these requirements, the patient cannot function fully. Many have turned to fusing vertebrae together with a prosthesis, while other patients, with smaller lesions, have attempted to regenerate the cartilage.

Several methods have been employed to restore damaged cartilage. A popular restoration method is implanting replacement tissue grafts: the damaged cartilage is replaced either with small sections from a less weight-bearing joint or with a full allograft. This treatment has been shown to decrease pain in 70 percent of patients for two to five years [2]; however, it has several problems. The replaced cartilage does not last long, so it must be replaced repeatedly to avoid pain. Furthermore, allografts often induce an immune response, which can be dangerous. A major concern in the U.S. is that these tissue grafts can break down and cause osteolysis [2].

To eliminate the need for donor sites, many have tried to heal or regenerate existing cartilage through natural processes, focusing either on enhancing the environment for regeneration or on transplanting chondrocytes to form more tissue. These techniques have not been completely successful, particularly in older populations [1]. Regeneration performs poorly because cartilage is an avascular tissue: nutrient transport and waste removal rely solely on diffusion, which makes these processes much slower and more complicated.

Another common treatment is stimulating the cartilage physically, energetically, or pharmacologically. Physical stimulation involves penetrating the subchondral bone through abrasion or drilling. The stimulation creates a full-thickness defect [1], which causes a clot to form and provides a scaffold that allows mesenchymal stem cells (MSCs) to migrate. Even though this treatment is very common, the results have been mixed, because the MSCs differentiate randomly into different cartilage cell types or, sometimes, into cells that are not cartilage at all. Moreover, the new tissue's mechanical properties and durability are inferior to those of the original tissue. Energy and pharmacological stimulation have also shown ambiguous results, requiring further research.

An important prerequisite for engineering a satisfactory functional scaffold or tissue is understanding how a cell senses and interacts with its matrix. We currently know that cells react to their environment through physical senses, or phenotypic responses. Knowing the specific interactions of IVD cells with matrix components could yield more information on the degenerative process and provide novel methods for repair.

Structure of IVDs

An IVD is a cartilaginous joint that allows flexible motion and absorbs shock; it holds the vertebrae together and limits excessive motion. An IVD is composed of three main parts: the nucleus pulposus (NP), the annulus fibrosus (AF), and the vertebral endplate (VEP). The NP is a gelatinous structure that contains hydrophilic proteoglycan (PG) and glycosaminoglycan (GAG) chains. These chains are negatively charged and maintain a large amount of water in the IVD. The main role of the NP is to support the load and distribute the weight evenly. The AF is a concentric, multilayered structure with a regular pattern of collagen type I fibers; it surrounds the NP and supports it by preventing the NP from deforming when compressed. The VEP is positioned on the top and bottom of each IVD and allows nutrients and waste to travel across it by diffusion. Because the lower IVDs are so large and the diffusion of nutrients is slow, they are at much greater risk of degeneration.

Cell-substrate interaction

Usually, tissue cells need to adhere to a solid to remain viable; hence, they are called anchorage-dependent. Yet we do not know how tissue cells distinguish the stiffness of different matrices. It is hypothesized that cells anchor to and pull on their surroundings to gauge stiffness [3]. These processes rely partly on myosin-based contractility and transcellular adhesions to apply forces to substrates. However, tissue cells not only apply forces but also respond to the resistance of the substrate by reorganizing their cytoskeletons [3]. It has been found that cell-cell contact promotes indistinguishable morphologies, while cells on stiff surfaces differ in their spreading and cytoskeletal organization. This line of research aims to understand how cells know to exert greater contractile traction forces on stiffer substrates.

Jin Yoon

Figure 1. MRI of an IVD showing the NP and AF in distinct regions (left). Schematic of the spinal column (middle). Anatomy of a normal disc with histological stain (right) [2].

Cell-matrix interaction

Proper interaction between a cell and its extracellular matrix (ECM) is crucial because this interaction is a key factor regulating cell survival, differentiation, and response to environmental stimuli [4]. Integrin receptors on the cell surface link cells to their ECM and are responsible for the aforementioned functions. Stained NP cells tested highly positive for laminin, while AF cells had minimal attachment to laminin, signifying that NP cells readily attach to laminin substrates.

In mesenchymal stem cells (MSCs), it has been found that the elasticity of the matrix can specify the lineage of the cells [7]. By crosslinking collagen-I, tissues with a range of matrix elasticity values were created. Elasticity is measured by the elastic constant E, the resistance that a cell feels when it deforms the ECM. MSCs on soft substrates (E of 0.1–1 kPa) branched and spread, and their branching density approached that of primary neurons under similar conditions. MSCs on stiffer substrates (E of 8–17 kPa) became spindle-shaped, similar to myoblasts. Finally, on very stiff substrates (E of 25–40 kPa), which mimic the crosslinked collagen of osteoids, the MSCs' morphology was similar to that of osteoblasts. Beyond matrix elasticity, non-muscle myosin II (NMM II), which is thought to sense matrix elasticity by exerting force on the substrate through focal adhesions, also plays a role in lineage specification. The researchers found that MSCs treated with blebbistatin are prevented from differentiating. Blebbistatin is a selective and potent myosin inhibitor that blocks actin activation of NMM II ATPase activity and blocks migration and cytokinesis in vertebrate cells [7]. Adding blebbistatin while plating MSCs prevents the cells from branching or spreading; however, if it is added 24 hours after plating, no significant changes are observed. From this research, it is clear that matrix elasticity must be optimized for regeneration.
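The stiffness-to-morphology correspondence reported by Engler et al. [7] can be summarized as a simple lookup. This is only an illustrative sketch of the reported ranges; the function name, and the behavior for stiffness values between the reported ranges, are our assumptions.

```python
def predicted_morphology(E_kPa: float) -> str:
    """Map substrate elastic modulus E (kPa) to the MSC morphology
    reported on substrates in that stiffness range [7]."""
    if 0.1 <= E_kPa <= 1:
        return "neurogenic: branched, neuron-like"
    if 8 <= E_kPa <= 17:
        return "myogenic: spindle-shaped, myoblast-like"
    if 25 <= E_kPa <= 40:
        return "osteogenic: osteoblast-like"
    return "outside the reported ranges"

print(predicted_morphology(0.5))   # soft, brain-like matrix
print(predicted_morphology(12))    # intermediate, muscle-like
print(predicted_morphology(30))    # stiff, osteoid-like
```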

Cellular responses to load

IVD cells are built to withstand pressure. However, these cells respond differently based on the type, duration, and magnitude of the load [6]. Different loads can cause IVD cells to exhibit either anabolic or catabolic responses: low to moderate magnitudes of compression or pressure generally increase anabolic responses, while high magnitudes increase catabolic responses. For each cell type, a range of magnitudes and frequencies has been observed that promotes biosynthesis, meaning there is a physiological window of stimuli that maximizes biosynthesis and cell-mediated repair. Furthermore, inner AF and NP cells showed similar responses to different loads, while the outer AF responded differently; this similarity suggests that inner AF and NP cells experience similar stimuli and may respond more like each other than like cells of the outer AF.

Mechanosensing

A study aligned with our goal has previously been done on cardiac cells. Cell-to-cell interactions are very important for cardiac cells to function properly. Many studies have examined cell-ECM interactions, but we do not know much about cell-to-cell mechanosensitivity and mechanotransduction. Supporting previous conjectures, results from related studies have shown that substrate stiffness has a significant effect on cell shape, myofibrillar maturation, and expression of specific transcription factors, especially at a certain optimum stiffness.

One study investigated the role of intercellular adhesions by examining the effect of N-cadherin-mediated mechanotransduction on the morphology and internal organization of cardiac myocytes [8]. N-cadherin (neural calcium-dependent adhesion molecule) is a type-1 transmembrane protein that plays a vital role in cell-cell adhesion. Mechanotransduction is any mechanism by which a cell converts a mechanical stimulus into chemical activity. The researchers found that disturbing the assembly of actin with cytochalasin D inhibits cadherin-mediated adhesions.

Another finding from this experiment was that the cell-spreading area of myocytes also depends on the stiffness of the substrate. Elasticity, or the stiffness of the substrate, was measured using an atomic force microscope. Cells grown on soft substrates did not spread as much as cells grown on stiffer substrates, as shown in figure 2. It can be deduced from this study that, in addition to ECM-mediated forces, cell-to-cell mediated forces have a great effect on cell morphology, adhesion, and spread area.

Conclusion

There have been many attempts to correct damaged IVDs, but all have had drawbacks. If we understand how and why IVDs degenerate, we can devise a solution. To understand the process behind regeneration, researchers have investigated how cells behave and react to outside forces, but many more interactions remain to be researched. Furthermore, we are looking into ways in which cadherins can change downstream signaling cascades. Cadherins are important because these protein molecules help cells adhere to one another; without cadherins, cells would not be able to function together and accomplish their goals. At this point, we need to understand how changes in the substrate affect cell responses. Elucidating this process can help us find mechanical signaling targets and perhaps reverse the degeneration of IVD cells.

Figure 2. Neonatal ventricular rat myocytes plated on gels of varying stiffness (A-F). G (top of page) is a comparative bar graph of cell-spreading area on extracellular matrices of varying stiffness [8].

References

[1] Johnna S Temenoff, Antonios G Mikos, Review: tissue engineering for regeneration of articular cartilage, Biomaterials, Volume 21, Issue 5, March 2000, Pages 431-440, ISSN 0142-9612, 10.1016/S0142-9612(99)00213-6.

[2] Benjamin R. Whatley, Xuejun Wen, Intervertebral disc (IVD): Structure, degeneration, repair and regeneration, Materials Science and Engineering: C, Volume 32, Issue 2, 1 March 2012, Pages 61-77, ISSN 0928-4931, 10.1016/j.msec.2011.10.011.

[3] Discher, Dennis E, Paul Janmey, and Yu-Li Wang. “Tissue cells feel and respond to the stiffness of their substrate.” Science 310.5751 (2005) : 1139-1143.

[4] Gilchrist, C. L., et al. “Functional Integrin Subunits Regulating Cell-Matrix Interactions in the Intervertebral Disc.” J Orthop Res 25.6 (2007): 829-40. NLM.

[5] Gilchrist CL, Francisco AT, Plopper GE, Chen J, Setton LA. Eur Cell Mater. 2011 Jun 20;21:523–32.

[6] Setton LA, Chen J, 2004, Intervertebral disc cell mechanics and biological responses to load. Current Opinion in Orthopedics. 15(5):331-340, October 2004.

[7] Adam J. Engler, Shamik Sen, H. Lee Sweeney, Dennis E. Discher. Matrix elasticity directs stem cell lineage specification. Cell. 2006 August 25; 126(4): 677–689. doi: 10.1016/j.cell.2006.06.044.

[8] Chopra A, Tabdanov E, Patel H, Janmey PA, Kresh JY. Cardiac myocyte remodeling mediated by N-cadherin-dependent mechanosensing. Am J Physiol Heart Circ Physiol. 2011 Apr;300(4):H1252–66.

Effect of Backpack Load on Gait Parameters

Introduction

Most high school students carry heavy backpacks that substantially influence their posture and gait by adding load while they walk, and the general consensus is that the gait of a backpack-wearing student differs substantially from the gait of an unburdened student [1]. Researchers have studied the gait of people of all ages, with and without backpacks, using force plates and camera systems to record data, although other types of measuring devices are possible [2]. However, few studies take into account factors such as height and physical fitness [3], and virtually none have studied students between the ages of 15 and 18.

Review

Overview of Gait

Gait Cycle

The gait cycle represents the events between successive points of contact of a single foot. There are two sets of terms used to describe the gait cycle, shown in figure 1, but most researchers use the more recent set created by the Rancho Los Amigos Hospital [4], which consists of the "heel strike" and the "toe-off." The gait cycle has two basic components: the swing phase and the stance phase. The swing phase (32–38% of the gait cycle) occurs when the foot is completely off the ground, between a toe-off and a heel strike; it consists of three parts: initial swing, mid-swing, and terminal swing. The stance phase (62–68% of the gait cycle) occurs when the foot is planted firmly on the ground; it consists of five parts: initial contact, loading response, mid-stance, terminal stance, and pre-swing. The period when both feet are touching the ground is called the "double limb support time," and the period when only one foot is on the ground is called the "single limb support time." Other terms used to describe gait are cadence, stride length, and step length. Cadence is the number of steps per minute, stride length is the distance between heel strikes of the same foot, and step length is the distance between the heel strike of one foot and the heel strike of the other [4].
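These definitions make average walking speed a direct product of cadence and step length. A minimal sketch (the helper function name is ours; the example values are the averages reported by Kadaba et al., discussed later in this review: 113 steps/min and a 1.35 m stride, i.e. a 0.675 m step):

```python
def walking_speed_m_per_s(cadence_steps_per_min: float,
                          step_length_m: float) -> float:
    """Speed = steps per second x distance covered per step."""
    return (cadence_steps_per_min / 60.0) * step_length_m

# Stride length is two step lengths, so a 1.35 m stride is a 0.675 m step.
speed = walking_speed_m_per_s(113, 1.35 / 2)
print(f"{speed:.2f} m/s")  # about 1.27 m/s
```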

Anatomical Reference System

Most clinicians and researchers use a standard anatomical reference system. A person is bisected into right and left halves by the sagittal plane, front and back halves by the coronal plane, and top and bottom halves by the transverse plane. Abduction of a limb segment refers to moving it away from the body, and adduction means the opposite. Flexion refers to bending a joint, whereas extension refers to extending the joint.

Figure 1. The gait cycle and gait terms [4].
Alice Li

Muscles and Joints

The Musculoskeletal System

The human body contains over 200 joints, 206 bones, and about 640 different muscles. The musculoskeletal system can be described as a machine capable of applying forces to other objects. Muscles create forces; the more muscle used to create a force, the greater the force. If all the muscles of the human body worked simultaneously in the same direction, they could move about 22 tons. However, muscles are arranged to work in pairs: one muscle acts as the agonist, the other as the antagonist. Working against each other prevents either from overstretching [5].

There are two types of contraction: isotonic and isometric. Isometric contraction occurs when the muscle is activated but its length does not change. Isotonic contraction occurs when the length of the muscle changes, and can be further divided into two types: concentric and eccentric. Concentric contraction, the focus of many studies, occurs when the muscle shortens, decreasing the tension upon it. Eccentric contraction occurs when the muscle lengthens, increasing tension [2].

Hip Joint

The hip joint consists of the head of the femur and the acetabulum of the pelvis, as shown in figure 3. It can move 140 degrees forward and 15 degrees back, 30 degrees outward and 25 degrees inward, and rotate 90 degrees outward and 70 degrees inward [6]. When both feet are on the ground, no muscle contraction is needed to maintain posture. However, in a single-leg stance, such as during the single limb support phase of walking, the abductor muscles must exert torque on the hip joint to counteract the torque exerted by the body's weight [5].
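The single-leg torque balance described above can be made concrete: for the pelvis to stay level, the abductor force times its moment arm must equal the torque of body weight about the hip center. A minimal sketch; the moment-arm values below are illustrative textbook-style numbers, not measurements from any study cited in this review.

```python
def required_abductor_force(body_weight_N: float,
                            d_weight_m: float = 0.10,
                            d_abductor_m: float = 0.05) -> float:
    """Solve the static balance F_ab * d_ab = W * d_w for F_ab,
    the hip abductor force needed to hold the pelvis level."""
    return body_weight_N * d_weight_m / d_abductor_m

# With a weight moment arm twice the abductor moment arm, a 700 N
# person needs about 1400 N (twice body weight) from the abductors.
print(required_abductor_force(700.0))
```

This simple ratio is why single-leg stance loads the hip far beyond body weight, and why the single limb support phase of walking is mechanically demanding.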

Knee Complex

The knee complex receives very high loads during dynamic weight bearing, which is why it is one of the most commonly injured joints in sports. It consists of two separate joints: the tibiofemoral joint and the patellofemoral joint. An important part of the knee is the menisci, which act as shock absorbers for the joint and facilitate normal movement between the two bones. The knee also uses two ligaments, the anterior cruciate ligament and the posterior cruciate ligament, to stabilize the knee during tibial axial rotation [6].

Ankle and Foot Joint

The foot consists of 28 different bones, and the ankle, 3 bones. The ankle is a hinge joint, and consists of the tibiotalar, fibulotalar, and the tibiofibular joints. In addition to the ankle joint, the foot also consists of five metatarsophalangeal joints, where the toes connect to the rest of the foot. This joint is heavily employed during walking, especially during the toe-off phase [6].

During walking, most of the weight the foot bears is distributed to the rearfoot, or heel. During the heel strike, the heel bears almost all of the force exerted by the rest of the leg. Figure 2 shows how the center of pressure on the sole of the foot changes during the stance phase of a stride.

Figure 2. Progression of the center of pressure upon the foot during normal gait [6].

The ankle's range of motion in the sagittal plane extends from 10 degrees of dorsiflexion (pointing the toes upward) to 12 degrees of plantarflexion (the inverse of dorsiflexion) [6].

Normal Gait

In 1990 at Helen Hayes Hospital in New York, Kadaba et al. measured the spatiotemporal parameters and joint angles of 40 healthy young adults, 28 male and 12 female. They found an average stride length of about 1.35 ± 0.12 m, an average cadence of 113 ± 9 steps/min, and a stance phase of about 61% of the gait cycle.

About a decade later in Korea, Cho et al. performed a similar experiment but obtained different results, which may be attributed to differences between the American and Korean populations, or to the fact that Cho et al.'s subjects walked barefoot, while Kadaba et al. did not mention whether their subjects wore shoes.

In Cho et al.'s data, the average stride length is about 20 centimeters less than Kadaba's for both males and females, which can be attributed to height differences between the two sets of subjects. There is also a 4 steps/min difference between the male cadences. However, their data on the stance phase agree at around 61%, and both obtained similar results for female cadences.

In Kadaba's data, the bold line represents the mean values and the dotted lines represent the deviation, while in Cho et al.'s, the dotted lines are the graphs of the females and the bold lines are the graphs of the males. Their kinematic data match fairly well for the hip. As shown in figure 3, the two experiments obtained very similar data for hip flexion and extension as well as hip adduction and abduction, with peak angles of about 40° and 6°, respectively. There is some discrepancy in transverse joint motion, or hip rotation; however, Kadaba's bold line is an average, while Cho et al. did not average the data from both genders. If the two were averaged, the graphs from both studies would look more similar.

Figure 3. Comparison of hip kinematics between studies. Cho's data is on the left, and Kadaba's on the right.

Figure 4. Comparison of knee kinematics between studies. Cho's data is on the left, and Kadaba's on the right.

As with the kinematics of the hip, the kinematics of the knee in the two studies are very similar, with peak knee flexion angles of approximately 60° and peak knee varus angles of around 5°. As shown in figure 4, Cho et al.'s graphs of knee varus and valgus angles look dissimilar because the data from males and females were not averaged.

The kinematics of the pelvis also show some differences between the studies. The average pelvic tilt in Kadaba's study is about 15°, whereas in Cho et al.'s it is approximately 10°. However, the deviation is the same for both, and this difference could be attributed to the way the two studies defined their axes of reference. The other graphs look fairly similar, with both maximum pelvic obliquity and maximum pelvic rotation at 5°.

Figure 5. Comparison of pelvis kinematics between studies. Cho's data is on the left, and Kadaba's on the right.

Gait Under Load

Double Strap Backpack

Wearing a backpack increases the double limb support time of the gait and decreases swing time. Wang et al. discovered this trend in 2001 in an experiment with college-aged students [7]. In the following years, several other researchers reported the same results with younger students, from age 9 to age 15 [3,8,9]. Connolly et al. found that the double support time of middle-school students increased from about 19% of the gait cycle when walking without a backpack to 21% when walking with one.

Similarly, Chow et al. reported that the mean double limb support time of adolescent girls increased from about 11.1% to 12.4% when the load increased from 0% of the wearer’s body weight (BW) to 15% BW as indicated in figure 6.

Although these changes in percentage are quite small, they do indicate that there is a small difference between the gait of a person wearing a backpack and the gait of a person not wearing a backpack.

There is also a discrepancy in the spatiotemporal parameters, especially stride length. Pascoe et al. claimed that wearing any kind of bag decreased the stride length of 11-13 year olds; in contrast, Connolly did not find any significant difference between loaded and unloaded stride lengths.

Both Chow et al. and Hong et al. obtained results similar to Connolly's. Hong believes it is difficult to compare their results to Pascoe et al.'s, as Pascoe did not mention the walking distance or walking velocity of his subjects [10].

Critical Limit

Another topic of debate among the pediatric medical community is the “critical limit,” or the weight at which a backpack becomes too heavy and causes dramatic changes in gait, potentially contributing to injury or back pain [8].

Chow et al. claimed that the critical limit is between 10-12.5% of the wearer’s body weight using several parameters such as peak knee flexion and peak hip rotation moment.

Figure 7 shows the dramatic change in peak knee extension moment as soon as the load is increased from 10% BW to 12.5% BW, illustrating the critical value at which the knee begins to react differently to the load.

However, even though the parameters indicate major changes in gait pattern, these changes may not necessarily be detrimental, especially when the rest of the body’s movements are taken into consideration [8].

In a different experiment, Hong had his 9 to 10 year old subjects walk four different distances in increasing order while wearing backpacks of different weights, and recorded their trunk inclination angles for each distance and weight, as shown in Figure 8.

Note that there is a significant increase in trunk angle between 15% BW and 20% BW, suggesting that 15% BW is the critical load, rather than the 10-12.5% BW given by Chow's experiment. The difference between the two experiments can be attributed to gender differences, as Hong used 9-10 year old boys while Chow used 10-15 year old girls. In addition, Chow came to his conclusion by analyzing gait parameters, while Hong observed the trunk's angle of inclination.

Despite this evidence, the exact value of the critical limit, and whether it exists at all, remain undetermined. No studies have examined whether consistently exceeding the critical limit causes any sort of injury, aside from a few surveys of students that give somewhat vague results [1].

Figure 6. Effect of load weight on double support time.
Figure 7. Effect of backpack load on peak knee extension moment.
Figure 8. Effect of load on trunk inclination [9].

Conclusion

Many studies have examined how wearing a backpack affects the wearer's gait, and their results conflict slightly. Although many of these differences can be attributed to demographic differences among the subjects, there is still much research to be done. For instance, few researchers have looked into the differences between wearing a backpack on only one shoulder instead of on both shoulders. In addition, previous studies have not taken into consideration the height and physical condition of their subjects, which may be a factor in a gait's response to loading [3]. These factors may be especially significant in children and adolescents, as their bodies have not yet fully matured. There has also been no research done on 16-18 year olds, with most researchers focusing on children around the age of 10-12 or on fully grown adults. There is also no consensus on the "critical limit" weight, as different researchers' claims range from 10% BW to 20% BW to no critical limit at all [8,9].

In the United States, the average backpack weight for an elementary school student is 17% BW, with some students carrying up to 30% or 40% BW. Meanwhile, about 50% of adolescents complain of back pain, a figure that rises to more than 60% by adulthood. Back pain is most likely related to backpack usage, as wearing a backpack forces the back muscles to flex in response to the torque the backpack applies to the body, so finding this "critical limit" may be key to preventing musculoskeletal harm [1].
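The percentages above can be made concrete with a short calculation. The sketch below is purely illustrative (the function names and the example weights are the author's editorial assumptions, not part of any cited study); it expresses a backpack load as a percentage of body weight and compares it against the candidate critical limits discussed here:

```python
# Hypothetical illustration: express a backpack load as a percentage of body
# weight (BW) and compare it against candidate "critical limit" values from
# the literature (10-12.5% BW per Chow et al.; 15% BW per Hong).

def load_percent_bw(backpack_kg, body_kg):
    """Backpack load as a percentage of the wearer's body weight."""
    return 100.0 * backpack_kg / body_kg

def exceeds_limit(backpack_kg, body_kg, limit_pct):
    """True if the load exceeds the given critical-limit candidate."""
    return load_percent_bw(backpack_kg, body_kg) > limit_pct

# Example: a 40 kg student carrying a 6.8 kg backpack -- 17% BW, matching
# the reported U.S. elementary school average.
pct = load_percent_bw(6.8, 40.0)
print(f"{pct:.1f}% BW")                  # 17.0% BW
print(exceeds_limit(6.8, 40.0, 15.0))    # True  (over Hong's 15% BW limit)
print(exceeds_limit(6.8, 40.0, 20.0))    # False (under a 20% BW claim)
```

Under Hong's 15% BW figure, even the average American student's backpack would already be over the limit, which is consistent with the concern raised in this section.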

References

[1] Cho, S. H., Park, J. M., & Kwon, O. Y. (2004). Gender differences in three dimensional gait analysis data from 98 healthy Korean adults. Clinical Biomechanics, 19(2), 145-152.

[2] Cottalorda, J., Bourelle, S., & Gautheron, V. (2004). Effects of backpack carrying in children. Orthopedics, 27(11), 1172-1175.

[3] Winter, D. A. (2004). Biomechanics and motor control of human movement (3rd ed.). New York, NY: Wiley.

[4] Connolly, B. H., Cook, B., Hunter, S., Laughter, M., Mills, A., Nordtvedt, N., & Bush, A. (2008). Effect of backpack carriage on gait parameters in children. Pediatric Therapy, 20(4), 347-355.

[5] Cuccurullo, S. (2004). Physical medicine and rehabilitation board review. New York, NY: Demos Medical Publishing. Retrieved from http://www.ncbi.nlm.nih.gov/books/NBK27235/

[6] Watkins, J. (1999). Structure and function of the musculoskeletal system. Champaign, IL: Human Kinetics.

[7] Nordin, M., & Frankel, V. H. (2001). Basic biomechanics of the musculoskeletal system (3rd ed.). Philadelphia, PA: Lippincott Williams & Wilkins.

[8] Wang, Y., Pascoe, D. D., & Weimar, W. (2001). Evaluation of book backpack load during walking. Ergonomics, 44(9), 858-869.

[9] Chow, D., Kwok, M., Au-Yang, A., Holmes, A., Cheng, J., Yao, F., & Wong, M. S. (2005). The effect of backpack load on the gait of normal adolescent girls. Ergonomics, 48(6), 642-656.

[10] Hong, Y., & Cheung, C. (2003). Gait and posture responses to backpack load during level walking in children. Gait and Posture, 17(1), 28-33.

[11] Pascoe, D. D., Pascoe, D. E., Wang, Y. T., Shim, D. M., & Kim, C. K. (1997). Influence of carrying book bags on gait cycle and posture of youths. Ergonomics, 40(6), 631-641.

[12] Kadaba, M. P., Ramakrishnan, H. K., & Wootten, M. E. (1990). Measurement of lower extremity kinematics during level walking. Journal of Orthopaedic Research, 8(3), 383-392.

Featured Scientist: An Interview with Dr. Robert Lefkowitz

Left to Right: BSS Faculty Sponsor Dr. Jonathan Bennett, Tejas Sundaresan, Dr. Robert Lefkowitz, Emmanuel Assa, Halston Lim. Photo Credit: Brian Faircloth.

Dr. Robert Lefkowitz is the James B. Duke Professor of Medicine at Duke University. Having studied medicine at Columbia University, he completed his medical residency at Massachusetts General Hospital, after which he shifted his focus to medical research. For the past forty years, he has studied cell signaling mechanisms and receptors, and he is best known for his pioneering studies of the molecular pathways and structure of the G-protein coupled receptor. A Howard Hughes Medical Institute Investigator, he received the 2012 Nobel Prize in Chemistry for his work. The BSS staff met with Dr. Lefkowitz for his insight and advice for the aspiring high school scientist after he delivered the keynote address at the 2013 North Carolina Student Academy of Science meeting, hosted on the NCSSM campus.

Can you think back to when you were in high school? Back then, what were your favorite activities and academic interests?

Let’s see what I looked like back then [Dr. Lefkowitz shows us his yearbook photo from 55 years ago]. This should be 1959. Orchestra, Dynamo. Dynamo was the literary publication. So I was an editor for not a scientific magazine, but a literary magazine. Swim squad. Biology Club. We had several [sports] teams. Interestingly, a couple of our teams were regularly among the best in the city. Of course, we had a math team, a chess team, etc… Did I have any extracurriculars other than those back then? Not really. Subjects, I loved chemistry really. I took AP Chemistry and that was my major actually… It’s really important to discover what your gifts are. Everyone has gifts, and it’s great to figure out what you’re really good at, because that’s what you want to emphasize. Things that come easy to you. It’s worth thinking about.

You had mentioned earlier in your talk that medical research didn’t really cross your mind until the end of your two years at the NIH, where you had initial “successes” in research. What advice do you have for individuals who are researching but who haven’t obtained similar “successes” yet in their fields?

Let’s say I went off to residency, having met with nothing but unrelenting failure for two years. There’s no way – I can’t imagine – I would have gone on into a career in research. But let me tell you something that’s really important; it’s going back to those failure things. This guy Kobilka, by the way [referring to Dr. Brian Kobilka, co-recipient of the 2012 Nobel Prize in Chemistry who was previously a post-doctoral researcher under Dr. Lefkowitz]… If you would have said to me, out of the 250 or so you’ve trained, who were the best: hands down, I would have said Kobilka and this other guy. Now, Kobilka met with no success in my laboratory for two and a half years. If he had left after two years, he’d be practicing cardiology. And this is scary. What I have observed is that someone who is good at [research] will ultimately succeed. If I take a look over the people I’ve trained and made a graph, success versus how long it took them to get something going, I would say it’s a reverse correlation. The better people are, the longer it takes. Now it’s not in every case and I’m not saying p is less than .05, but in my mind, there is some correlation. Why might that be so? I have found that the best people are drawn to the most difficult and challenging problems. And the more challenging the problem, the longer it takes to make any headway at all. And two years of failure, in the big picture, is nothing.

Many research scientists possess PhDs, while you have a medical doctorate. How has this medical background shaped your research?

You’d be amazed how many people who win basic science prizes never got a PhD. … One thing, getting an MD teaches you discipline – that’s for sure. It selects for disciplined people, people who are clinically trained, like Kobilka and myself. In my day, surviving an internship in residency was no mean feat. You couldn’t even sleep. It really solidified work ethic, etc… If you are a really good physician, [you are] learning how to interrogate a sick patient. It’s almost like being a prosecuting attorney learning how to cross-examine. Science is the cross-examination of a problem: every experiment is a question. You got to ask exactly the right question. And that’s exactly what science is. You have to do exactly the right experiment. And that’s what failure’s about. You ask the wrong question a hundred times. But each time, you learn that’s not the way to go. And then your question gets sharper and sharper. And finally, you ask exactly the right question, and bingo, you got it.

Can you tell us a little about the role of biology in business, based on your experiences? Do you have any advice for students who are interested in working in the biotechnology industry?

It’s interesting. The name of my company is Trevena…. We started this company five years ago, and the nucleus that formed the company was three of my senior post-docs, who had been very much involved during the previous five years in developing the body of science that formed the platform of the company – the idea that you could signal down these different pathways. My first piece of advice: become a scientist first. Really get the background first. You want to be able to decide for yourself, “is this a good opportunity?” I would say become a scientist, and maybe do a post-doc, and then do business.

Is there any important advice for an aspiring scientist in academia?

Figure out what you like. There are three ways you can earn a living. One: you have a job, like my secretary; you work 9-5 and you take home a salary. Two: you have a career. It’s not necessarily 9-5 and you do what you need to do to advance in the career. You have a skill you hone and you move up in a hierarchy. And then, the third category is a really small percentage: that’s folks like me. You have a passion. I’ve never felt I work for a living. Why am I doing this? It’s the same reason I was doing this thirty years ago. It’s my sandbox. I play. In a sense I’m always working, but it’s not work. So if you can figure it out, is there something you’re just going to love? And, don’t take advice from anybody; you have to figure it out for yourself. In the end, you have to go with your heart. It’ll become obvious to you if you’re attentive. You spend your whole life doing what you do, and it’s much better if you enjoy doing it.

Twenty years from now, what areas in biology will be really important?

There are a couple. Cancer. We got a long way to go there. Cancer is hundreds of diseases. We’re just beginning, in the last decade or so, to really make some headway in understanding some of the basic mechanisms that go awry, mutations that lead to cancer, and how to develop drugs. That’s got to be a huge area. And neurobiology, how the brain works. We really don’t have, in my way of thinking, good drugs to treat severe mental illness. The drugs that we have are crude instruments. If you hear the side effects – sudden death, bleeding from the nose, vomiting, and cardiac arrest – [drugs are] a really blunt instrument. So I think neurobiology, understanding basic neuromechanisms at a molecular level – and the tools are in place to do this. So those are my two areas in the next twenty years where fundamental biomedical research can impact upon disease. These two fields are really ripe for progress.
