Page 1

SCOPE


Dear Reader,

Scope was first published 20 years ago, and this anniversary edition aims to present articles at the forefront of scientific understanding. On the content front, we have attempted to strike a fairer balance of articles, to represent more adequately the full range of scientific interests at Haberdashers'. On the literary front, we have pushed the boundaries further with a new 'Review' section, which provides an opportunity for discussion of the nature and implications of science, as well as interviews with prominent scientists. I am grateful to the Science Society for working with us to provide an unparalleled calibre of scientists to feature in Scope. Indeed, this new exploration of the very nature of science is of fundamental importance to those concerned with it. To borrow a phrase from the evolutionary biologist Richard Dawkins, the power of science lies in its ability to relieve "the anaesthetic of familiarity": its innate curiosity reminds us to disengage from the sedative of dull ordinariness and gently pokes us out of a slumber of ignorance.

In understanding science we might also wonder how far it has come throughout history. One of the best analogies for keeping man's endeavours in perspective is to imagine the history of life on our planet as the span of one's outstretched arms, from the origin of life at one fingertip to the present day at the other. From one end to your opposing shoulder, all that existed was bacteria. Invertebrates (which still constitute 95% of all life forms present today) eventually emerge at approximately the far elbow. Move along: the fingernail of your middle finger represents roughly the proportion of time that Homo sapiens has inhabited the planet. And finally, what of recorded human history? Wiped off in a nail filing. I think that puts into perspective the time frame, and the transcendental quality, in which science operates.

It is this very style of debate and question that I hope you will find herein. I am indebted to my team, who have provided outstanding literature for the magazine, as well as lively debate and strong opinions about its construction. It is a testament to them that we have managed to move to full-colour print and perfect binding, doing full justice to the high quality of the articles, without an increase in our budget. Lastly, I would like to thank Mr. Delpech, whose encyclopaedic knowledge, careful guidance and unfailing support have ensured the success of this magazine for many years. Enjoy the magazine,

Casey Swerner, Chief Editor of Scope



A36851 HaberAske Scope TEXT AW:A31822 HaberAske Skylark

12/3/10

10:35

Page 3

CONTENTS

PHYSICAL SCIENCES
The Riemann Hypothesis.....................................5
The Role of Computational Automation in Science.......7
Visual Effects Engineering................................9
Superconductors: Supermaterial of the Future?.........11
Einstein's Annus Mirabilis...............................12
Quantum Of Solace: Heisenberg's Uncertainty Principle.14
Atmosphere: Earth's Great Defence.......................15
The Path Towards Finding a Magnetic Monopole..........17
Chemiluminescence........................................18

REVIEW
The Rowboat's Keeling....................................20
An Interview With Simon Baron-Cohen....................23
An Interview With Michael Lexton........................24
Entropy & The Theory Of Evolution.......................26
An Interview With Aubrey de Grey........................28

BIOLOGICAL SCIENCES
Abiogenesis: The Origin Of Life.........................31
Evolution Of The Nervous System.........................33
Alzheimer's: Hope At Last?...............................34
Cocaine Dependence: Implications & Treatments.........35
The Role Of Pharmacogenetics In Modern Medicine.......37
Cancer Therapy: Treatment & Treatment Realities.......39
The Application Of Nanotechnology In Medicine.........41
Heart Transplants: End Of An Era?.......................42
Haemophilia A: How Successful Has The Genetic Engineering Of Recombinant Factor VII Been?..........44
Swine Flu Epidemiology & Pathology......................46
Ozone Therapy In Dentistry...............................48

Bibliography..............................................50



Physical Sciences




Scope 2009/10 Physical Sciences

The Riemann Hypothesis Andrew Yiu

An Introduction to Primes

In chemistry, we are taught that all matter is made up of atoms: if you kept cutting something into smaller and smaller pieces, you would eventually end up at the atomic level. In this sense, prime numbers are the atoms of maths. In other words, every integer greater than 1 can be expressed as a product of prime numbers. Suppose this claim is false. In that case, there must be a smallest number that cannot be expressed as a product of primes. Let this number be N. Since N cannot be a prime number (a prime is trivially a product of one prime, itself), it can be expressed as a product of two smaller numbers. Hence:

N = ab (where 1 < a, b < N)

N is the smallest number that cannot be expressed as a product of primes, so a and b, being smaller, can each be written as a product of primes. Suppose a has prime factors p1 p2 ··· pm, and b has prime factors q1 q2 ··· qn; then N can be expressed as the product p1 p2 ··· pm q1 q2 ··· qn, which is in fact a product of prime numbers. There is a contradiction, and therefore the claim is true.
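The decomposition argument above can be made concrete with a short program. The following is a minimal sketch in Python (the language choice is ours, not the article's), factorising a number by trial division:

```python
def prime_factors(n):
    """Decompose n (> 1) into its prime factors by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

# Every integer greater than 1 has exactly one such decomposition.
print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
print(prime_factors(97))   # [97] -- 97 is prime
```

The uniqueness of the resulting list for each input is exactly the Fundamental Theorem of Arithmetic discussed next.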

Not only this, but the Fundamental Theorem of Arithmetic also states that an integer greater than 1 can be written as a unique product of primes, i.e. there can only be one set of primes that multiply to make a certain number. So if individual prime numbers are like atoms, products of prime numbers are like strands of DNA, with each number having a distinct strand that makes it different from the rest.

What makes prime numbers important?

A very important use of prime numbers is in the world of internet security. Public Key Cryptography is a popular method of shielding communication from third parties by the use of an asymmetric key system. The basic idea is that a user has a public key and a private key. The public key is published in a directory of other network users and can be accessed by anyone; the user keeps the private key secret. In order to send a message to another network user safely, you would take his/her public key and encrypt the message. When it is sent, it would be almost useless to any third parties, as having the encrypting public key is of no use. To read the message, a decrypting key is necessary, and this is only in the hands of the receiver. To help visualise this, imagine two people called Alice and Bob. Alice gives out open padlocks, for which only she has the key. Bob wants to send a confidential letter to Alice but knows that anyone along the way can open it and look inside. Therefore, he takes one of the open padlocks and locks a box with the letter inside. Now no one can open it until the package gets to Alice. Of course, the padlock could be picked, but this is made so difficult that it is virtually impossible to do in a short amount of time.

This is where primes come into the story. Firstly, to create the public key, two random large primes, p and q (at least 100 digits each), are multiplied together to create n. The totient of n (written φ(n)) is defined as the number of positive integers less than or equal to n that share no common factor with n other than 1 (such numbers are said to be coprime to n); e.g. the totient of 10 is 4 (1, 3, 7, 9). For n = pq it is given by the product (p − 1)(q − 1). Then e is chosen as an integer between 1 and φ(n) which is coprime with φ(n); e.g. if φ(n) = 4, e can be 3, as 3 and 4 have 1 as their largest common divisor. Now n and e are published as the public key. The private key consists of n and d, the latter being the vital item for decryption. It must satisfy the relation:

de ≡ 1 (mod φ(n))

Seemingly complicated, but all it means is that φ(n) divides (de − 1) without remainder; e.g. if φ(n) = 4 and e = 3, then d can be 7, as de − 1 = 20, which is divisible by 4.

So now that the public and private keys are made, a message can be encrypted and sent. Say Bob wants to send message M to Alice. He converts it into an integer m and calculates the value of:

c ≡ m^e (mod n)

Once Alice receives the message, she would calculate:

c^d ≡ m (mod n)

As a result, she recovers the original message from Bob. As a working example, say p and q are 11 and 13 respectively. The value of n would be 11 × 13 = 143. The totient is (p − 1)(q − 1), so φ(n) = 10 × 12 = 120. 7 shares no common factor with 120 apart from 1, so the value of e can be 7. The public key is (143, 7). A solution for d is 103, as de − 1 = 720, which 120 divides without remainder. The private key is (143, 103).
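The whole key-generation, encryption and decryption cycle can be sketched in a few lines of Python, using the article's toy numbers (real RSA keys use primes of 100+ digits). Note that Python's built-in three-argument pow performs fast modular exponentiation, and pow(e, -1, phi) computes the modular inverse (available from Python 3.8):

```python
# Toy RSA with the article's numbers; real keys use primes of 100+ digits.
p, q = 11, 13
n = p * q                    # 143, part of both keys
phi = (p - 1) * (q - 1)      # totient: 120
e = 7                        # public exponent, coprime with phi
d = pow(e, -1, phi)          # modular inverse: d*e = 1 (mod phi) -> 103

m = 5                        # Bob's message, already converted to an integer
c = pow(m, e, n)             # encrypt: c = m^e (mod n) -> 47
recovered = pow(c, d, n)     # decrypt: c^d = m (mod n) -> 5
print(d, c, recovered)       # 103 47 5
```

Running this reproduces the worked example in the text: the ciphertext is 47, and decrypting with the private exponent 103 returns the original message 5.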

Bob's message is converted into the integer m, which equals 5. Following c ≡ m^e (mod n): 5^7 = 78125, and in mod 143 (the remainder from dividing by 143) this is 47. For Alice to decrypt this, c^d ≡ m (mod n) is used: 47^103 (mod 143) = 5, as 47^103 has a remainder of 5 when divided by 143. In the end, Alice receives the original message m from Bob.

The picking of this lock is done by factorising n into p and q, then working out (p − 1)(q − 1), which would allow the picker to work out d, enabling decryption of an encrypted message. The security therefore relies on the extremely long process of factoring large numbers. This is not a question of patience: factoring a 200-digit number by brute force would take several times the current age of the universe, even if the attacker possesses a powerful computer that can test millions of candidates per second. As there is currently no efficient algorithm for such large factorisations, this technique of public key cryptography is thought of as very safe. However, this cannot be proven as of yet. In the words of Bill Gates:

"Because both the system's privacy and the security of digital money depend on encryption, a breakthrough in mathematics or computer science that defeats the cryptographic system could be a disaster. The obvious mathematical breakthrough would be the development of an easy way to factor large prime numbers."

The Prime Number Theorem and the Riemann Zeta Function

The first step in making the factoring process easier is to discover patterns in the primes and hence a way of predicting where they will occur. The problem is that the distribution of primes is seemingly random, showing little evidence of any rule governing when they appear. However, some patterns do occur. For example, all primes but one are odd, all primes but two are next to a multiple of 6, and the frequency of primes appearing diminishes as the numbers get bigger. The Prime Number Theorem can be written as:

π(n) ~ n / ln(n)

π(n) is the number of primes less than or equal to a certain real number n (e.g. π(15) = 6, for 2, 3, 5, 7, 11 and 13). This number is approximated by n divided by the natural logarithm of n. Despite providing a good idea of π(n), the theorem is still just an estimate with a margin of error, though the relative error gets smaller as n increases. Another approach comes to mind. The Riemann Zeta Function is defined as:

ζ(s) = 1/1^s + 1/2^s + 1/3^s + 1/4^s + ...

This function was initially discovered by the




great Swiss mathematician Leonhard Euler, who limited s to real values greater than 1, the reason being that the answer is then finite (e.g. for s = 2 the function equals 1/1² + 1/2² + 1/3² + 1/4² + ..., which converges to π²/6). If s = 1 (1 + 1/2 + 1/3 + 1/4 + ...), the value diverges to infinity, a sequence also known as the harmonic series. So where is the connection with primes? Euler also proved that:

ζ(s) = ∏ (over all primes p) 1/(1 − p^(−s))

The left side shows the zeta function, while the right side shows the infinite product over all primes. A huge step forward was taken by the influential German mathematician Bernhard Riemann, who redefined the function for all complex numbers s ≠ 1, and then used it to investigate the pattern of primes. His hypothesis remains arguably the greatest unsolved problem in mathematics.
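Euler's product identity can be checked numerically. In the Python sketch below (the truncation limits of 100,000 terms and primes up to 1,000 are arbitrary choices of ours), both the truncated sum and the truncated product for s = 2 land close to π²/6:

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

s = 2
# Left side: truncated sum over the integers.
zeta_sum = sum(1 / n ** s for n in range(1, 100001))
# Right side: truncated product over the primes.
euler_prod = 1.0
for p in primes_up_to(1000):
    euler_prod *= 1 / (1 - p ** -s)

print(zeta_sum, euler_prod, math.pi ** 2 / 6)
```

Both truncations agree with π²/6 ≈ 1.6449 to roughly three decimal places; taking more terms and more primes tightens the agreement.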

The Riemann Hypothesis

A complex number comes in the form a + bi, where a and b are real and i is imaginary: i is equal to the square root of −1, something that cannot be defined as a real number. The values of real numbers can all be plotted along a one-dimensional line, which does not reveal much about the pattern of primes. When complex numbers come into the question, however, a second axis can be plotted, perpendicular to the real number line, and patterns can now be seen in two dimensions (consequently, calculus can also be used for analysis). This is called the complex plane. Points are plotted with the x-value for the real part and the y-value for the imaginary part.

Figure 1 Showing the general idea of the 'complex' plane

The Riemann zeta function was already described earlier, and the real part refers to the a in the complex number s = a + bi in the function. The trivial zeros are the answers generated when s is a negative even number (i.e. ζ(−2) = 0). However, the zeros that interested Riemann were the ones made using complex numbers. An infinite number of zeros lie between real parts 0 and 1, a region known as the critical strip. Riemann went one step further and conjectured that ALL zeros which do not come from negative even numbers are located on the line with real part ½, as shown in Figure 2. In other words, the real part of any non-trivial zero of the Riemann zeta function is ½. Therefore, where ζ(s) = 0, s must be of the form s = ½ + bi.

Figure 2 Complex plane showing the critical line and strip for the Riemann Zeta Function.

If the hypothesis is true (and bear in mind that this is a very simplified approach), one of the most significant consequences would be a closing of the gap in the Prime Number Theorem. This could (although it is not certain) result in a breakthrough in large-number factoring and, as described before, may spell disaster for the Internet world. The Riemann Hypothesis is such an important question that it is one of the Clay Mathematics Institute's seven Millennium Prize Problems, with the institute awarding $1,000,000 to anyone who can solve it.




The Role Of Computational Automation In Science Matthew Earnshaw

Technologies that enhance our data collection and storage capabilities continue to change the face of science. Traditionally, scientists have carefully hypothesised and collected data, spending many years of dedicated work trying to determine its significance. This traditional scientific model is set to change as we continue further into our digital age. The data stream from the detectors of the Large Hadron Collider reaches up to 300 GB/second. The European Space Agency's Gaia mission will give us the electromagnetic spectra and astrometric positional data of approximately one billion stars. These projects have special dedicated networks to collect and store this vast amount of data. As our data collection abilities continue to expand, we will be forced to look to methods of computational analysis to help us use the data effectively and continue to make scientific discoveries. Soon the traditional model of discovery will no longer remain a feasible option. There is no doubt that computers are faster than humans. IBM's Roadrunner, the fastest supercomputer in the world, can sustain a performance of over one quadrillion (10^15) floating-point operations per second, approximately 50,000 times faster than the average modern PC. Computer technology has unequivocally revolutionised science and has permitted its expansion far beyond what anyone would have predicted a few decades ago. Scientists have widely harnessed this astonishing computing power, with supercomputers heavily utilised in disciplines ranging from meteorology to astronomy. However, as we seek to explain new scientific phenomena in today's expansive data sets, we need artificially intelligent systems that can distil the laws themselves from the inputted data, in addition to computers that merely calculate results based on known, traditionally derived, laws and principles.
In an age where data can be obtained and handled in such abundant quantities, it is surprising that automated computational analysis of this data has not advanced in step with our data handling and collection abilities. Perhaps there is a certain disconcerting feeling associated with robotic automata replacing the human worker, and therefore an underlying resistance to such systems. Nevertheless, there has at last been some recent progress in this field. In "Distilling Free-Form Natural Laws from Experimental Data", Cornell University’s

computational biologist Michael Schmidt and computational researcher Hod Lipson describe their algorithm, a set of computational instructions able to automatically reverse-engineer non-linear natural systems. Armed with only a few simple mathematical building blocks, the basic operators of addition, subtraction, multiplication and division, the algorithm was able to identify specific fundamental laws of nature without any prior knowledge about physics, kinematics or geometry. Lipson said: "One of the biggest problems in science today is moving forward and finding the underlying principles in areas where there is lots and lots of data, but there's a theoretical gap … I think this is going to be an important tool." Schmidt and Lipson's algorithm first takes the derivative of every pair of variables in a set of collected raw experimental data, to see how they vary with respect to one another. It then randomly assembles the simple mathematical operators to produce some random initial equations. Symbolic derivatives of each pair of variables for these initial candidate equations are taken and compared to the numerical derivatives taken from the raw experimental data. By finding the difference between the values, the system can evaluate the relative accuracy of the randomly generated equations, most of which are initially trivial invariants.
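The numerical-derivative step can be illustrated with a few lines of Python. This is a simplified sketch of ours, not Schmidt and Lipson's code: two invented "measured" variables, x = t² and y = t³, are sampled over time, their time derivatives are estimated by central differences, and the ratio then reveals how y varies with respect to x:

```python
def central_diff(values, h):
    """Numerical derivative of evenly spaced samples via central differences."""
    return [(values[i + 1] - values[i - 1]) / (2 * h)
            for i in range(1, len(values) - 1)]

h = 0.1
t = [i * h for i in range(50)]
x = [ti ** 2 for ti in t]      # pretend these are two measured variables
y = [ti ** 3 for ti in t]

dxdt = central_diff(x, h)
dydt = central_diff(y, h)
# How y varies with respect to x, estimated purely from the data:
dydx = [dy / dx for dx, dy in zip(dxdt, dydt)]
print(dydx[19])  # at t = 2.0, close to the analytic value 1.5 * t = 3.0
```

A candidate equation's symbolic derivative can then be compared against such data-driven estimates, which is how the system scores randomly generated equations.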

The genius of the algorithm is in its so-called "genetic" design. The best equations are retained and continually modified and retested, while the trivial results are discarded. Over time the equations 'evolve' (hence "genetic algorithm"), and after many iterations the computer converges ever more closely towards equations that replicate the raw experimental data and accurately predict data for yet-untested states. An algorithm that can find invariants and relationships in data is one thing, but Schmidt and Lipson's program can also decide what level of significance they have.

With the most basic of apparatus, Schmidt and Lipson's algorithm was able to derive well-established natural laws from experimental data of simple physical systems such as you would find in any school physics department: an air-track oscillator and a double pendulum. When they ran the algorithm on position data obtained from motion-tracking the pendulum, it soon converged on the equation of a circle, as the pendulum is confined to a circular path. When they fed it position and velocity data over time for the air track and pendulum, the system eventually converged on the Hamiltonian and Lagrangian equations, the respective energy laws of the systems. Furthermore, when the computer was given acceleration data, it produced equations of motion corresponding to Newton's second law. In the 1750s Joseph-Louis Lagrange spent 20 years researching and deriving his equations, and in the 1670s Isaac Newton worked with "obsessive devotion" to draft his laws of motion. In 2009, Schmidt and Lipson's system made the very same discoveries in a matter of hours. While the algorithm's ability to derive these laws without any prior knowledge of physics, kinematics or geometry is astonishing, this is an artificial environment, because we do indeed have such prior knowledge. By seeding the algorithm with laws the system had already derived for the simple pendulum, Schmidt and Lipson reduced the computation time needed to derive the motion laws governing the double pendulum from forty hours to seven.

In a stroke of scientific genius, Schmidt and Lipson decided to present their algorithm with randomly generated data in an attempt to fault their own system. The algorithm successfully failed to distil any equations from the data. The fundamental and generic nature of the algorithm means it can potentially be applied to just about any dynamical system, from weather patterns to population genetics. There may even be cryptanalytical applications: finding predictabilities in cryptographic systems and evaluating the entropy of pseudo-random number generators. The laws of motion and energy found by the algorithm likely pale in comparison to the complexities of the laws governing complex biological systems like the brain. Although such systems are notoriously complicated, making the discovery of the laws that govern them a real challenge for scientists, the principles of Schmidt and Lipson's algorithm are theoretically scalable to this level.

Now that such an algorithm has proven its functionality, what's next? What if the algorithm could not only analyse experimental data but also independently obtain it in the first place? Scientists at Aberystwyth University are experimenting with a robotic 'colleague' they have dubbed "Adam". While robots have long helped scientists in the lab with laborious tasks like sequencing genomes or obtaining nuclear magnetic resonance spectra, Adam is the very first robot to have made a scientific discovery on its own, without any additional human input. This is the




first time a robot has been able to formulate a hypothesis, design and perform an experiment, and return meaningful results. Crucially, Adam is also able to form new hypotheses based on its own results, thus closing the feedback loop. Don't worry though: despite the worryingly anthropomorphic name, Adam comprises a room full of interacting scientific instruments such as centrifuges, incubators and computers, and bears no human resemblance. Adam was given knowledge about yeast metabolism and was also provided with a database of information about genes and proteins involved in the metabolism of various other organisms. Adam devised a hypothesis and designed experiments to test it. In order to find out which genes coded for particular enzymes, Adam systematically grew yeast cultures with certain genes removed, and kept track of how well these new strains grew. Adam recorded the results and, depending on the growth of the cultures, was able to learn something basic about each gene. The robot can carry out over one thousand such trials each day. Over the course of several days Adam devised and performed experiments for twenty hypotheses, and twelve of these were confirmed. For example, Adam hypothesised that three genes it had identified through previous trial-and-error experimentation coded for an enzyme responsible for producing the amino acid lysine.

Figure 1 Showing the derivation of the laws governing the motion of a double pendulum. (A) A computer observes the behaviour and dynamics of a real system and (B) collects data using motion-tracking cameras and software. It then automatically searches for equations that describe a natural law relating these variables. (C) Without any prior knowledge about physics, kinematics or geometry, the algorithm found conservation equations and invariant manifolds that describe the physical laws these systems obey.
This result was manually confirmed by Adam's mortal human counterparts. While Adam's findings are relatively simple, they are nonetheless useful, and Adam was able to make the discovery in a much shorter time than his colleagues would have managed. "It's certainly a contribution to knowledge. It would be publishable," said Ross King, the computational biologist at Aberystwyth University leading Adam's development. Adam is rather specialised, as his array of instruments is very biologically focussed, limiting the range of performable tasks to highly repetitive, brute-force-style

experiments where many hundreds of samples must be identically analysed. Adam's algorithm is currently not quite as elegant as what Schmidt and Lipson have devised, being more of a trial-and-error approach than a genetic algorithm, but the combination of these two systems holds great promise. Coupled with an inherently more generic algorithm capable of distilling much more complex results from data, who knows what our artificially intelligent colleagues will be able to achieve in the future. Should scientists fear for their jobs over these emerging technologies? No, at least not yet. The initial creativity required for input and the recognition of significant output still rely heavily on human judgement. Humans are indeed required to curate the system and to explain the significance and implications of its findings. Schmidt and Lipson's algorithm produces a shortlist of the ten most accurate equations it has found, but a human researcher must hand-pick the most significant and interesting ones, balancing each equation's ability to reproduce and extrapolate from the data set against its parsimony, the number of terms it contains. Although it seems as if Adam only needs non-intellectual human intervention to remove waste and ensure it has sufficient supplies, the task of interpreting and using the results effectively, such as creating a new drug, remains solely a human concern. In any case, the robot's range of tasks remains very limited. Quite the opposite of scientists losing their jobs, the near future holds increasingly productive human-robot scientific partnerships and an increase in the rate of scientific discovery in this increasingly data-rich era. While neither system is perfect yet, for the first of their kind they are exceptionally impressive.
The ability of these systems to derive fundamental physical laws in a day and to make an independent new scientific discovery gives great promise to the role of such systems in the future. For the moment, the unique recognition and creativity of humans remains a requirement. As this field matures, however, the question will be asked: for how long?
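The "retain the best, mutate, discard the rest" loop described earlier can be illustrated with a toy genetic algorithm. The sketch below is ours, not Schmidt and Lipson's: it evolves just the two coefficients of a straight line to fit invented data generated from y = 3x + 2, whereas the real system evolves whole symbolic expressions, but the selection and mutation cycle is the same in spirit:

```python
import random

random.seed(0)
xs = [x / 10 for x in range(-20, 21)]
ys = [3 * x + 2 for x in xs]          # hidden "law" the search must recover

def error(a, b):
    """Mean squared disagreement between candidate law y = a*x + b and data."""
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Initial random population of candidate laws y = a*x + b.
population = [(random.uniform(-10, 10), random.uniform(-10, 10))
              for _ in range(50)]

for generation in range(300):
    population.sort(key=lambda ab: error(*ab))
    survivors = population[:10]       # retain the best, discard the rest
    population = survivors + [        # refill with mutated copies
        (a + random.gauss(0, 0.2), b + random.gauss(0, 0.2))
        for a, b in random.choices(survivors, k=40)]

best_a, best_b = min(population, key=lambda ab: error(*ab))
print(best_a, best_b)  # close to the hidden coefficients 3 and 2
```

Because the survivors are carried over unmutated, the best error can only improve from one generation to the next, which is what drives the convergence.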




Visual Effects Engineering Sahil Patel

From adverts on our televisions at home to the cinema, nowadays we are surrounded by some form of visual effects wherever we turn. Visual effects have become a fundamental element of almost all films made, yet there is little understanding of how photorealistic images are created. Visual effects are a broad, open aspect of film-making, and several definitions have been adopted over the years; the definition that best encapsulates the topic is: "Practices, methods and technologies relating to the creation and manipulation of elements within moving images that enable storytellers to guide an audience's conception of time, space and/or reality, thereby eliciting a desired emotional response and/or conveying critical story information."

The Basics of VFX

Motion Capture (Mocap) involves translating real human movements onto a 3-D model that mimics those actions and ultimately behaves as a human does. The process is relatively simple: an actor wears a suit dotted with markers, and the actor's performance is captured and stored as animation data. When the data is viewed on a computer, the dots that represent the outline of the actor can be mapped onto a 3-D model. The model can be designed as required but still retains the human actions from the motion capture session. One huge advantage is the freedom to place and move the virtual camera wherever desired, so the perspective of the shot is flexible at all times. However, the technique is limited in the sense that movements which do not follow the laws of physics cannot be captured.

Animation and 'CGI'

There are several techniques which all classify as animation, so there is no single definition of animation. However, the three types, in chronological order, are traditional, stop-motion and computer animation.
90% of all animated productions are computer generated, and traditional animation has almost completely been phased out of use, since even children's cartoons can now be computer generated. CGI stands for Computer Generated Imagery, a term used often and sometimes haphazardly. Computer animation is, by definition, the same as CGI, but nowadays films like Wall-E are categorised as animated films, so CGI generally refers to the creation of images intended to appear realistic and blend into the live action of the frame. Most films are shot in a 2.35:1 aspect ratio at 24 frames per second. A single frame of a complex animated film such as Wall-E can take from 2 to 15 hours to render completely. Average rendering times for one frame of CGI have increased in the last 10 years, which does not seem logical when GPU/CPU speeds and memory capacity have been doubling every 18 months. The reason for longer rendering times, even with faster computers, is that big-budget films are demanding more complex and realistic shots; with advances in rendering and ray tracing capacity, films can afford to be more ambitious. A well-known example of intricate CGI is the Transformers franchise, where any frame with three or more transformers moving in it will typically take 38 hours to render. The key to making an image realistic is ray tracing (see definitions), which makes a CGI model blend with, and react realistically to, light sources in the live-action part of the image. The realism of a visual effect is often judged by how detailed or sharp it is; however, its reaction to light and to the objects surrounding it is the real determining factor of impressive CGI. So a rasterised image with a resolution of 4000 by 3000 pixels will look far worse, in terms of realism, than an image ray traced at 1920 by 1080 pixels. In conclusion, we come back to our definition of visual effects: to guide an audience's perception of reality by obtaining as realistic an image as possible.

Extending rotoscoping: Blue/Green screen

I have explained rotoscoping (see definitions) as a way of creating composite images by pasting or drawing over filmed backgrounds. A famous use of traditional rotoscoping was in the original three Star Wars films, where it was used to create the glowing lightsaber effect, by creating a matte based on sticks held by the actors. To achieve this, editors traced a line over each frame containing the prop, then enlarged each line and digitally added the glow.
In the latter three Star Wars films, green screen was used to create the lightsaber effect. The actors still use sticks, which makes it seem as though old-fashioned rotoscoping is still in use, but the sticks simply make saber sequences easier to film practically; the lightsaber itself is added through CGI in post-production. Green screen is effectively the modern alternative to rotoscoping, built on the same idea of modifying or adding a CGI element to a filmed scene. The process is simple: the screen is green because that is the shade furthest from skin colour, so a good separation between the foreground (the actor) and the background can be achieved. The desired background can then be added in post-production

(keyed out). A blue screen serves the same purpose as a green one but is less sensitive to cameras so more light is needed to film against a blue screen. The reason for green being more sensitive is that more pixels are allocated to the green part of the spectrum than red or blue.
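The keying step itself reduces to a per-pixel decision. A minimal sketch (the dominance threshold and the pixel values are illustrative, not any particular software's algorithm):

```python
# Minimal chroma-key sketch: a pixel counts as "green screen" when its
# green channel clearly dominates both red and blue. The margin of 60
# (on 0-255 channels) is an illustrative threshold.
def is_green_screen(r, g, b, margin=60):
    return g > r + margin and g > b + margin

def chroma_key(foreground, background):
    """Replace green-dominant pixels of `foreground` with `background`."""
    return [bg if is_green_screen(*fg) else fg
            for fg, bg in zip(foreground, background)]

fg = [(200, 30, 40), (20, 220, 30)]   # an actor pixel, a green-screen pixel
bg = [(0, 0, 255), (0, 0, 255)]       # the desired backdrop
print(chroma_key(fg, bg))             # [(200, 30, 40), (0, 0, 255)]
```

This also shows why the Spider-Man costume rule matters: a green costume pixel would satisfy the test and be keyed out along with the backdrop.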

Definitions
Traditional animation - Drawings created by an artist for each frame are photographed onto motion picture film. The final product runs at about 8 drawings per second to create a fluid animation.
Rotoscoping - Invented by Max Fleischer, rotoscoping is a way of tracing over filmed footage. By tracing an object, a silhouette (called a matte) is created that can be used to extract that object from a scene for use on a different background.
Stop-motion animation - Physical objects and/or models are manipulated and each modified pose is captured onto film one frame at a time. The most famous modern example of stop-motion animation is Wallace and Gromit: The Curse of the Were-Rabbit, which used clay models to capture each frame.
Computer animation - Essentially the successor to stop-motion animation, in the sense that the computer creates each frame, but each new image is advanced along the timeline digitally. 3-D models can be built using animation software, and by rigging the model with a virtual skeleton, facial detail can be added before rendering the final frame.
Bayer filter - The filter pattern used for sensor chips in a digital camera. More pixels are dedicated to green than to red or blue, because the human eye is more sensitive to green; this produces a better colour image.



A36851 HaberAske Scope TEXT AW:A31822 HaberAske Skylark

12/3/10

10:36

Page 10

Scope 2009/10 Physical Sciences

Since blue screens need more light, they are generally used only when the foreground itself contains green. The imperative rule of green screening, therefore, is to keep whatever is in the foreground a different colour from the bright green backdrop, so the camera can distinguish the two planes of the image. An example of how vital this rule is comes from Spider-Man: in the scene where both Spider-Man and the Green Goblin are in the air, Spider-Man had to be shot in front of a green screen and the Green Goblin in front of a blue screen, because Spider-Man’s costume is red and blue while the Goblin’s is entirely green. If both had been shot in front of the same colour screen, one character would have been partially erased from the shot. 100% CGI So far I have only discussed animation and composite images, which mostly involve green screening. Fully CGI images are more difficult to make look real, as an entirely virtual environment has to be built. Figure 1 is an example of an explosion of a rendered CGI model and shows some of the ways an artist would manipulate the model to make it react realistically (pyrotechnics). The future of VFX With the emergence and success of new formats like Blu-ray, it is clear that visual effects must keep changing. The next two examples are upcoming technologies which could revolutionise cinema and the way we create a “desired emotional response”. Real-time ray tracing: Earlier I noted that ray tracing frames ahead of time is currently the only way to create graphics as realistic as those in modern films; video games use rasterisation to render images almost instantly, but with comparatively underwhelming results. A new graphics company called Caustic Graphics believes it has uncovered the secret of real-time ray tracing, with a chip that enables your CPU/GPU to shade a ray traced scene with rasterisation-like efficiency.
The new chip offloads ray tracing calculations and then sends the data to your GPU and CPU, enabling your PC to shade a ray traced scene much faster. Whether Caustic’s ray tracing extensions are good enough to match or surpass film ray tracing remains to be seen, but real-time ray tracing is developing at a rapid rate: on June 12 2008, Intel demonstrated Enemy Territory: Quake Wars rendered with ray tracing, running at a basic high-definition resolution. The game operated at 14-29 frames per second on a 16-core Tigerton system running at 2.93 GHz. Emotion capture: Motion capture is often used to create fantastical characters; however, audiences are brought back down to reality when they see unconvincing subtle facial expressions and eye movements. Current technology focuses on creating ever more detailed models and simply pasting the actor’s performance onto the model. This can result in

films like The Polar Express and Beowulf, which, although they won Oscars in technical categories, were criticised for characters suffering from problems like ‘dead eyes’, which almost wastes the actor’s performance during motion capture because the subtleties are lost in rendering. Over the last few years, director James Cameron has developed a motion capture technique (unofficially called emotion capture) that is claimed to transfer 99% of the actor’s performance into the final image. The key ideas and techniques are promising, and their first use in a feature film has been in Avatar. We can expect top directors to start using these new technologies in the future.

Definitions
Ray Tracing - A process for rendering 3-D models by tracing light rays as they would bounce off the model from a light source. This ensures the model reacts to light realistically, taking into account reflection, refraction, depth of field and high-quality shadows. See the MicroScope Ray Tracing article for more detail.
Rasterisation - Rasterising an image means converting it from vector format (polygons) to raster format (pixels), ready to be displayed immediately on a computer screen. Compared with ray tracing, rasterisation is much faster and better suited to video games, which run in real time and need to react instantly to user input. Ray tracing is better for visual effects in films, where still frames can be rendered ahead of time so the effects are ready before the film is released.
Sprite Voxels - A sprite is any 2-D/3-D image that can be incorporated into a larger graphic animation. Voxel stands for VOlume piXEL: a pixel spread over three dimensions rather than the two of a standard pixel. Using a sprite voxel means animating a piece of the frame elsewhere and then incorporating it into a larger scene. Voxels are also used in medical imaging, such as tomography.
Hard Dynamics - The name for visual effects software that makes rigid pieces react according to Newton’s laws, so that basic concepts such as momentum, friction and displacement are applied to the frame.
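The ray tracing definition above rests on one geometric primitive: testing whether a ray hits an object. A minimal sketch of the ray-sphere intersection test, in plain Python with no graphics library (the function name is my own):

```python
import math

# Core primitive of a ray tracer: does a ray (origin o, unit direction d)
# hit a sphere (centre c, radius r)? Solve the quadratic |o + t*d - c|^2 = r^2
# for the distance t along the ray.
def ray_sphere_t(o, d, c, r):
    oc = [o[i] - c[i] for i in range(3)]
    b = 2 * sum(d[i] * oc[i] for i in range(3))
    cc = sum(x * x for x in oc) - r * r
    disc = b * b - 4 * cc            # the 'a' coefficient is 1 for a unit d
    if disc < 0:
        return None                  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None      # nearest hit in front of the origin

# A ray shot down the z-axis from the origin toward a sphere at z = 5:
print(ray_sphere_t((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full ray tracer repeats this test millions of times per frame, spawning new rays at each hit for reflection and refraction, which is why the render times quoted earlier are measured in hours.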

Figure 1: Creating a CGI image (above). Top image: ‘hard dynamics’ and multiple collision objects are used to create geometric damage. Second image: image sequences of explosions and fire were timed and coloured to be used together to self-illuminate sprite voxels, creating a fiery explosion. Third image: the geometric damage incorporated with the timed explosions shows the image at about 60% completion. Bottom image: the final ray traced screenshot. The difference from the image above is that the contact of the missile hit can now be seen, along with the damage it causes on the other side of the ring.



Superconductors: Supermaterial Of The Future? Neeloy Banerjee Onlookers with glazed eyes and slightly befuddled brains are usually somewhat confused by what the physicist may talk about with glee and anticipation. Physicists are, however, on the verge of discovering something with profound everyday applications: a room temperature superconductor. Superconductors are materials that have zero electrical resistance below a specific critical temperature, and their applications are tremendous. The behaviour of superconductors was not well understood until 1933, when Walther Meissner and Robert Ochsenfeld stumbled upon a property of superconductors: they expel magnetic fields. A magnet moving past a conductor induces currents in the conductor; this is the principle on which the electric generator operates. But in a superconductor the induced currents exactly mirror the field that would otherwise have penetrated the superconducting material, causing the magnet to be repelled. This phenomenon is known as strong diamagnetism and is today often referred to as the “Meissner effect”. The Meissner effect is so strong that a magnet can actually be levitated over a superconductive material. The Meissner effect also helps explain how a superconductor can transmit current for an infinitely long time. We already know that a superconductor opposes a magnetic field; it does this by setting up electric currents near its surface. It is the magnetic field of these surface currents that cancels out the applied magnetic field within the bulk of the superconductor. Near the surface, however, within a distance called the ‘London penetration depth’, the magnetic field is not completely cancelled; this region also contains the electric currents whose field cancels the applied magnetic field within the bulk. Each superconducting material has its own characteristic penetration depth.
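The field profile near the surface has a simple exponential form, B(x) = B0·exp(−x/λ), where λ is the London penetration depth. A sketch of what that decay looks like (the value λ = 50 nm is illustrative, not a measured constant for any particular material):

```python
import math

# Magnetic field inside a superconductor decays exponentially over the
# London penetration depth: B(x) = B0 * exp(-x / lam).
# lam = 50 nm is an illustrative figure chosen for this sketch.
def field_inside(B0, x_nm, lam_nm=50.0):
    return B0 * math.exp(-x_nm / lam_nm)

for depth in (0, 50, 150, 500):
    print(depth, field_inside(1.0, depth))
# Within a few penetration depths the applied field is essentially fully
# cancelled: the bulk of the material sees no field (the Meissner effect).
```

At one penetration depth the field has already fallen to about 37% of its surface value; by ten depths it is negligible, which is why the expulsion looks total from outside.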
Because the field expulsion, or cancellation, does not change with time, the currents producing this effect (called persistent currents) do not decay with time. The conductivity can therefore be thought of as infinite: a ‘superconductor’. In the following years a plethora of superconductors were found, many of them alloys containing niobium or silicon, and the critical temperature of each was always slightly higher than the last. A shock to the scientific world came in 1986, when researchers found a ceramic superconductor whose critical temperature was 30 K (kelvin, the SI unit of temperature), the highest recorded at that time. What’s more, ceramic materials are, at room

temperature, insulators. The material behaved in a way contrary to all contemporary predictions, and so began a flurry of activity within the scientific world to test copper-oxide substances (cuprates). As superconductors were found with critical temperatures higher than that of liquid nitrogen (an easily attainable coolant), the world looked on in bewilderment at the speed of the rise in critical temperature. Quantum Explanations The way to imagine what happens in a superconductor is to visualise the electrons pairing up with each other. Usually, having the same charge, electrons repel one another; but at such low temperatures and low energies they encounter an attraction which causes them to form Cooper pairs. These pairs of electrons require only small amounts of energy to be separated into individual electrons, explaining the low temperatures that superconductors require. Their pairing constitutes bosonic behaviour, as both electrons in the pair occupy the same energy level. At such cold temperatures the vibrations of the atoms in the material are diminished; mathematical models refer to the vibrational energies as ‘quantum harmonic oscillators’. Since each atom is bonded to other atoms, they do not all vibrate independently of each other, so the vibrations take place as collective modes throughout the material. The equations defining the energies of these vibrations show that the energy levels are quantised, meaning that the oscillators can only accept or lose discrete amounts of energy, not a continuous spectrum of it. The Cooper pairs of electrons do not have an energy equal to the discrete amount needed by the atoms in the material, and as such pass through them uninhibited. Resistance is created by electron collisions with vibrating atoms, and since those collisions are avoided, the resistance of the material drops to zero.
Practical Applications Working on the assumption that materials with critical temperatures above 273 K (0 °C) will soon exist, the applications of these conductors are wide and varied. One of the first things to be truly revolutionised by superconductors will be transport through magnetic levitation. The strong, easily maintained magnetic field created by superconductors would be able to fully support trains, effectively eliminating friction between rails and wheels. Furthermore, no electrical energy would be wasted as heat, since no energy is dissipated when there is zero resistance. Another useful application of superconductors would be in electricity pylons. Currently, high voltages are used to

keep the current, and thus the energy lost as heat, as low as possible. But if no energy were lost as heat at all, the need for high voltages would be reduced and millions of pounds would be saved in step-up and step-down transformers. Room-temperature superconductors would also cheapen Magnetic Resonance Imaging, removing the need for costly cooling systems and allowing more people to be scanned at a lower price. The Korean Superconductivity Group within KRISS has carried biomagnetic technology a step further with the development of a double-relaxation oscillation SQUID (Superconducting QUantum Interference Device) for use in magnetoencephalography. SQUIDs are capable of sensing a change in a magnetic field over a billion times weaker than the force that moves the needle on a compass. With this technology, the body can be scanned to certain depths without the need for the strong magnetic fields associated with MRI. Finally, the most famous use of superconductors, and an area where they can make a real difference, is in particle accelerators. Originally the Superconducting Super Collider in Texas required them, and more recently the Large Hadron Collider (what is a physics article without a reference to the LHC?). Both of these particle accelerators aim to recreate the moments after the Big Bang, and need strong magnetic fields to accelerate protons to high enough speeds to collide them at sufficient energies. The magnets they use have to be cooled to near absolute zero (0 K) to be superconducting, and a malfunction in the cooling system meant that the magnets were raised above their critical temperature, delaying the whole operation for months while the magnets were repaired. That catastrophe would not have happened if room temperature superconductors had been available.
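The pylon argument above is simple arithmetic: for a fixed delivered power P, the line current is I = P/V, and the heat dissipated in a line of resistance R is I²R, so doubling the voltage quarters the loss. A worked sketch (the power, voltages and resistance are illustrative):

```python
# Why grids use high voltage: for fixed delivered power P, line current is
# I = P / V, and the heat lost in the line's resistance R is P_loss = I^2 * R.
def line_loss(power_w, volts, resistance_ohm):
    current = power_w / volts
    return current ** 2 * resistance_ohm

P = 100e6   # 100 MW delivered (illustrative)
R = 1.0     # illustrative line resistance in ohms
print(line_loss(P, 400_000, R))  # 62500.0 W at 400 kV
print(line_loss(P, 25_000, R))   # 16000000.0 W at 25 kV: 256 times worse
# A zero-resistance (superconducting) line would make both losses zero,
# removing the need for the high voltage in the first place.
```

The loss scales as 1/V², which is exactly why transformers are worth their cost today and why a superconducting grid would make them largely unnecessary.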
In Conclusion To put everything into perspective, physicists talk of the exponential advancement in all branches of physics, from astronomy, through particle-physics to zeolite-materials engineering. However, the rapid advancement in the theory and synthesis of superconductors is something truly remarkable and in the coming years may well be one of the most important areas of research in physics.




Einstein’s Annus Mirabilis Casey Swerner

Albert Einstein is one of the most renowned scientists on the planet. In 1905, he revolutionized modern physics with his theory of special relativity and went on to become a twentieth-century icon: a man whose name and face are synonymous with “genius”. 1905 was a very good year for Albert Einstein indeed. In a frenzy of work, Einstein produced four papers which he himself called “very revolutionary”. Before going into the detail, we must explain why these papers were so important. They came at a time when it was widely accepted that “there is nothing new to be discovered in physics”, as Lord Kelvin told the British Association for the Advancement of Science. Newtonian mechanics was widely accepted as the absolute truth, and physics was regarded as finished business. Yet it was Albert Einstein, the patent officer who had been unable to get a teaching post at any university in Europe, who had the imagination to revolutionize our understanding of physics.

“On a Heuristic Viewpoint Concerning the Production and Transformation of Light” This is the paper that gave Einstein his only Nobel Prize. It can be seen as one of the fundamental starting points for quantum theory, one of the most important aspects of theoretical physics to date. Ironically, quantum theory is one that Einstein became deeply skeptical about in his later years, which he spent trying to dispute until his death. Light and other electromagnetic radiation, such as radio waves, are obviously waves, or so everyone thought. Maxwell and Lorentz had firmly established the wave nature of electromagnetic radiation, and numerous experiments on the diffraction and scattering of light had confirmed it. Imagine the shock when, in 1905, Einstein argued that under certain circumstances light behaves not as continuous waves but as discontinuous, individual particles. Although Einstein was not the first to break the energy of light into packets, he was the first to take this seriously and to realize the full implications of doing so. (It is important to note here that Einstein’s “light quanta” and “photons” are the same entity, and the terms may be used interchangeably.) In essence, Einstein explained that light may be seen as a wave and as separate particles at the same time, known today as wave-particle duality. How? When a metallic surface is exposed to electromagnetic radiation above a certain threshold frequency, the light is absorbed and electrons are emitted.

Figure 1: Diagram of Lenard’s results, where light strikes the surface of a metal, causing electrons to be released. In an experiment carried out by Philipp Lenard, cathode rays (electrons) are emitted when single-coloured light beams hit a metal. When he increased the intensity (brightness), he observed that the emitted electrons did not speed up; rather, a greater number of electrons were freed at the same speed. Further experiments showed that it was the frequency (i.e. infra-red, red, violet, ultraviolet) that gave more energy to the electrons. This was at odds with the wave theory of light: if light behaved as a continuous wave, then an increase in intensity should give the electrons extra speed. Einstein took this data and hypothesized that light “consists of a finite number of quanta”. Far from relying on theory alone, Einstein explored whether photons behaved much like a gas, which we know is composed of particles. Using various statistical formulas (e.g. Boltzmann’s entropy formula), he found that photons behaved similarly to a gas. These findings agreed with Lenard’s experiments, and thus the idea of the photon was born. It is important to note that he did not do away with wave theory, which he felt was useful as it described light as the statistical average of countless quanta. This duality is simultaneous, not an ‘either/or’ situation. In conclusion, the photon can be considered as the energy released when an electron that has jumped from a low-energy orbital to a high-energy orbital returns to its original position (the photon’s energy equals the difference in energy of the electron), and depending on the magnitude of the change it will be emitted somewhere within the electromagnetic spectrum.

“A New Measurement of Molecular Dimensions” & “On the Motion of Small Particles Suspended in a Stationary Liquid” While this paper may seem, on the face of it, rather mundane, it lays down the principles of Brownian motion and thus allowed other scientists to set about proving the existence of the atom, which at the time had not been demonstrated conclusively.

By giving the subject of Brownian motion a more intense study than had ever been undertaken, Einstein was able to calculate the number of water molecules in a given volume, as well as providing statistical and mathematical formulas for the motion. His theory was based on the assumption that small particles (such as pollen grains) moving about in a liquid are pushed about by much, much smaller molecules from every direction. Normally there are roughly the same number of molecules on each side of the pollen grain, all pushing and bumping in random directions, so such movements should tend to cancel each other out most of the time. Because it is truly a random process, however, the grain will sometimes be pushed a little more in one direction, so it moves that way, and later be pushed in a different direction, so it moves another way. Einstein hypothesized that it was not necessary to measure the velocity of the particle throughout its journey; all that was needed was to measure the whole distance the particle moved (and thus its average velocity). This allowed other scientists to prove the existence of atoms empirically and to determine Avogadro’s number.
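Einstein's key prediction was that the mean squared displacement of such a particle grows in proportion to elapsed time, not to its speed. A toy simulation illustrates this (the unit step size, particle count and seed are arbitrary choices for the sketch):

```python
import random

# Toy 1-D Brownian motion: each particle takes unit steps in random
# directions. Einstein's insight: the *mean squared* displacement grows
# linearly with the number of steps, i.e. with elapsed time.
def mean_squared_displacement(n_steps, n_particles=5000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_particles):
        x = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += x * x
    return total / n_particles

for steps in (100, 400):
    print(steps, mean_squared_displacement(steps))
# MSD comes out near 100 after 100 steps and near 400 after 400 steps:
# quadruple the time, quadruple the squared distance wandered.
```

Measuring this overall spread, rather than chasing the particle's instantaneous velocity, is exactly the simplification Einstein proposed, and it is what Perrin later exploited to pin down Avogadro's number.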

“On the Electrodynamics of Moving Bodies” and “Does the Inertia of a Body Depend upon Its Energy Content?” (Due to the exceedingly mathematical nature of these papers, the article will continue by using analogies to explain the underlying principles rather than the proofs themselves.) Special Relativity The theory of special relativity is concerned solely with objects moving in a straight line at a constant speed. The first postulate states that the laws of physics are unchanged regardless of an object’s relative motion. The second states that the speed of light (c) does not change regardless of the relative motion of an object. It should also be noted that the theory holds that motion is relative to another frame, and that there is no absolute still reference frame. Thus the time lapse between two events is not invariant from one observer to another but depends on the relative speeds of the observers’ reference frames; time can be dilated and is not absolute.



Therefore, we see that two events that occur simultaneously in different places in one frame of reference may occur at different times in another frame of reference. For example, if you were on a ship (at constant velocity) and dropped a ball from the mast, you would see it fall directly downwards. However, a bystander from the beach would see both the ball and the ship move forward until the ball hits the deck.

Time does not slow as speed increases; it slows only relative to another reference point. Objects do not shorten as speed increases, only relative to another reference point. Only crossing the speed-of-light barrier, from either a faster or a slower speed, is disallowed; speeds greater than light are not in themselves ruled out. E = mc² (the mass-energy equivalence: energy = mass × speed of light squared). This formula was derived from KE = ½mv² (kinetic energy = ½ × mass × velocity squared) applied to the laws of special relativity. By considering relative motion, and applying the hypothesis that all mass can (theoretically, completely) be converted to energy, one arrives at Einstein’s most famous and elegant formula.

Figure 2 (a) Shows the ball moving relative to the crewmember’s reference frame.

Figure 2 (b) Shows the ball moving relative to the bystander’s reference frame. Secondly, the theory shows that a body cannot move from a state of motion below the speed of light to one above it. Although in Newtonian mechanics the resultant speed of an object can be determined by simple addition, this is not true in special relativity, and thus acceleration does not break the first postulate. Also, fast-moving objects will begin to distort and appear shorter to an observer, due to Terrell rotation. Therefore, if one were on a spaceship travelling very near the speed of light, then from a stationary bystander’s point of view the spaceship would become distorted in the direction of movement (i.e. it would squash horizontally); however, to the pilot of the ship these apparent effects would not be noticeable, and he would see himself as normal sized, because he is moving at 0 km/h relative to himself. According to special relativity, both viewpoints are correct. Whilst there are many strange effects one can see from relativity, there are some facts to bear in mind:

This has a bearing on our knowledge of the speed of light, because when one applies this formula it becomes apparent that the energy required to keep accelerating an object increases with velocity. As an object accelerates close to the speed of light, relativistic effects begin to dominate: adding more energy to the object no longer makes it go appreciably faster, until an infinite amount of energy would be required. The added energy instead appears as mass, as observed from the rest frame; thus we say that the observed mass of the object goes up with increased velocity. Mathematically, by extending from force = mass × acceleration, one ends up with the following equation:

m = m0 / √(1 − v²/c²)

From this we see that when v = c (velocity = speed of light) the mass relative to the rest frame (rest mass m0) would have to be infinite, which is impossible. The other big impact of this equation is on nuclear physics. Einstein himself explained: ‘It followed that... mass and energy are both but different manifestations of the same thing’. Since E = mc², the energy equivalent of a very small mass can be very large indeed. Thus in fission, when atoms are split, photons (in the form of gamma radiation) are emitted, and a very small sample will give off a huge amount of energy. As one can see, Einstein’s Annus Mirabilis papers can be called ‘revolutionary’ because they paved the way not only to a new perspective on physics, but created the conditions for a whole century’s worth of work and discoveries in the physical world, from Einstein’s later general relativity to quantum mechanics.
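The mass formula above is easy to evaluate numerically, and doing so shows both limits at once: almost no effect at everyday speeds, and runaway growth as v approaches c (the speeds below are illustrative fractions of c):

```python
import math

# Relativistic mass from the formula in the text: m = m0 / sqrt(1 - v^2/c^2).
# As v approaches c the denominator approaches 0, so m grows without bound.
def relativistic_mass(m0, v_over_c):
    if v_over_c >= 1:
        raise ValueError("v must be below c: the mass would be infinite")
    return m0 / math.sqrt(1 - v_over_c ** 2)

for beta in (0.1, 0.9, 0.99, 0.999):
    print(beta, relativistic_mass(1.0, beta))
# At 0.1c the mass is barely 0.5% above rest mass; at 0.999c it is
# roughly 22 times the rest mass, and it diverges as v approaches c.
```

This divergence is the quantitative form of the statement that no massive body can be accelerated up to the speed of light.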




Quantum Of Solace: Heisenberg’s Uncertainty Principle Neeloy Banerjee “How can something come from nothing?” was the question posed by the mathematician and philosopher René Descartes when contemplating the origin of the universe. Indeed, it seems rationally illogical to assume that mass appeared from a vacuum, and for years philosophers of science wrestled with this question, struggling to make headway with it.

Planck’s Constant Before we delve into the uncertain realms of quantum physics, let us first ground ourselves in the necessary physics ‘lingo’. Firstly, h represents Planck’s constant, roughly equal to 6.63 x 10^-34 joule-seconds (Js). The history behind the discovery of this number would fill another article, but in short, whilst studying blackbody radiation Planck noticed discrete energy levels that corresponded to the formula E=hf, where E is the energy of the emission, f is the frequency of the light produced, and h, this new constant, linked the two. The units of h, joule-seconds, make for an interesting digression. The joule-second is a unit of action, which is a novel concept because it does not relate to anything physical (try thinking of something tangible that requires you to multiply energy and time). Instead it remains absolutely constant and has the same size for all observers in space and time. According to Einstein’s special theory of relativity, any object in space – for example a line – will look different to observers travelling at different speeds relative to it. This line can be thought of as existing in four dimensions: as it moves ‘through’ time it traces out a four-dimensional surface, a hyper-rectangle whose height is the length of the line and whose breadth is the amount of time that has passed. The area of the rectangle (length x time) comes out the same for all observers watching it, even though the lengths and times may differ. In the same way that this metre-second is absolute in four dimensions, so too is the action. Diversions aside, the next thing the physicist requires is an understanding of ħ. ħ = h/2π (roughly equal to 10^-34 Js), and it is an important number because of its place in the Uncertainty Principle.

Heisenberg’s Uncertainty Principle This principle states that if you measure two properties, p and q, of, say, an electron – take for example momentum and position – then the uncertainties in those measurements are Δp and Δq. Heisenberg proved that Δp.Δq ≥ ħ. Now, since ħ is such a tiny number, for most experiments this is of little consequence. But when performing quantum experiments, this inequality matters a great deal. Scientists were horrified at what they took to be an accusation by Heisenberg that their equipment and techniques were not good enough for the experiments. In response, Heisenberg proposed a thought experiment.

Heisenberg’s Microscope In viewing any image of any object, we must fire photons at the object. Photons have no rest mass and only minuscule amounts of energy, so when interacting with a larger object they have little effect. But when looking for the precise position of an electron, firing a photon at it distorts the very position we were trying to measure, not to mention changing the electron’s momentum in the process.

Implications of the Uncertainty Principle The implications of this principle are tremendous. Take any two properties of an electron that, when multiplied, have the units Js (joule-seconds), and you will know that you cannot measure them both exactly. As mentioned above, the momentum-position dilemma arises: the more precisely you measure, say, the momentum, the less precisely you can measure the position. This has profound implications, in that even theoretically we cannot measure, and therefore know, the position and momentum of all the particles in the universe. Since we need both position and momentum to predict movement, this means that we cannot predict a particle’s exact future behaviour. Another two properties that we can measure in experiments are energy (E) and time (t). Since they multiply to give the units of action (Js), they can be incorporated into the Uncertainty Principle: ΔE.Δt ≥ ħ. Looking at this more carefully reveals a startling picture, which we can see in two ways. Firstly, let us assume we are trying to measure the energy as precisely as possible. The more precise we become, the less precisely we can pin down the time. The consequence is that we do not know whether, during our experiment, time has flowed uniformly. A positron travelling forwards in time is equivalent to an electron travelling backwards in time, and this kind of time flexibility could give rise to particles travelling backwards in time during the experiments without our knowing anything about it. Secondly, suppose instead we want to measure the time as precisely as possible. This time ΔE is much larger, which theoretically means that the energy could fluctuate within the time period specified. And since Einstein showed that energy is equivalent to mass, there could be a fluctuation of mass in that time period. To expound this point further, take an ‘empty’ region of vacuum and measure a time period very precisely (say Δt ~ 10^-44 s, requiring ΔE ~ 10^10 J). In that time period, potentially, energy, and thus mass, has been created from conditions without energy in the first place! “How can something come from nothing?” asked Descartes, many years ago. Using Heisenberg’s work, scientists today have shown that it can.
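The momentum-position trade-off can be made concrete with numbers. This sketch uses the article's form of the inequality, Δp·Δq ≥ ħ (modern statements often quote ħ/2, but the orders of magnitude are the same; the confinement scales and the cricket-ball mass are illustrative):

```python
# Minimum momentum uncertainty implied by dp >= hbar / dx, using the
# form of the inequality quoted above.
HBAR = 1.055e-34  # the reduced Planck constant, in J*s

def min_dp(dx_metres):
    return HBAR / dx_metres

atom = 1e-10   # confine an electron to roughly an atom's width
ball = 1e-3    # locate a cricket ball to within a millimetre

m_electron, m_ball = 9.11e-31, 0.16   # masses in kg
print(min_dp(atom) / m_electron)   # ~1e6 m/s velocity uncertainty: enormous
print(min_dp(ball) / m_ball)       # ~1e-30 m/s: utterly unobservable
```

The same inequality that is invisible for the cricket ball forces a velocity uncertainty of around a thousand kilometres per second on a confined electron, which is why the principle dominates quantum experiments and nothing else.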



Atmosphere: Earth's Great Defence
Manesh Mistry

"Captain Joseph W. Kittinger, Jr. of the US Air Force is the man who fell to Earth and lived. Only twenty miles above our heads is an appallingly hostile environment that would freeze us, and burn us and boil us away. And yet our enfolding layers of air protect us so completely that we don't even realize the dangers. This is the message from Kittinger's flight, and from every one of the pioneers who have sought to understand our atmosphere. We don't just live in the air. We live because of it." So writes Gabrielle Walker in "An Ocean of Air: A Natural History of the Atmosphere". Engrossed by the prologue of this book, which told the story of Kittinger's fall from the heavens, I decided to read on. Kittinger's calculated risk was only one of a series of experiments carried out over several centuries as human beings attempted to understand the ocean of air that surrounds us and supports our existence.

Elements of the Air

Robert Boyle first carried out the famous experiment showing that sound cannot travel through a vacuum, placing a watch in a jar and pumping out the air (the hands keep moving, but the ticking fades as the air molecules are removed). Boyle also observed that a flame needs air to burn and that life cannot survive without it. These were the first tentative steps towards understanding that our atmosphere has constituent parts, rather than being a featureless abyss. Joseph Priestley is the man credited with first discovering oxygen, in 1774. Priestley used a magnifying glass to focus the sun's rays on a sample of the compound mercury(II) oxide:

2HgO(s) → 2Hg(l) + O2(g)

He discovered that heating this compound produced a gas in which a candle would burn more brightly. This gas was oxygen. Although Priestley could not fully interpret these results with the scientific knowledge of the time, his work was later taken up by Antoine Lavoisier. Lavoisier showed that the mass gained by lead when it forms its oxide is equal to the mass of air lost, and that the remaining air could not support further burning; the part which had reacted was therefore something different. Lavoisier had discovered that common air was not a single, indivisible element. He also demonstrated the role of oxygen in the rusting of metals and in animal and plant respiration, conducting experiments which showed that respiration is essentially a slow combustion of organic material using absorbed oxygen.

Priestley and Lavoisier had also inadvertently isolated the major ingredient of our atmosphere: nitrogen. Priestley correctly remarked that we would all be "living out too fast" if only oxygen were present in the air. Oxygen and nitrogen make up the vast majority of the air we breathe, but another gas was yet to be discovered, one responsible for every scrap of food on Earth. In the 1750s, Joseph Black stumbled across a gas he dubbed "fixed air", which we know as carbon dioxide. He found that limestone reacted with acids to yield this gas, which was denser than air and supported neither life nor flames. He also demonstrated that the gas was produced by animal respiration and by burning charcoal, and consumed in the respiration of glucose. These reactions are shown below:

2H+(aq) + CO3^2-(aq) → H2O(l) + CO2(g)
C(s) + O2(g) → CO2(g)
6O2(g) + C6H12O6(s) → 6H2O(l) + 6CO2(g)

Carbon dioxide is both a crucial and a dangerous component of the air. Along with oxygen and nitrogen, it helps transform the lump of rock that is our Earth into a living, breathing world. We need it for food and warmth, but we abuse it at our peril. At no point in the last 400,000 years have CO2 levels been anywhere near where they are today (Fig. 1).

John Tyndall explained the heat in the Earth's atmosphere in terms of the capacities of the various gases in the air to absorb infrared radiation. Tyndall was the first to show that the Earth's atmosphere has a greenhouse effect, and that CO2 plays a large part in it. The sun's energy arrives at the ground mostly as visible light and returns upward mostly as infrared, and he showed that water vapour, CO2 and some other gases absorb infrared strongly, hindering it from radiating back out to space. Temperature and CO2 levels are clearly linked, as shown by Fig. 1.

Breezing past

There is the story of Christopher Columbus' voyage to the Americas and his equally significant (and inadvertent) discovery of the Trade Winds. These were only truly explained some 350 years later by William Ferrel, who demonstrated that rising warm air in the northern hemisphere, as it rotates under the Coriolis effect, pulls in air from more southerly, warmer regions and transports it poleward. It is this rotation which creates the complex curvatures in the frontal systems separating the cooler Arctic air to the north from the warmer continental tropical air to the south. Finally there is the story of the great aviator Wiley Post, who discovered what we now call jet streams: fast-flowing rivers of air which circle the world in both hemispheres. The global winds have great significance as redistributors of both heat and water, each indispensable to life; without their effects we would not survive.

Fragile Cradle

So far, we have seen how air has made life on Earth possible and allowed it to flourish. However, our atmosphere also serves to protect us against the hazards of space.

Figure 1: CO2 levels (ppm) and temperature change over the last 400,000 years



Above the clouds, layer after layer of air protects us from space. The very first of these protective layers was nearly destroyed before we discovered it.

Ozone

W.M. Hartley was curious about a recently discovered gas, ozone (O3). A puzzling issue of the time was that ultraviolet (UV) rays with wavelengths below 293 nm did not arrive at the surface of the Earth, while visible light and UV rays with wavelengths between 400 and 293 nm did. Hartley noticed that ozone tends to absorb short-wavelength UV, and showed that it was a layer of ozone which prevented these high-energy rays from reaching the ground (energy is proportional to frequency, E = hf, and for electromagnetic waves a short wavelength means a high frequency). If these waves reached the ground they would weaken the human immune system, cause skin cancer and eye cataracts, and destroy algae which are vital to food chains.

In 1930, General Motors charged Thomas Midgley with developing a non-toxic and safe refrigerant for household appliances. He discovered dichlorodifluoromethane, a chlorofluorocarbon (CFC), which he dubbed Freon. CFCs replaced the various toxic or explosive substances previously used as the working fluid in heat pumps and refrigerators, and were also used as propellants in aerosol spray cans and asthma inhalers. Until the 1970s, CFCs were seen as a practical solution to the refrigeration puzzle, but this view was destroyed by the work of several scientists who began to unravel what happened to CFCs as they rose through our atmosphere. The CFCs rise harmlessly until they reach the altitude of the ozone layer, where they are exposed to the very high-energy UV rays that the ozone is trapping. Here a chlorine atom, released by a UV ray, causes devastation: chlorine radicals form which deplete the ozone layer by the following overall reaction:

2O3(g) → 3O2(g)
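Hartley's cut-off can be made concrete with E = hf = hc/λ. A short sketch comparing photon energies either side of 293 nm, using the wavelengths quoted above:

```python
# E = hf = hc/λ: photon energies around the 293 nm cut-off that Hartley
# attributed to absorption by the ozone layer.
H = 6.626_070_15e-34          # Planck constant, J s
C = 2.997_924_58e8            # speed of light, m/s
EV = 1.602_176_634e-19        # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Photon energy in electronvolts for a given wavelength in metres."""
    return H * C / wavelength_m / EV

# Visible edge, Hartley cut-off, and a harder UV wavelength for contrast.
for nm in (400, 293, 250):
    print(f"{nm} nm -> {photon_energy_ev(nm * 1e-9):.2f} eV")
```

Shorter wavelength, higher frequency, higher photon energy: the rays ozone blocks are precisely the most energetic ones.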

According to complex calculations, one chlorine atom could destroy 100,000 molecules of ozone. A worldwide ban on CFCs was demanded by the two scientists, Molina and Rowland, who had made the shocking calculation that 30% of the ozone layer would be depleted by 2050. The industry, however, was unlikely to fold so easily, and it took a hole in the ozone layer the size of the USA, gradually emerging over Antarctica, to force a change in the law. This hole was discovered over a decade after Molina and Rowland's work was made public, and it was the final nail in the coffin for those backing CFCs. A worldwide ban on CFCs was not enforced until as late as 1996.

Up, up and away!

Some 100 km above the Earth's surface, the air crackles with current in the ionosphere.

This highly electrical layer soaks up rays from space so deadly that life could not exist below without it. Without realising it, Marconi took advantage of this layer of air to transmit his first messages across the Atlantic Ocean. Marconi's waves allowed the world to learn the fate of the Titanic; without them, everyone on board would have died. The waves were explained by Heaviside, who applied the fact that wireless waves are reflected by anything that conducts electricity. Although the air is very thin at the height of the ionosphere, the few molecules of gas that do exist there can be ionised by cosmic rays, leaving a spray of positive and negative shards which allows the air to become "electrical". This reflecting mirror in the sky is now called the Heaviside layer in honour of its discoverer, but its secrets were unveiled by Edward Appleton. Appleton had observed that the strength of a radio signal from a transmitter was constant during the day but varied during the night. This led him to believe that two radio signals were being received: one travelling along the ground, and another reflected by a layer in the upper atmosphere. The variation in the strength of the overall signal resulted from the interference pattern of the two. To prove his theory, Appleton used the BBC radio broadcast transmitter at Bournemouth, which transmitted a signal towards the upper reaches of the atmosphere. He received the radio signals near Cambridge, proving they were being reflected. By making a periodic change to the frequency of the broadcast signal, he was able to measure the time taken for the signals to travel to the layers of the upper atmosphere and back, and in this way he calculated that the height of the reflecting layer was 60 miles above the ground. The final protective layer of the atmosphere is driven by both the electricity of the ionosphere and the magnetism above it.
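Appleton's echo-timing method boils down to one formula: the layer's height is half the round-trip distance travelled at the speed of light, h = ct/2. A minimal sketch; the round-trip time below is back-calculated from his 60-mile result rather than a figure from the article:

```python
# Appleton's timing argument: a radio pulse reflected by the Heaviside
# layer returns after travelling up and back down, so height = c * t / 2.
C = 2.997_924_58e8  # speed of light, m/s

def layer_height_km(round_trip_s):
    """Height of the reflecting layer (km) from the echo's round-trip time."""
    return C * round_trip_s / 2 / 1000

# A 60-mile (~96.6 km) layer implies a round trip of roughly 0.64 ms.
t = 2 * 96.6e3 / C
print(f"round trip {t * 1e3:.2f} ms -> height {layer_height_km(t):.1f} km")
```

Sub-millisecond timing was well within reach of 1920s electronics, which is what made the measurement practical.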
Thousands of kilometres above the Earth's surface are sweeping lines of force from the planet's magnetic field, which protect us from the radiation of space. This layer is also responsible for the aurora borealis (Northern Lights) and ties into the complex physics of the Van Allen belts, which ultimately shield us against dangerous radiation from space. If you cannot believe that air too thin for us to breathe is strong enough to defend the planet, then may we remind you of the events of October 2003. Don't remember it? You're lucky you don't. If you did remember it, it wouldn't be for very long, as you, and the rest of organic life, would have been immediately consumed by the equivalent of 5,000 suns' worth of x-rays, without our atmosphere's blanket of protection. "In October 2003, a series of explosions rocked the outer surface of the sun. A massive flash fried Earth with x-rays

equivalent to 5,000 suns. A slingshot of plasma (ionised gas) barrelled towards us at 2 million miles an hour. The most massive flare since records began and one of the biggest radioactive maelstroms in history together met a far more formidable foe. They each arrived, and then, one by one, they simply bounced off… thin air.”



The Path Towards Finding The Magnetic Monopole
Wajid Malik

Magnetism is one of the most commonly known forces on the planet; its discovery dates back many thousands of years. Magnets are found in a wide variety of everyday devices, from electric bells to particle accelerators, but all of these exist as the classical magnetic dipole. The concept of a magnetic monopole has been extensively theorised; however, these rare particles have yet to be discovered.

The Fundamental Forces

There are four fundamental interactions of nature (ways in which particles interact with each other): the strong interaction, the weak interaction, gravitation and electromagnetism. Electromagnetism concerns the electromagnetic field, in which changing electric and magnetic fields induce one another. From this it has been deduced that electricity and magnetism are interlinked, which implies that properties exhibited by one should be apparent in the other. This causes a problem when it comes to the magnetic monopole: single electric charges can easily be isolated and are abundant in the world around us (take the proton or the electron, for example), but magnetic poles are only ever found in north-south pairs. This fuelled the search for the magnetic monopole: a free-moving magnetic 'charge' that does not cancel out with an opposite pole.

Figure 1 (above) Diagram showing the Electric (E) and magnetic (B) field lines generated by monopoles and by their motion with velocity v. (a) Electric monopole with electric charge e. (b) Magnetic monopole with magnetic charge g.

The importance of their existence

One of the largest tasks in science is describing how the universe began. The Big Bang is the generally accepted cosmological model, but it needs to be developed and built on by other theories to explain fully how it all came about. One area of theoretical physics currently being researched is that of unified field theories. The idea has been around for some time, but there is no accepted theory to date. Magnetic monopoles become very important when it comes to Grand Unified Theories. These theories predict that at very high energies (such as those around the time of the Big Bang), three of the four aforementioned fundamental forces (the electromagnetic, strong nuclear and weak nuclear forces) were combined into a single field; it is hypothesised that from there they splintered off into the separate forces. However, a necessary factor in making this work is the existence of the magnetic monopole; this type, known as the GUT monopole, is thought to be very rare and very massive, making both its discovery and its creation (in a particle accelerator) very unlikely. The existence of monopoles also has implications for superstring theory and would help to develop it further.

Observation of the monopole

The numerous failed attempts to detect the magnetic monopole made its discovery seem ever more improbable; however, on 3rd September 2009 scientists from the Helmholtz Centre Berlin claimed to have observed magnetic monopoles for the first time. Their experiment used neutron scattering to determine the orientation of the Dirac strings. Neutron scattering is the process of using neutron radiation (a stream of neutrons) to interact with another substance, in this case a crystal of dysprosium titanate (Dy2Ti2O7). This is a material known as a "spin ice", which has essentially the same atomic arrangement as water ice but with particular magnetic properties. The crystal was cooled to between 0.6 and 2 kelvin and the neutron scattering experiment was begun. Since neutrons carry a magnetic moment, they interacted with the Dirac strings and scattered in a way that reflected the arrangement of these strings in the crystal. Using a strong magnetic field, the orientation of the string network could be manipulated to favour the observation of a monopole, and eventually a field like that of a monopole was observed at the ends of the strings. The observations made during the experiment show properties resembling monopoles; however, the actual monopole particle has not been isolated in the way that, say, an electron can be. The results do provide substantial evidence that monopoles exist, but one thing is for certain: the search is not yet over.

Figure 2 (below) Schematic diagram of the neutron scattering experiment: neutrons are fired towards the sample, and when a magnetic field is applied the Dirac strings align against the field with magnetic monopoles at their ends. The neutrons scatter from the strings, providing data which show us the strings' properties.
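A side note on why neutron scattering can resolve this structure at all: a neutron's de Broglie wavelength, λ = h/√(2mE), is comparable to atomic spacings. The neutron energy below is my assumption (cold neutrons of a few meV are typical in such experiments); the article does not state the value used at the Helmholtz Centre:

```python
# De Broglie wavelength of a neutron from its kinetic energy: atomic-scale
# wavelengths are what make neutrons a usable probe of crystal structure.
from math import sqrt

H = 6.626_070_15e-34      # Planck constant, J s
M_N = 1.674_927_498e-27   # neutron mass, kg
EV = 1.602_176_634e-19    # joules per electronvolt

def de_broglie_angstrom(energy_ev):
    """Neutron de Broglie wavelength (angstroms) from kinetic energy (eV)."""
    p = sqrt(2 * M_N * energy_ev * EV)   # non-relativistic momentum, kg m/s
    return H / p * 1e10

# Assumed cold-neutron energy of ~5 meV gives a few-angstrom wavelength,
# the same scale as the spacings in the Dy2Ti2O7 lattice.
print(f"{de_broglie_angstrom(5e-3):.1f} angstroms")
```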




Chemiluminescence
Wei-Ying Chen

Like anything that lights up, the science behind chemiluminescence comes under the natural phenomenon of luminescence. In the simplest sense it is a transfer of energy from chemical, thermal or electrical energy into radiant energy (energy carried as electromagnetic radiation). If enough energy is delivered, the electrons of a substance's atoms can be excited to a higher energy state, and when they fall back to their original state the energy is released as photons of light, each with energy proportional to its frequency. In some cases this frequency is within the range of visible light and we see the substance glow a certain colour. In chemiluminescence, the atoms are excited when chemical compounds react exothermically, releasing a large amount of energy.

Luminol Reaction

The most basic reaction that exhibits chemiluminescence is that of luminol and hydrogen peroxide:

luminol + H2O2 → excited 3-APA (3-aminophthalic acid) → 3-APA + light

Luminol is first activated in a solution of hydrogen peroxide and a hydroxide salt in water. In the presence of a catalyst such as an iron compound, the hydrogen peroxide decomposes:

2H2O2 → O2 + 2H2O

A dianion is formed when luminol reacts with the hydroxide; this then reacts with the oxygen produced from the hydrogen peroxide. The product is an organic peroxide that is highly unstable and immediately decomposes to produce

3-APA in an excited state. When the electrons then fall back to the ground state, photons are emitted and the solution appears to light up. This is the process you often see on CSI when investigators find traces of blood: they spray a solution of luminol and hydrogen peroxide onto the floor, and the iron present in the haemoglobin of blood is enough to catalyse the reaction and make the blood traces light up. Glow sticks use a similar principle to the luminol reaction; the main difference is that the decomposing chemical doesn't itself end up in an excited state, but instead gives out energy to excite a dye. This has many advantages, the main one being that the dye can be chosen to emit a particular colour. The glow-stick reaction is actually the most efficient chemiluminescent reaction known, with a quantum efficiency of up to 15%. In theory, one photon of light should be emitted for each reaction; in reality, non-enzymatic reactions seldom exceed 1% efficiency. Other factors such as pH and temperature also affect the quality and intensity of the light produced, though the reasons why are too complicated to go into here.

Uses of chemiluminescent chemicals

Unfortunately, we do not yet completely understand why chemiluminescence occurs. However, scientists have discovered a green fluorescent protein (GFP) from jellyfish. The protein is able to attach to other molecular structures, allowing us to see various functions of the cell and how proteins interact with each other. In more modern times, the need to

measure the precise concentrations of solutions is growing, so it is no surprise that chemiluminescence is beginning to play its part in measuring quantities. An immunoassay is a biochemical test that measures the concentration of a substance in a biological liquid, and with the technical demands of these assay markets rising, highly sensitive detection technologies are vital. Chemiluminescent processes are sufficiently sensitive, not prone to interference and easy to use, so they seem perfect for these applications. The measurement of light intensity is fairly simple, and since there is barely any background light from the sample, a large range of measurements is possible even with simple instrumentation. However, there is often some difficulty in choosing the correct detection reaction. Scientists are currently covalently labelling one of the substances being analysed with a chemiluminescent chemical; by triggering the label to undergo the light-emitting reaction, a signal is produced for detecting the substance. There are not many suitable chemiluminescent compounds in existence, which is why research into these chemicals is a growing field; at the moment, scientists are trying to find reactions which demonstrate an adequate chemiluminescence quantum yield for good efficiency.
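The quantum-yield figures quoted earlier (one photon per reaction in theory, up to 15% for glow-stick chemistry, under 1% for typical non-enzymatic reactions) translate directly into expected photon counts. A rough sketch; the amount of reactant is an illustrative assumption, not a figure from the article:

```python
# Order-of-magnitude photon counts from a chemiluminescent reaction at a
# given quantum yield (photons emitted per reacting molecule).
N_A = 6.022_140_76e23  # Avogadro's number, molecules per mole

def photons_emitted(moles_reacted, quantum_yield):
    """Expected photons: one per reaction in theory, scaled by the yield."""
    return moles_reacted * N_A * quantum_yield

moles = 1e-3  # assumed: a millimole of reactant, for illustration only
ideal = photons_emitted(moles, 1.0)     # theoretical limit
glow = photons_emitted(moles, 0.15)     # efficient glow-stick chemistry
luminol = photons_emitted(moles, 0.01)  # typical non-enzymatic reaction
print(f"{ideal:.1e} vs {glow:.1e} vs {luminol:.1e} photons")
```

Even at 1% yield the photon count is enormous, which is why such dim-sounding efficiencies still give an easily detectable signal.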



Review




Scope 2009/10 Review

The Rowboat's Keeling
Ameya Tripathi

On a summery lunchtime two Julys ago, in a drowsy classroom, I attended a presentation of the nascent Haberdashers Geographic Society. It was being addressed by a formidable speaker: Mark Maslin. Like many of his colleagues, Professor Mark Maslin is possessed by global warming. Though he is an energetic and youthful man, his dark hair has become speckled with slivers of silver, the like of which can only have been brought about by examining portentous climate models. Nevertheless, he speaks with a cadence and clarity that evince a sharp mind. Maslin does far more than speak to students; as director of the Environment Institute at University College London, he is one of the leading climatologists in the world. A prolific writer in journals such as Science and Nature, Maslin has long been a passionate advocate for changing our ways. In an effort to advance the issue, he attends a massive annual convention at that Mecca of conference centres, Las Vegas. This is where a whole group of scientists converge and every year declare to the whole world - politicians, sceptical oil barons, grumbling developing countries and eager ecomentalists - roughly the same message: firstly, there is a consensus that global warming is caused by humans; and secondly, that we're about to enter a geological phase called the 'Anthropocene' and everything is going to fall to pieces; hurricanes, flooding, droughts, heatwaves (not just the 29° ones you scoff at), and the door to Bedlam shall generally be poked ajar, swung off its hinges, and burned as firewood, belching yet more noxious vapours into the atmosphere. The New Scientist ran a rather spooky and alarmist feature furthering this message, entitled 'Earth 2099: Population Crashes, Mass Migration, Vast New Deserts, Cities Abandoned'. Perhaps the most foreboding part of that issue's front cover is the next subtitle: 'How to survive the century'.
While somewhat fatalistic, it increasingly represents the view of many scientists, such as the fondly nicknamed 'grandfather' of global warming, James Hansen, head of the NASA Goddard Institute for Space Studies. Recently, he was profiled by the eminent, authoritative and lucid writer Elizabeth Kolbert (author of the absolute must-read Field Notes From A Catastrophe) in an aptly titled piece, 'The Catastrophist'. She and Hansen form part of a growing number of observers who believe the situation to be close to irrevocable. She writes in Field Notes the following about our ability to prevent warming:

"Perovich offered a comparison that he had heard from a glaciologist friend. The friend likened the climate system to a rowboat: 'You can tip it and just go back. And then you tip it and you get to the other stable state, which is upside down'."

Just in case you thought it was so close to irrevocable as makes no difference, there are some scientists who are slightly less gloomy, perhaps because they are not as battle-scarred and wizened about the ways of industry and politics, perhaps because they are more audacious. One of those people was Professor Maslin, who seemed intent on not toeing the 'we're all going to die' line. The reason these scientists are quieter isn't that they're afraid, or apathetic; rather, it is that they are all far too busy arguing amongst themselves. There is a great debate raging between and within the worlds of science and politics over what type of mitigation to pursue: whether geoengineering, or a type of mitigation which I shall label anthropogenic engineering, is the best way forward, or a combination of the two, or an entirely different concept, adaptation. These are fairly exact concepts, so let us define them accurately. Adaptation, in the lexicon of climatology, is all about survival. Tsunami shelters, sunscreen that can block ultraviolet rays, extraterrestrial colonies, renovating the Thames Barrier, evacuations - all the last-resort stuff we do when we realise we can't actually stop it; the rowboat has capsized. 'Anthropogenic engineering' is about changing human behaviour, i.e. reducing reliance on fossil fuels, replanting the rainforest, banning ozone-eating chlorofluorocarbons, cycling to work like Mr. Boris Johnson, holding onto our polyethylene bags, buying a Prius, not indulging in air travel to destinations we could amble to, etc.
It's the tough option which no one really likes, as it demands a fundamental, systemic change in the way we have functioned for centuries. Especially opposed to the proposition are China and India, and other developing countries which hold the dubious honour of being some of the biggest emitters in the world (they do claim that, examined on a per-capita basis, their emissions are squeaky clean). This 'anthropogenic engineering' is, firstly, expensive - no one wants to buy photovoltaic cells and adorn their roof tiles with them if they will only pay for themselves long after the Earth is roasting in an oven of carbon dioxide and self-pity - and secondly, it seems to be geopolitics' Gordian knot, as demonstrated by the pitiful failure of the 1997 Kyoto Protocol.

The sexier option, the 'have your cake and eat it too' option, is a type of mitigation called geoengineering. Instead of changing human behaviour (or in conjunction with it), why not change the behaviour of the Earth's systems? Tinker with wind patterns, solar fluctuations, migrations, forestation? It seems like the role of a Bond villain cast in an episode of The Simpsons. The image that first came to my mind when someone told me this was of someone putting a giant lampshade around the sun. I found that image rather absurd (perhaps in part because the sun was wearing sunglasses), yet one of the ideas at the heart of geoengineering isn't actually that far off. It involves placing a massive, reflective sunshade in orbit, using satellites and a more advanced version of tin foil: 16 trillion small reflective discs at an altitude of 1.5 million kilometres, forming a shade 1800 km in diameter. All of this would deflect 2% of the sun's incoming radiation - enough, scientists claim, to stop global warming. Whilst that might sound expensive, this singular solution, costing some tens of billions of dollars, is probably a lot cheaper than changing our whole comfortable lifestyle and culture of disposable fashions and general consumption. There is a nuance to be considered: the shade would obviously have a negative effect on, say, photovoltaic cells - a good example of why combinations of the two approaches tend to be tricky. Moreover, a recent article by the British Antarctic Survey noted that the reflective sunshade would still leave residual climate change, owing to the different temporal and spatial forcing of increased CO2 compared with reduced solar radiation. The effects include significant cooling of the tropics, warming of high latitudes and related sea-ice reduction, a weakening of the hydrological cycle and an increase in Atlantic overturning. In addition, the reflective sunshade fails to address certain consequences such as ocean acidification.
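The sunshade figures quoted above can be sanity-checked with some quick arithmetic. A rough sketch; the solar-constant value is my assumption, not a figure from the article:

```python
# Sanity-checking the sunshade proposal: a shade ~1800 km in diameter made
# of 16 trillion discs, deflecting ~2% of incoming sunlight.
from math import pi

SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's distance (assumed value)
shade_diameter_m = 1.8e6  # 1800 km, from the text
n_discs = 16e12           # 16 trillion discs, from the text

area = pi * (shade_diameter_m / 2) ** 2  # total shaded area, m^2
per_disc = area / n_discs                # implied area of each disc, m^2
blocked = 0.02 * SOLAR_CONSTANT          # sunlight removed, W per m^2 of
                                         # Earth's sunlit cross-section

print(f"shade area {area:.2e} m^2, ~{per_disc:.2f} m^2 per disc")
print(f"2% of sunlight is roughly {blocked:.0f} W/m^2 withheld")
```

The implied disc size, a fraction of a square metre each, shows why the scheme is usually described as a swarm of small flyers rather than one monolithic mirror.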
The other oft-discussed geoengineering method is founded on a principle called the Pinatubo Effect. In 1991, Mount Pinatubo, a massive active stratovolcano in the Philippines, erupted in ultra-Plinian style (it ejected a gaseous plume tall enough to reach the stratosphere, akin to the eruptions of Mount Vesuvius or Krakatoa). The inhabitants had thankfully been evacuated and the situation was safe. Less happily, the volcano had ejected approximately 15 million tons of sulphur dioxide. The sulphur dioxide rose into the stratosphere, where it reacted with water and formed a hazy layer of aerosol particles comprised largely of sulphuric acid droplets. Over the next two years, strong stratospheric winds spread these particles around the globe. Unlike the lower atmosphere (or troposphere, which extends from the surface



to approximately 10 km), the stratosphere doesn't have rain clouds as a mechanism to wash pollutants out quickly. As a consequence, the heavy influx of aerosol pollutants from Pinatubo remained in the stratosphere for years, until chemical reactions and atmospheric circulation slowly filtered them out. The result was a measurable cooling of the Earth's surface for almost two years. Why? Steven Platnick of the NASA Langley Research Center explains:

"Because they scatter and absorb incoming sunlight, aerosol particles exert a cooling effect on the Earth's surface. The Pinatubo eruption increased aerosol optical depth in the stratosphere by a factor of 10 to 100 times normal levels measured prior to the eruption. Consequently, over the next 15 months, scientists measured a drop in the average global temperature of about 1°F (0.6°C)."

What does this mean? Firstly, it caused a sudden blip in a sequence of the hottest years on record; note the flattening of the curve on the Mauna Loa graph after 1992. Secondly, it gave geoengineers an idea: to 'seed' the atmosphere with sulphur dioxide aerosol particles. Rather confusingly, the sulphur dioxide also tore open the ozone hole even further, but, they insisted, the average temperature did cool for a short period of time. So is it feasible? It has been estimated that seeding the atmosphere with sulphur dioxide would cost $100 billion - chump change compared to the recent $800 billion economic recovery package in the United States. Why not do it? I put a similar question to Professor Maslin: 'Is geoengineering, such as seeding the atmosphere with sulphur dioxide, the stuff of Hollywood, or is it tangible?' At the time I was left feeling a little underwhelmed. Here is a paraphrase of what he said: while it is agreed that geoengineering would reduce global temperatures on a macro level, we are not really aware of the micro effects - what would happen in each country; whether it would be politically just; whether there would be mass warming in one place to compensate for mass cooling in another. Averages are rather crude, really. What he seemed to suggest was, 'We're not sure about this whole geoengineering thing, so let's wash our clothes at 30° instead.' (This is a crude stereotype, but it demonstrates the notion of geoengineering trumping anthropogenic, or 'gesture', engineering.) What he was actually referring to were concepts like sulphur dioxide causing ozone depletion, ecological issues and so on, in addition to the politics of it. He just didn't have the time to explain all of these shortcomings.

There are other methods of geoengineering too. Perhaps two years away from production is a series of carbon scrubbers, a type of synthetic tree designed to purify air, being developed at Columbia University. Each could take one ton of CO2 out of the air per day. Smaller than a standard shipping container and priced at about $200,000, these carbon scrubbers trap the CO2 entering them on an ion-exchange resin. The CO2 can then be either buried or used in other ways.

Another idea is to fertilise trees with nitrogen. The idea is said easily enough: fertilise trees with nitrogen to stimulate their ability to absorb more carbon dioxide and, by increasing their albedo, reflect more solar radiation back into space. Voila! You've begun cooling the planet. Not so fast, says climate writer Jeremy Elton Jacquet of Treehugger:

"Even if the nutrient does act as a switch that changes the leaves' structure to increase their albedo, only certain species would be able to take advantage of this property... if we wanted to apply this method on a sufficiently large scale to affect carbon emissions, we would have to plant entire forests made just out of those few species... all the environmental downsides associated with high nitrogen concentrations: nitrous oxide emissions... groundwater contamination and drying (trees that consume larger amounts of nitrogen need more water), just to name a few."

Aerial reforestation is another idea: putting seeds in biodegradable shells and dropping them out of a Cessna. This was tested on a Discovery Channel documentary, Project Earth, but unfortunately rates of successful germination were very low. It seems that getting out a spade and planting trees is the answer, but doing that in the depths of the sweltering Amazon rainforest may not be reasonably practicable. These are just some of the big ideas being floated, but what is immediately apparent is that geoengineering is still very much up in the air, while anthropogenic engineering remains tricky and expensive (plus, you know, we don't really care for it). It seems that humanity is stuck between a rock and a hard place, and the words of doomsayers such as Hansen and, to a lesser extent, Kolbert, are worth heeding. So I have just ruled out our ways of beating global warming, and told you that we should listen to the catastrophists; the question facing us is the one on the New Scientist cover, 'How to survive the century', rather than the question we want to be answering, 'How to stop global warming'. A question of adaptation rather than mitigation. Sounds like a rather gloomy analysis. There's something missing, though. There is

one glimmer of hope. I looked at why geoengineering isn't there yet, and so, in short, it is currently a non-option. But the changing of our behaviour, anthropogenic engineering, is. Why? Well, geoengineering, that form of Blofeldesque planet tinkering, has only been around for a few years. Anthropogenic engineering has been around for decades and has had a proper chance to develop into a fully fledged cooling behemoth. Some people call it 'gesture engineering' because they think our behaviour doesn't matter (in any case, the Chinese will be polluting ten times whatever I recycle anyway, right?). But that's not true. Guffawing about what will happen to your solar-powered car when it goes into a tunnel is no longer a terminative rebuff, as shown by the phenomenal Tesla Roadster and the general improvement of renewable energy technology. Rapidly, renewables, recycling and all the rest of it are dispelling long-held stereotypes. Photovoltaic cell efficiency, for example, has rocketed from a record level of 17% in 1992 to 42.8% today, which isn't far off the photosynthetic capabilities of the leaves of some plants; this means the cells pay for themselves faster and are, overall, cheaper. Another fine example is that of CFCs. The reduction of chlorofluorocarbon and refrigerant use yielded a 30% shrinkage in the area of the ozone hole between 2006 and 2007, a tangible example of anthropogenic engineering actually working. The Honda FCX Clarity, a car powered entirely by hydrogen, is another example of the rapid advancement of energy production. The Clarity works in a very simple way. At the back of the car is a fuel tank, exactly where you expect it to be, but instead of being filled with diesel or petrol, it is filled with compressed hydrogen. This hydrogen is combined with oxygen from the air in the fuel cell.
In the fuel cell, the reverse of electrolysis generates electricity: protons are conducted through a polymer electrolyte membrane, separating them from electrons, which follow an external circuit to the cathode. This current drives the electric motor, all controlled by an onboard computer. When the Clarity runs out of juice, one just pulls into a hydrogen fuel station. This avoids all the plug-socket gimmickry of previous electric cars. Admittedly, there are very few hydrogen filling stations, so in the meantime Honda has built a Home Energy Station which reforms natural gas (with electricity from your socket) to create hydrogen. Hydrogen costs roughly the same as petrol and yields 270 miles of driving for a two-minute stop at the pump, quite unlike the twelve-hour charging from a plug socket that the previous generation of green cars required.
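The fuel-cell description above (two electrons per hydrogen molecule, driven through an external circuit) lends itself to a back-of-the-envelope calculation. The sketch below is illustrative, not from the article: the 0.7 V operating voltage is an assumed typical value for a loaded PEM cell.

```python
from math import isclose

F = 96485  # Faraday constant, coulombs per mole of electrons

def fuel_cell_energy_per_mol_h2(cell_voltage=0.7):
    """Electrical energy (joules) per mole of H2 oxidised in a fuel cell.

    Each H2 molecule supplies 2 electrons to the external circuit,
    so energy = (moles of electrons) * F * voltage.
    The default 0.7 V is an assumed typical operating voltage.
    """
    electrons_per_h2 = 2
    return electrons_per_h2 * F * cell_voltage

# Roughly 135 kJ of electrical work per mole of hydrogen at 0.7 V:
print(round(fuel_cell_energy_per_mol_h2()))
```

A higher cell voltage means more work extracted per mole of fuel, which is why fuel-cell engineering focuses on keeping the voltage up under load.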


A36851 HaberAske Scope TEXT AW:A31822 HaberAske Skylark

12/3/10

10:36

Scope 2009/10 Review

The crucial factor is this: while not technically 'renewable', hydrogen is the most abundant element in the universe. Even better, the only pollution comes in the form of water droplets trickling out of the exhaust, from the combination of hydrogen and oxygen. Another exciting and improving area is wind turbine technology. The trend that bigger rotors are better persists: the longer the blades, the larger the area 'swept' by the rotor and the greater the energy output. In addition, there are many different turbine designs, so there is scope for yet more innovation and technological development. For some people, it is not enough to see that renewable energy has tangible results. It has to be profitable, too. Jiang Lin, who heads up the China Energy Group at the Lawrence Berkeley National Laboratory, said that solar thermal is likely the most promising technology in the entire alternative-energy field. When asked when solar thermal can hit parity with fossil fuels, Lin responded 'now'. Existing solar thermal plants have yielded at most 354 megawatts because they are built over relatively small areas. If they generate more than 500 megawatts, economies of scale should bring the cost below 10 cents per kWh. This is considered to be the magic number at which all industries will switch to solar, rain or shine, as it compares at least equally, if not favourably, with certain fossil fuel costs. So changing our human behaviour is sort of there; it has shown tangible results, and it is almost economically viable (at least in the case of photovoltaic and solar thermal energy). Economic subsidisation from the American Reinvestment and Recovery Plan, cap-and-trade schemes (successfully trialled in Denmark) and private funding, as investors seek to dominate a future market, should make the adaptation of human behaviour even more economically viable.
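The point about longer blades sweeping a larger area can be sketched numerically with the standard wind-power formula P = ½ρAv³Cp. The numbers below (air density, efficiency, blade lengths) are illustrative assumptions, not figures from the article:

```python
from math import pi

def wind_power(blade_length_m, wind_speed_ms, air_density=1.225, cp=0.4):
    """Power (watts) captured by a rotor: P = 0.5 * rho * A * v^3 * Cp.

    A = pi * L^2 is the area swept by blades of length L; Cp is the
    fraction of the wind's energy extracted (Betz's law caps it near 0.59).
    """
    swept_area = pi * blade_length_m ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

# Doubling blade length quadruples the swept area, hence the output:
p40 = wind_power(40, 8)
p80 = wind_power(80, 8)
print(p80 / p40)  # 4.0
```

The cubic dependence on wind speed also shows why siting matters even more than rotor size: a modest increase in average wind speed outweighs a large increase in blade length.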
As well as that, the recent UN meeting and the Copenhagen summit of December indicate a real desire to make a difference; President Hu Jintao's recent overtures could prove telling. It seems our own adaptation is a slightly less heady dream than the untested schemes of geoengineering. That 'gesture engineering' tag can bugger off, then. The crucial point is this: mitigation, in the form of modifying our own behaviour as opposed to the behaviour of the planet's systems, is better established and improving rapidly, and therefore has a bigger chance of succeeding. So we've reviewed anthropogenic engineering and geoengineering, and I've suggested anthropogenic engineering looks the more efficacious solution. Whatever methodology you think works best, there is one crucial part of the global warming issue, to return to the very beginning: instead of talking about it, writing and reading about it, or having meetings about it and coming out with the same consensus, we ought to do something.

There is hope that the world's policy makers will be proactive in Copenhagen later this year, when they form a new climate treaty, provided it isn't aggressively whittled down by opponents to the point of being ineffective, as seems to be the case with the Waxman-Markey climate bill that narrowly passed in the U.S. House of Representatives. Why should we do something? Why should we care? It's not happening, to me, right now, is it? It certainly doesn't seem dangerous, and, you know, polar bears had it coming. This is a tricky dilemma to resolve. At what point have we superseded what climatologists call DAI (dangerous anthropogenic interference)? As of March 2009, carbon dioxide in the Earth's atmosphere is at a concentration of 387 ppm by volume. What constitutes too much? Aren't catastrophes like Katrina and the flooding in Bangladesh evidence enough? Is it 350 ppm, as Hansen claims with his new campaign, 350.org? Is it 500 ppm, or 550 ppm, as the Chinese might like to lead us to believe? On an apparently unrelated topic, the great Sir Donald Bradman said this of chucking in cricket:

“It is the most complex problem I have known... because it is not a matter of fact but of opinion and interpretation. It is so involved that two men of equal sincerity and goodwill could take opposite views.” This ambiguity is part of what lies at the heart of the inertia. Maybe we won't know when it is dangerous until it slaps us in the face, the slapping hand coming in the form of an apocalyptic wave of Arctic meltwater. Wired News makes the point to Elizabeth Kolbert that things getting warmer sounds quite nice as a notion. Two or three degrees of warming seems harmless, even pleasant, not dangerous! We'll be able to grow vines for wine here rather than buying the French stuff, and not have to trek to Ibiza to holiday.

WN: Isn't part of the problem that people associate 'warm' with comfortable?

Kolbert: “People think, 'I won't have to go to Florida anymore. Florida will come to me.' People should realize that warmth doesn't mean Florida. It means New York is underwater. It may be that certain places like Siberia are more comfy, but it also means that they have no water. If people say, 'Why should I be worried about global warming?' I think the answer is: Do you like to eat?”

I bet you do like to eat.


An Interview With Simon Baron-Cohen

Scope: What do you recall about Haberdashers' from the days you were here?

SBC: Haberdashers' clearly aimed at excellence and encouraged independence of thought. Sure, it was traditional, but it also held up on a pedestal those individuals who were creative in any way. It offered opportunities if students wanted to take them. I remember singing in the school choir, acting in school plays, contributing to the school magazine, going to after-school clubs in sculpture and carving in the art department, and heading down to Tykes Water during the lunch hour for lots of fun! I have remained good friends with a handful of people from my year at school and we meet for regular reunions. I spent 1967-77 at Haberdashers', during which there were significant changes in society, and the school had to adapt to these social changes.

Scope: Can you explain your 'Theory of Mindblindness'?

SBC: Theory of mind is also called the capacity for "mindreading", and "mindblindness" is the flip side of the coin. It describes individuals who cannot mindread others, who have difficulties in using a theory of mind. I argued back in 1985 that children with autism suffer from degrees of mindblindness, which may explain their social and communication difficulties, and this has been amply supported by experimental evidence.

Scope: Your most recent work, published in the British Journal of Psychiatry, examined the effect of foetal testosterone (FT) on normal children. Given that testing the supposed link between FT and autism, touted by the media, would require large study samples (as autism occurs in only 1% of the population), are there any other methods, apart from such studies, which could substantiate the theory?

SBC: The best test of whether FT is elevated in autism would come from a large sample, and fortunately we have access to such a sample through the Danish Biobank, which holds 70,000 samples of amniotic fluid. From these it is possible to identify 400 children who went on to develop autism, so by the end of 2009 we will have tested these 400 samples of amniotic fluid (against matched controls) to test the FT theory.

Scope: Your views on prenatal testing for autism have been much publicised; have they changed at all?

SBC: My views have been variously misreported! But in brief, I think prenatal testing for autism raises the same ethical issues as prenatal testing for any medical condition, and we should recognize it could lead to a form of eugenics under a different name. I think autism is a very wide spectrum, and whilst some individuals with autism have many severe disabilities (including learning difficulties and epilepsy), others have social difficulties alongside unusual strengths or talents. I am pro-diversity and would like to see a society in which children with autism receive support with their social difficulties but where their talents can blossom.

Scope: Clearly people with mental illness can contribute positively to society; for example, David Horrobin claims that 'schizophrenia shaped society'. Do you believe a change in public perception regarding mental illness is required?

SBC: I think we need to de-stigmatize mental illness, but equally psychiatry itself is still in its infancy.

Scope: Has there been any significant research to contradict the view that autism is just one extreme form of the male brain?

SBC: Not yet, though the hypothesis remains to be fully tested at the neural level.

Scope: You were involved in the McKinnon case; why do you think the law courts rejected his appeal?

SBC: I suspect there is still little sympathy for the predicaments that people with Asperger Syndrome end up in, and that the USA would not want to send out a message that if you hack into the Pentagon from outside the USA and get caught, you can avoid extradition if you have mitigating circumstances.

Scope: You have written in the New Scientist about misrepresentation of your views. What do you make in general of the developing trend of trying to increase public understanding of science?

SBC: I think it is still important for scientists to communicate with the public, and my piece in the New Scientist was a message to journalists who report science to look more carefully at what they do.

Scope: How far can the ability to mindread (recognise emotions) be taught to children with autism?

SBC: We don't know what the upper limits to such teaching could be, but at least we have demonstrated that such teaching does make a difference.

Scope: Working in the field of cognitive science, what is your view with regard to the developing field of nootropia and cognitive enhancement?

SBC: This field is not that new, and of course we've all been drinking coffee for years at work to aid our concentration! The key point with any drug is what its unwanted side effects are.

Scope: You were the first to develop a test for synaesthesia; do you plan any further research in this field? What could further research examine?

SBC: It is true that in 1987 I published the first test for synaesthesia, which opened the doors to two decades of research into the condition. I also published the first brain scanning studies of synaesthesia, one of which was published in Nature Neuroscience in 2002. My most recent foray back into the field was to publish the first genetic study of synaesthesia (in 2009).

Scope: You identified two regions in the brain related to autism, leading to the amygdala theory of autism; is there any scope for medication utilizing neurotransmitters in these regions, or is autism more anatomically based?

SBC: Medication may one day be useful therapeutically in autism, even if autism has a neuroanatomical basis.

Scope: To what extent is psychology a discipline in its own right? Unlike psychiatry it does not require a medical degree; would you see it as separate from the field of medical science?

SBC: Psychology is today largely cognitive neuroscience. Psychiatry is distinct from it because it is also a profession that can legally prescribe medication for mental illness.

Questions by Raj Dattani & Casey Swerner



An Interview With Michael Lexton

Could you give us a brief outline of your career?

I graduated from Swansea University in 1969, and then proceeded to do a PhD in the same department on the topic of gas kinetics, specifically the reactions of hydrogen atoms with alkenes in flow discharge tubes. Essentially I analysed the products and tried to figure out the rates of reactions in the processes. I decided that I didn't really find a career in research an attractive proposition and, having done a little bit of teaching, I went to Cambridge to do a PGCE. Once I got into the practice of teaching I was hooked.

So it was love at first try then?

Yes - once you start and you enjoy the classroom and you (like I) enjoy the subject, you find that teaching that subject is something amazingly rewarding and pleasurable.

Where did you go on from Cambridge?

I started teaching at Trinity School in Croydon. In 1977, after four years at Trinity, I applied for a job here (HABS); the department (at HABS) was bigger and seemed to be more dynamic, and as you know I've been here ever since. I became the Head of Department (Chemistry) in 1982 and then the Head of Faculty (Science) in 1990.

Did this progression in role make your job more enjoyable?

Well, I've never lost the enjoyment of the classroom and keeping up with my subject, but yes, I do also enjoy the managerial side of my job. I find that there are interesting challenges and that there have been changes in the way the subject is taught - constant changes in fact. You have to keep up, and there are concerns (as you often see in the media) about 'dumbing down' in examinations. I don't think they have been dumbed down, but regardless of what one thinks on that topic it doesn't mean you have to dumb down the teaching!

Are pupils stretched enough?

Yes. You see, if all you teach is for the examination then that's all you will get out of it as a teacher. You can always try to push them a bit harder, set material which is a bit more challenging. Mostly the boys like this and really respond to the challenges. Indeed the boys here are no less bright than they were 30 years ago and no less ambitious. They are a bit politer I think, but that may just be imagination! You have to challenge them, otherwise our pupils would be bored!

What are your reflections on the move to the Aske (the new science building)?

I am very pleased with the outcome; it's a terrific building. It's a pleasure to teach in the labs. Teaching in the temporary building was surprisingly straightforward - the rooms were bigger and better than the old building! However, we did face challenges in the temporary building. The difference could be seen in the transition to this new building - it was easier, not only because we could leave a lot of old equipment behind us, but also because we had learnt lessons during our time in the temporary building.

Is the new building value for money?

It was very expensive! It is a building with a design life of 80 years - compare that to the design life of the original 1961 buildings of this school, only 25 years. You can take a design life and extend it by double or more sometimes, so you're looking at a building that will definitely be here in 100 years' time. It was of course expensive, but you cannot plan for the future unless you are willing to invest for the future. It was a well planned move in terms of architecture, design and finance.

Onto the curriculum - what was the reasoning behind the move to IGCSE for the sciences?

We had reached the stage five years ago with the old GCSE syllabus where we felt they didn't offer enough of a challenge, they didn't prepare for A level, and, to be polite, we were concerned with the validity of the coursework investigation. IGCSE seemed an obvious choice. The syllabus material suits our boys, with hard sciences which challenge and prepare.

What do you think of the new Chemistry A level?

Without a doubt the 2009 paper was difficult, although generally boys will tell you that the paper was harder than past papers. This year I agree that the paper was indeed a tough one. I think with examinations you have to break it up and look at it like this: there's the syllabus, which informs the content; the past papers, which inform the style; and the teaching, which doesn't necessarily have to marry those two exactly.
You can teach it in a way which is appropriate as I explained earlier. The new specification is not very different and the new changes are sensible;

they've removed industrial chemistry - it is a very "GCSE" topic. They include new material such as entropy and provide a new approach, governed by the "How Science Works" concept - a promising teaching tool that also promises harder applied material. Difficult doesn't equal bad!

What do you think about science outside of the curriculum at HABS?

There is a lot going on beyond the curriculum, with Science Society and Scope at the forefront. Scope is an absolutely outstanding science magazine. I have seen a lot of school science magazines and there is nothing of comparable quality. Mr Delpech has taken it to a new level in terms of quality in both production and content. What's laid on by the Science Society is amazing - we have had some very high quality speakers who travel long distances to speak to us, and recently turnout has been considerably higher. [In 2009/10 the Science Society presented The Lord Krebs, The Baroness O'Neill of Bengarve, Dr. Aubrey de Grey, Prof. Simon Baron-Cohen, David Bodanis and will be hosting Prof. David Nutt amongst others in the near future...] Elsewhere the Engineering Education Scheme, Junior Science Club, and the Junior Science Fair (which my successor as Head of Department, Dr Pyburn, pioneered) have all been marked successes. Add in the Science Olympiad extension classes in General Studies, which offer an opportunity to go above and beyond with committed students. Boys have had a great deal of success - we have had students compete at international level and many others gain medals nationally. I am really proud of the way the school offers these opportunities to pupils. Yes, you have some pupils who will do the work and go home at 16.00 - you always will. But what's special is that we also have some really committed students who really want to learn.

What recommendations do you have for improving Scope?

Taking off my Head of Faculty cap and putting on my Head of Department cap, I would say chemistry is normally under-represented.
People do not appreciate the role of chemistry in advances in all areas of science – materials, plastics, and medicine are all heavily influenced – for example surgical advances have always followed from medical [chemical] advances. Scope has, in past issues, focused down on the medical side. I would like to see more physics too – particle



physics and astronomy, for example.

Do you think Chemistry's role is more than just a 'facilitator' for the more applied aspects of the sciences, such as medicine?

Chemistry has a bad press! This is a hobby horse of mine! If you pick up a newspaper and read the science section, you might read about physics - the LHC (which broke apart after 5 minutes!) - or how medics are about to cure cancer or Alzheimer's - headlines we have been having for many years! If you see chemistry, however, you'll only read about things like toxic material leaks and the like. A very negative image really. I think that's a great shame - it discourages people from going into chemistry - bright scientists might not feel it's something they want to be involved in. It hinders the advances which will bring real progress in all sciences.

How could chemistry get a better press?

Chemists are at the forefront of materials science, and a lot of the analytical advances that have been made by chemists promise all kinds of uses. Pharmaceuticals, medicine and drug design have all been transformed by progress in the chemical sciences.

What are the major barriers science still has to conquer?

That's more difficult than it sounds. I read an obituary of some distinguished scientist (whose name I have forgotten!) some years ago, and it described how in the 1930s he was inspired to study physics at university. His teacher told him not to bother as there were only a "few loose ends left to tie up and that would be that [in physics]". He was told he would be better off doing something else. Nothing could be further from the truth! Physics has been saying the above periodically - at the end of the 19th century, for example. Then along comes Einstein!

A reference to the often quoted criticism of the HABS boy: highly confident and able verbally?

I think that is overrated personally! I really do! Boys here are very self-confident and articulate, and that can sometimes be interpreted as arrogance. It's not particularly true of the HABS boy; it is an illusion, just like the illusion some have that this is an exam factory, that we make boys cram. Yes, there are high expectations, but remember that there is an awful lot going on here which has nothing to do with exams.

Could HABS do better in preparing students for science at university?

The best of our students are well prepared to take advantage of the academic opportunities, but there are temptations at university! You find too many people who are only interested in getting a fairly good degree and a good job in the City, and so aren't interested in their academic specialism. As such, people don't get as much pleasure out of it as they should. These opportunities won't ever come around again. People are too busy with the peripheral - clubs and societies. Yes, one should do that by all means, and do go out and socialise - these are things you must and should do! But don't lose sight of the academic opportunities to engage with some of the best minds at the finest universities. Your learning capabilities are at their peak at university, and you never get that time again. Beware! I am not so sure the average HABS boy is fully aware of that.

What will you miss about HABS?

I've always enjoyed coaching rugby - it's a great way of getting out of the classroom, doing something that I love, a game I love. I enjoy dealing with boys in a different context - more than just a teacher. I don't think I plan to miss my career - I've had a good career - 36 years! I feel I am ready to go.

Do you have any immediate plans?

No big plans - travelling is on hold for the moment. I also play a lot of chess - I play less than I used to - if I play and study a bit more I might improve! I have an awful lot of reading and music to catch up on!

Is that a musical wild side?

No, mainly classical - unless I point back to the Beatles perhaps and that era; I am afraid popular music has passed me by!

Dr M J Lexton
Michael Lexton joined the Haberdashers' in 1977. He was born in Cardiff, South Wales, and at age 11 he went to St Illtyd's College, Cardiff, where he took O-levels and then A-levels in Mathematics, Physics and Chemistry. In 1966 he entered University College Swansea reading Chemistry, graduating in 1969. He went on to take a PhD at the same institution, and later a PGCE at Cambridge. His thesis was based on "The Reactions of Hydrogen Atoms with Simple Alkenes", a subtle and complex aspect of gas kinetics and yet an area of chemistry that has real commercial value, as it is a reaction at the heart of the process that turns vegetable oil into margarine.

Questions by Raj Dattani




Entropy & The Theory Of Evolution
Johan Bastianpillai

What are the Laws of Thermodynamics?

The First Law of Thermodynamics states that "during any reaction the total energy in the universe remains constant." The Second Law of Thermodynamics states that "during any reaction the total useful energy in the universe will decrease." To illustrate this, consider a ball at the top of a hill. It has 'useful' potential energy. As it rolls down the hill, this potential energy is transferred into kinetic energy, so that at the bottom of the hill there is no more 'useful' potential energy to do any more work, as described by the Second Law.

Increasing entropy

The entropy of an isolated system can increase in two main ways: through constraint change or temperature change. Entropy will increase if the amount of kinetic energy is constant but the energy is distributed in more ways in the final state of a system after the particles have changed due to reaction (i.e. a higher number of products than reactants). However, one principle to note is that if constraint on particles increases, causing less freedom of motion, entropy decreases. For example, if the number of particles decreases, then entropy would decrease, as there would be fewer ways to distribute energy; the same happens when volume decreases or when particles change into a more organised phase (i.e. gas to liquid). Entropy will also increase if there is more kinetic energy, which can be distributed in more ways, even if the particles don't change. Here, an increase in temperature (a measure of average kinetic energy) would cause an increase in entropy. If we think of a system's total entropy as 'constraint-entropy' + 'temperature-entropy', an entropy change can be due to either one or a mixture of both. Often, as we will see later, they can conflict: constraint may cause entropy to decrease while temperature causes it to increase. Usually the temperature-entropy is the dominating factor, so there is an increase in the overall entropy of the universe even though 'disorder' seems to decrease, as we can see below. Here are five examples from astronomical evolution of reactions that involved two particles (electron and proton) and three forces (electrostatic, strong nuclear and gravitational):

A. An electron and proton are attracted due to the electrostatic force and eventually (700,000 years after the Big Bang) the temperature

cools enough for them to remain together to form a Hydrogen atom. B. In space, H2 molecules are pulled toward each other by gravitational forces, so they gain kinetic energy as they gain speed and thus they increase in temperature. When this temperature is high enough, H2 molecules are separated into their constituent protons and electrons, in a reversal of reaction A. C. As gravity continues compression, the temperature rises, eventually causing protons to collide with such force that the strong nuclear force overcomes electrostatic repulsion between protons and pulls them together. This starts a chain of powerful nuclear reactions which convert four protons into a helium nucleus, creating a star. D. Later in some stars lives, a series of nuclear reactions create heavier elements such as lithium, carbon, nitrogen, oxygen and iron, which form plants and our bodies. Some stars become larger supernovas containing these heavier atoms. E. When a supernova explodes it releases ‘heavy atoms’ into space where gravitational forces condense the atoms to form planets in the solar system. What happened and why? For the five reactions, let’s look at the changes in entropy caused by changes in temperature and constraint: Changes due to constraint – In A, the number of particles decreases as four particles becomes two (2e- + 2p becomes 2H) and then two becomes one (H + H becomes H2). Constraint increases and this causes a decrease in entropy. In B and E, gravity causes constraints which again decrease entropy. The same can be said for C and D as

W h a t i s en t r o p y ?

the strong nuclear force converts many small particles into few large ones, decreasing entropy. Changes due to temperature – Entropy increases in each reaction as they are pulled together by a force, increasing their kinetic energy and temperature. Change of universe-entropy: In each reaction, a small entropy decrease (constraint) is overcome by a large entropy increase (temperature) which causes an overall increase of entropy in the universe, consistent with the Second Law. Change of system-entropy: In an open system, entropy could decrease as in terms of temperature change, heat energy is moving out of the system into the universe, causing a decrease in system-entropy. An increase in constraint on the system would again mean a decrease, so there is an overall decrease in system entropy, but importantly, there is still an increase of entropy for the universe system + surroundings), which is described by the Second Law. Change of apparent disorder: In each reaction, particles become more constrained and ordered from molecules into planets and solar systems. This is a period of astronomical evolution, showing a decrease in ‘disorder’ due to simple attractive forces There are two important kinds of intuition here. There is everyday intuition which is based on the fact that entropy is disorder and will reach the wrong conclusions because in each reaction, the ‘disorder’ decreases, so entropy should by this reasoning, decrease, but we have seen that it increases. In contrast, thermodynamic intuition (based on a correct understanding of entropy, which you now have) leads to the correct answers and concludes that entropy increases in each reaction. Entropy and Evolution Now that we’ve sorted out that mess, let’s

On a microscopic level, entropy is a property that depends on the number of ways that energy can be distributed among the particles in a system. Really, it is a measure of probability and not disorder (a common misconception), because if energy can be distributed in more ways in a certain state, then that state is more probable. For this reason, chemicals in a system tend to the equilibrium state in which the greatest number of ways of distributing energy amongst the molecules can occur.
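As a concrete sketch of this counting (my own toy example, using an Einstein-solid model that the article itself does not mention): the number of ways W of distributing q identical energy quanta among N particles is the binomial coefficient C(q+N-1, q), and Boltzmann's formula S = k ln W turns that count into an entropy.

```python
from math import comb, log

def multiplicity(q, n):
    """Number of ways to distribute q identical energy quanta among
    n distinguishable particles (stars-and-bars counting)."""
    return comb(q + n - 1, q)

def entropy(q, n, k=1.0):
    """Boltzmann entropy S = k ln W, here in units of k."""
    return k * log(multiplicity(q, n))

# More ways of distributing the same energy means higher entropy:
print(multiplicity(10, 3), entropy(10, 3))   # 66 microstates, S ≈ 4.19 k
print(multiplicity(10, 6), entropy(10, 6))   # 3003 microstates, S ≈ 8.01 k
```

States with more microstates are proportionally more probable, which is why a system drifts toward the arrangement with the greatest W.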

The Second Law is a description of probability, recognising that in every naturally occurring reaction, whatever is most probable is most likely to happen (all things considered). As probability is universally related to entropy, we can state the Second Law in a more precise form: during any reaction, the entropy of the universe will increase.


A36851 HaberAske Scope TEXT AW:A31822 HaberAske Skylark

12/3/10

10:36

Page 27

Scope 2009/10 Review

Entropy and Evolution

Now that we’ve sorted out that mess, let’s have a look at why young-earth creationists become so excited by thermodynamics. Henry Morris, a creationist, explains his great discovery: “The most devastating and conclusive argument against evolution is the principle of entropy (the Second Law of Thermodynamics). This principle implies that...evolution in the vertical sense (becoming increasingly complex) is completely impossible. The evolutionary model requires some universal principle which increases order...however the only naturalistic scientific principle which is known to effect real changes in order is the Second Law, which describes a situation of universally deteriorating order. The law of entropy is a universal law of decreasing complexity, whereas evolution is supposed to be a universal law of increasing complexity.”

In later statements, Henry Morris claims that ALL types of evolution are impossible, because “evolution requires some universal principle which increases order, causing random particles eventually to organise themselves into complex chemicals, non-living systems to become living cells (chemical evolution), and populations of worms to evolve into human societies (biological evolution). What information codes tell primeval random particles how to organise themselves into stars and planets? (astronomical evolution)”

Have creationists found a “devastating and conclusive argument against evolution”? To answer this, scientific details are essential, so we have to look briefly at two of the three very different types of evolution - astronomical and biological - to understand whether Henry Morris is indeed making a plausible claim.

Astronomical Evolution

The sequence of reactions described previously is essentially astronomical evolution: protons and electrons developed and changed into more complex atoms, molecules, then stars and planets, and eventually solar systems. This ‘evolution’ was brought about as the particles did what ‘came naturally to them’, which was to feel an attractive force and act upon it. The ordered complexity which resulted does not violate any principles of thermodynamics as, contrary to the claims of Morris, the Second Law is not a “universal law of decreasing complexity”. None of these reactions violates the Second Law, and neither does the overall process. On the small scale, yes, entropy decreases as particles are compressed together; however, in the bigger picture, the universe is expanding overall, decreasing constraints and increasing entropy. Movement of these particles increases entropy due to increases in temperature, so universe-entropy increases.

Biological Evolution

An evolution of increasing biological complexity can occur while the total entropy of the universe increases. Is the Second Law violated by either mutation or natural selection, the major actions in neo-Darwinian evolution? No. And if an overall process of evolution is split into many steps involving mutation then natural selection, each step is permitted by the Second Law, and so is the overall process. Random mutations which are beneficial can lead to more developed organisms, showing that natural evolution can produce increasingly complex organisms. Admittedly, we can ask scientifically interesting questions about complexity - how much can be produced, how quickly, by what mechanisms - but how does the Second Law fit as justification for the creationist argument? The truth: it doesn’t.

Does entropy undermine the theory of evolution?

If you’ve followed the argument and the points put across, you should by now see that Morris’ claims aren’t really backed up by scientific evidence. He claims that all types of evolution are impossible. As explained previously, these claims are generalised and rely on everyday intuition, not thermodynamic intuition. His is a misunderstanding of the Second Law, and so his argument was flawed from the outset. Does entropy undermine the theory of evolution? No. In short, we must take away a message from this. The conflict has been around for some time, and when it arose, people took it fairly seriously. In this case, a simple misinterpretation had led to attempts, albeit feeble, to undermine a theory which has been the fundamental foundation of modern biology.




An Interview With Aubrey de Grey

Scope: Could you please explain your theory of "The seven types of aging"?
AdG: Well, first of all it's not exactly a theory. It's just a way of classifying the various molecular and cellular changes in the body that accumulate throughout life as side-effects of normal metabolism and eventually contribute to age-related ill-health. I suppose it can be called a theory in some ways - first, there are a few changes that accumulate but that I claim do not contribute to age-related ill-health, and second, part of my claim is that there are no accumulating changes that do so contribute but that we haven't yet discovered. But the "seven-point plan" itself is just a classification. So: well, the categories are cell loss, cell accumulation through failure of cell death, cell accumulation through excessive cell division, mitochondrial mutations, intracellular molecular garbage, extracellular molecular garbage, and extracellular protein cross-linking.

Scope: Mitochondrial Aging and Cancerous Mutations are listed as 2 of the 7 types of aging... are these forms of aging simply entrenched in our process of cellular replication? If so, would this not be simply impossible to avoid?

 AdG: Oh, for sure these things are impossible to stop from happening - and actually, all the other five are also impossible to stop from happening. But SENS is not a plan for stopping these things from happening; it's a plan for stopping them from mattering. In most cases this is by repairing them after they've happened; in the two cases you mention it's by preventing them from causing any impairment of metabolism.

Scope: Some of your research on immune aging has focused on interleukin 7 (IL-7); what is IL-7? And do you accept reports that efforts to use it to improve protective immunity "have produced disappointing results so far" (EMBO Reports 6, 11, 1006-1008 (2005))?

AdG: Yes I do. There are various problems with the techniques that have been tried so far, including the half-life of the injected IL-7 and its localisation to the thymus - but those problems are being addressed by various groups in new methods currently being developed. Also, IL-7-independent ways to regrow the thymus are being explored, including in a study that SENS Foundation is funding.

Scope: You have claimed that "mitochondria with reduced respiratory function, due to a mutation affecting the respiratory chain, suffer less frequent lysosomal degradation". Surely this poses a significant challenge; how would it be overcome?

AdG: It's very hard! In fact, I have neither heard about nor devised any way to do this. That's why I still favour the approach of restoring those mitochondria to normal respiratory function by introducing suitably modified mtDNA into the nucleus.

Scope: Caloric restriction has been proven to reliably extend lifespan in many different types of animals, most notably mice. Do you believe this endeavour will have much value in extending human lifespan?

AdG: No. I believe it will have non-zero value, but far less than in shorter-lived species, because the selective pressure to respond to famine by altering one's metabolic priorities is much less when the outcome is a 20-year delay of aging than when it is only a one-year delay.

Scope: In your opinion how would the concept of 'immortality' change how society functions? Would post-natal human developmental periods and life stages change? What about overpopulation?

AdG: I don't work on immortality. I work on stopping people from getting sicker as they get older, and I think I know how to do that so well that they will indeed live a lot longer - but that's a side-benefit, not the motivation. Development will not change at all, no. Other life stages will change only in the sense that there won't be a stage of decline. Overpopulation will be a threat, as it already is today, and we will address it by matching the birth rate to the death rate in whatever ways society may choose.

Scope: Assuming that the eradication of aging was possible, would you envisage a societal paradigm shift to this or would it be more of an individual choice?

AdG: I'm not really sure what you mean by a social paradigm shift. I do expect that everyone will want these therapies - I don't tend to meet people who want to get sick as they get older.

Scope: What is the role of the SENS Foundation in terms of anti-aging research? Why not simply act through a university?

AdG: The value of having an independent foundation is that we can coordinate and prioritise what research is done using the funds that donors provide. We do indeed fund research in universities, but not just one university. And we also fund research in-house at our own research centre.

Scope: Have any public medical institutes backed the SENS project, or is it entirely privately funded? If so, why or why not?

AdG: SENS's work is entirely funded by philanthropy so far, but our most promising research projects are now far enough along that we're in the process of applying for public funding to take them forward. Also, some projects essential to SENS are being abundantly funded independently of SENS.

Scope: Ray Kurzweil and Eliezer Yudkowsky are working on the notion of a 'singularity', in which a hybrid biological organism/machine will ensure human immortality. Do you think this will supersede the work of SENS?

 AdG: I don't know - and nor do they. That's why I strongly support their work and they support mine: we all feel that the best way to maximise our chances is to pursue all promising avenues as hard as possible, in parallel.

 Scope: How far down the line is SENS in terms of research for each of the seven types of ageing? 

AdG: Some strands are very far advanced - in late-stage clinical trials. Those are the areas that the SENS Foundation doesn't fund, because we don't need to. Others are probably 6-8 years away from demonstration of proof of concept in mice.

Scope: When do you expect, if research goes as planned, the first big 'breakthrough' in your research?

AdG: There are breakthroughs all the time, but the really decisive breakthrough will be when we implement all the SENS strands in the same mice well enough to give a two-year (or so) postponement of aging to mice that are already in middle age when we start the therapy. I think that's probably now less than 10 years away.

Scope: Do you expect the maximum age level to plateau, or rather that scientific discoveries will ensure it keeps ever increasing? How soon are we likely to see significant changes to life expectancy?





AdG: Ever increasing, for sure and certain. When we achieve a few decades of postponement, the rate at which we're further improving the therapies will far outstrip the rate at which new problems are arriving. I think we have a 50% chance of getting to that cusp within 25 years.

Scope: Some people might argue that in many cases, rather than the inherent ageing of the human body being the overarching reason for death, a specific acute clinical condition such as a heart attack can be said to be the cause. How would you answer this?

AdG: Acute conditions, and indeed chronic conditions that afflict the elderly, are merely aspects of the later stages of aging. They are age-related for that reason: otherwise, they'd affect young people just as much as the elderly. Thus, SENS can be viewed as preventative geriatrics.

Scope: Do you think that SENS's and your own attempts to bring anti-ageing to the forefront of the wider non-scientific community have brought even more criticism from those who do not accept it?

 AdG: Oh yes - but that's a good thing. Opposition is the last step before acceptance. The problem is when one's just being ignored or ridiculed, and I'm largely past that stage.

Scope: Could you please explain what you mean by your 'engineering' approach to gerontology?

AdG: It's really simple: it just says two things. First, that preventative maintenance is easier than curative maintenance, and second, that for a really complicated machine that we don't understand very well (like the human body) it's also easier than redesigning the machine altogether so that it needs less maintenance in the first place.

Scope: Many people have criticised your work; what is your reaction?

AdG: Relief. If you do science and no one is criticising you, you're almost certainly not making much of a difference.

 Scope: Do you think the wider scientific community has become more accepting of your approach since the publication of the EMBO Reports in November 2005 attacking your work?

 AdG: Without doubt. In fact, many of the authors of that article have become supportive. To a large extent they were already far less dismissive than the article indicated: there was a lot of politics involved.



Questions by Casey Swerner

Dr. Aubrey de Grey

Dr. Aubrey de Grey is the Chief Science Officer of the SENS Foundation (Strategies for Engineered Negligible Senescence). His main research interest is in the elimination of the effects of cellular and molecular damage in aging. He firmly believes that “there is no difference between saving lives and extending lives, because in both cases we are giving people the chance of more life” and also that the first person to live to 1000 years “may already be born”. He visited the School’s Science Society in January 2010.



Biological Sciences




Scope 2009/10 Biological Sciences

Abiogenesis: The Origin Of Life
Nicholas Parker

Throughout human history, there have been many people who have tried to answer the fundamental question of how life arose. Abiogenesis is a scientific theory that explains the origin of life from inanimate matter. At present, this theory concerns only life on Earth (as no life has yet been found elsewhere in the Universe). It shouldn’t be confused with evolution, which is the study of how groups of living things change over time.

Origins of Abiogenesis

Some would argue that spontaneous generation was the first “theory” resembling abiogenesis. However, as spontaneous generation wasn’t a scientific theory - and was eventually disproven by Pasteur, in his experiments using broth - it is, in my opinion, too different from abiogenesis to be considered its precursor.

The first hypothesis on the natural origin of life comes from Charles Darwin, in a letter to Joseph Dalton Hooker dated February 1, 1871:

“It is often said that all the conditions for the first production of a living organism are now present, which could ever have been present. But if (and oh! what a big if!) we could conceive in some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity etc. present, that a proteine compound was chemically formed ready to undergo still more complex changes, at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed.”

For the next 50 years, no notable research or theory appeared. Then in 1924 Alexander Oparin, in his book “The Origin of Life”, proposed that life had once arisen spontaneously, beginning with self-replicating molecules that evolved into cellular life. Around the same time, J. B. S. Haldane put forward a similar theory, providing the basic theoretical framework for abiogenesis.

So what exactly is Abiogenesis?

The biologist John Desmond Bernal, building on Oparin’s and Haldane’s ideas, suggested that there were a number of clearly defined "stages" that could be recognised in explaining the origin of life:

1. The origin of biological monomers
2. The origin of biological polymers
3. The evolution from molecules to cells

These stages provide an excellent framework for explaining abiogenesis as they break down the large theory into smaller, more digestible steps. I will outline the most interesting one (the “Genes/RNA first” model). Other models can broadly be divided into two groups: “Proteins first” and “Metabolism first”, though there are a few models which stubbornly refuse to fit into either.

Stage 1: The origin of biological monomers

Evidence suggests that the atmosphere of the early Earth was composed primarily of methane, ammonia, water, hydrogen sulfide, carbon dioxide or carbon monoxide, and phosphate, creating a reducing atmosphere. In 1924, Alexander Oparin and J. B. S. Haldane independently hypothesised the origin of life from a “primordial soup” under such a reducing (oxygen-free) atmosphere. Experimental verification of their hypothesis had to wait until 1952, when Stanley Miller and Harold Urey at the University of Chicago carried out the famous Miller-Urey experiment. The experiment used water, methane, ammonia, and hydrogen, which were sealed in a sterilized loop. After just one week of continuous operation, organic compounds made up about one sixth of the carbon within the system, with 2% forming amino acids. In 2008, a re-analysis of Miller's archived solutions from the original experiments showed that 22 amino acids - rather than 5 - were actually created in one of the apparatus used. Sugars, lipids, and some of the building blocks for nucleic acids (nucleotides) were also formed in the experiment. Other experiments conducted in the second half of the 20th Century also produced biological monomers, notably the formation of adenine from a solution of hydrogen cyanide and ammonia in water by Juan Oró in 1961. This was groundbreaking. It showed that in the right conditions, biological monomers can spontaneously arise from simple molecules present in the early Earth’s atmosphere, confirming the hypothesis of Haldane. Organic molecules could also have come from meteor impacts, as they are found quite often on asteroids. This alternate theory regarding the origin of life is known as Panspermia, and asserts that life on Earth was seeded from space. The theory was first proposed by Benoît de Maillet in 1743. Nucleotides (the building blocks of nucleic acids such as RNA and DNA) are formed of an organic base, a pentose sugar and a phosphate group. Phosphates were present in the early atmosphere, and we have already seen that sugars and nucleotides can spontaneously form under the right conditions. Though the precise mechanism of nucleotide formation is at present unknown, it is likely to be similar to the method living organisms today use to manufacture nucleotides. Other, simpler nucleotides have been found to exist and could have formed nucleic acids that were the precursor to RNA and DNA.


Figure 1: Nucleotides (the building blocks of nucleic acids such as RNA and DNA) are formed of an organic base, a pentose sugar and a phosphate group. The diagram shows the structure of 5 nucleotides, pentose sugar and phosphate groups.



Stage 2: The origin of biological polymers

We now have the monomers necessary for building polymers: amino acids and nucleotides. The next step is to explain how these simple monomers polymerise into long polymers such as proteins, RNA and DNA. Nucleotides first joined together to form shorter chains of polynucleotides. Then, polynucleotides joined together to form longer nucleic acid chains. Researchers in the 1980s found that a clay called montmorillonite, abundant on the early Earth’s sea floor, acted as a catalyst for the formation of polynucleotides and nucleic acid chains such as RNA. Some other nucleotides, such as phosphoramidate DNA, are capable of spontaneous polymerisation in solution, forming new phosphoramidate DNA templates and extending existing templates. Free nucleotides can then base-pair with a single-stranded template and can self-ligate. In other words, the sense strand causes the antisense strand to form spontaneously. Montmorillonite, as well as being a good catalyst for the formation of amino acids from simpler molecules, is also a good catalyst in the formation of polypeptides and longer protein chains.

Stage 3: The evolution from molecules to cells

The transition from molecules to cells is where the theories tend to vary the most. I will discuss a variation of the “Genes/RNA first” model. RNA molecules can self-replicate; this has been demonstrated under laboratory conditions. However, this replication is imperfect, as mutations such as deletions can creep in during replication, which change the RNA molecule. This variation between the molecules means some are better suited to their environment than others, allowing natural selection to take place and evolution to occur. Apart from the obvious example that some molecules could replicate faster than others due to variation, another advantageous mutation would be the ability for RNA molecules to attract lipid molecules.
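This selection dynamic can be sketched with a toy simulation (entirely my own illustration - the replication rates and mutation probability here are invented for demonstration, not taken from any origin-of-life model): in a population of imperfect replicators, a faster-copying variant soon comes to dominate.

```python
import random

def simulate(generations=20, pop_size=100, seed=42):
    """Toy replicator selection: 'fast' molecules make two copies per
    generation, 'slow' molecules one; each copy mutates to the other
    type with 1% probability; the pool is then culled back to pop_size,
    mimicking limited resources. Returns the final fraction of 'fast'."""
    random.seed(seed)
    pop = ["slow"] * (pop_size - 1) + ["fast"]   # one fast mutant appears
    for _ in range(generations):
        offspring = []
        for molecule in pop:
            for _ in range(2 if molecule == "fast" else 1):
                child = molecule
                if random.random() < 0.01:        # imperfect replication
                    child = "fast" if molecule == "slow" else "slow"
                offspring.append(child)
        pop = random.sample(offspring, min(pop_size, len(offspring)))
    return pop.count("fast") / len(pop)

print(simulate())   # the fast variant ends up as the large majority
```

No molecule "tries" to win here; differential copying rates plus imperfect replication are enough for selection to act, which is the article's point.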
As we have seen from the Miller-Urey experiment, lipids were formed spontaneously under prebiotic conditions. Lipid molecules have both a hydrophilic (water-attracting) and hydrophobic (water-repelling) end. As the hydrophobic ends are repelled by water, they clump together to form micelles, with the hydrophilic ends pointing out towards the water. An RNA molecule that attracts the hydrophobic ends of these lipids towards itself would be better protected from physical damage by the external environment than those without, and so would have a better chance of survival and replication. These micelles with RNA inside could be termed primitive cells (see footnote 2) as they have

genetic material which can replicate itself and a lipid membrane to separate the inside of the cell from the outside - two basic requirements of any cell. A better cell membrane would be a vesicle, where both the inside and the outside of the cell contain an aqueous solution. In montmorillonite clay, lipid vesicles have been found to form spontaneously. These lipid vesicles, made from simple fatty acids, are permeable to small molecules (like nucleotides), so any RNA inside could replicate itself by incorporating nucleotides that diffuse into the cell. When a vesicle grows, it adopts a tubular branched shape, which is easily divided by mechanical forces (e.g. rocks, currents, waves), and during this cell division none of the contents are lost. The cell grows by incorporating free lipid molecules into the membrane; this “eating” is driven by thermodynamics. The cell reproduces when the RNA inside is copied and physical forces then break the lipid vesicle into separate vesicles, each with RNA inside. So we now have our basic cell with a lipid membrane and genetic information. However, any chemical reactions inside the cell will be slow, as there are no catalysts inside. Or are there? RNA itself can not only store information like DNA, but can also act as an enzyme (called a ribozyme). Any cell which contained ribozymes would have an advantage over other cells and so would be selected for, as the ribozymes could enhance replication, synthesise lipids and keep them integrated within the cell membrane, all of which would provide an evolutionary advantage over other cells. Ribozymes can also catalyse the formation of peptide bonds, the bonds that hold proteins together. This means that proteins could have been formed, which themselves could have acted as catalysts for reactions. As proteins are more versatile and specific than ribozymes, they are better catalysts, and any cell that could synthesise proteins would also have an evolutionary advantage.
With proteins formed and self-replicating RNA already present, a mutation could cause the RNA in a cell to become dependent on proteins for replication (as proteins would reproduce the RNA more accurately than if the RNA self-replicated) while still giving the cell an evolutionary advantage. Another mutation could have caused the protein that synthesised the RNA base uracil to synthesise thymine instead (the only difference between uracil and thymine is that thymine has a methyl side group on one of its rings in place of a hydrogen atom in uracil, as shown in the nucleotide diagram above). A final mutation could cause the protein to bond the two strands of RNA (old and new) to form DNA. Any cell that could regulate its division would then be selected for, as it could reproduce faster and more precisely.

Voila! We have a self-replicating cell with a lipid cell membrane, proteins and DNA that can synthesise all its constituent parts from simpler molecules by the input of energy. The “Iron-Sulphur world” theory suggests that early life metabolised metal sulphides (such as iron sulphide) for energy, but any energy-producing reaction which used readily available reactants could have occurred. These cells, while much simpler than even today’s simplest living organism (excluding viruses, which are classically non-living), were the beginning of the ancestral line which has diverged and evolved into every living thing we see on Earth today, including you.

Footnotes

1. This evolution follows the quasispecies model, which is a description of the evolution of self-replicating entities (including viruses and molecules such as DNA and RNA) within the framework of physical chemistry. The rate of mutation between generations is higher than in conventional species models of evolution.

2. Primitive cells considered to be the precursors to DNA-based prokaryotic life are called “protobionts”.



Evolution Of The Nervous System
Aadarsh Gautam

Perhaps the most pertinent piece of knowledge required to accurately describe the intricate construction and development of Homo sapiens is an understanding of the evolution of the nervous system, and specifically the spinal cord, as well as the notochord and the primitive axial skeleton around which the vertebrate develops. The sub-phylum Vertebrata constitutes a mere 5% of all animal species, further indicating that vertebrates (hence Homo sapiens) must trace their roots to invertebrates. The evolution of the vertebrate from the invertebrate is a seemingly gargantuan leap, and this evolution has been found to come from the neural crest. It is important to explain now what the neural crest is. Despite the many obstacles to defining it precisely, it is, roughly speaking, a series of cell “clumps” along the margin of the forming spinal cord that give rise to spinal ganglia and other neural cells in the process of embryogenesis (the process by which an embryo is formed and develops) in vertebrates. The induction of the neural crest begins with the interaction between a layer of pluripotent stem cells (which go on to form the sensory/nervous system) and the neural plate (a thick, flat bundle of outer protective cells - the ectoderm). Subsequently, neural crest cells form within the border regions between germ cells and the neural plate, which rise as neural folds, converging to form the dorsal midline of the neural tube, and it is from here that the neural crest cells will emerge in formation to give rise to the eventual spinal cord. These cells migrate into the periphery. During and after this migration process, a series of cell differentiations occurs which form many of the essential “building blocks” of vertebrates, for example bone and dentine.
These derivatives clearly illustrate the importance of the neural crest, as it is responsible for the development of fundamental constituents of vertebrate physiology. Having described the importance of the role of the neural crest, the origins of this feature must be explored by tracing back to invertebrates. The inherent intricacy of the neural crest gives rise to a lack of conclusive development theories. A possible explanation for the rise of the neural crest is that the vertebrate neural crest evolved from primary sensory neurons in prior invertebrates. This hypothesis was more fully developed in the theory that the neural crest evolved from Rohon-Beard cells, a class of primary sensory

neurons that occur in the spinal cord of lower chordates. It was then suggested that glia in the dorsal root ganglia present in the spinal cord also evolved from Rohon-Beard cells that had broken apart from spinal cord sensory neurons and subsequently migrated. Under this scenario, peripheral Rohon-Beard cells would divide to form the sensory neurons and glia of the dorsal root ganglia. This seems to be a specious explanation, as neurons of the central nervous system have not been observed to ‘de-differentiate’ and re-differentiate as would be required in this manner. Yet evidence is present to support this idea: in zebrafish (Danio rerio) embryos, in which both Rohon-Beard cells and neural crest cells are present, the two appear to be part of a group of cell types which originate from a common precursor. Alternatively, the hypothesis could be affirmed by instead theorising that the evolutionary origins of the neural crest and of the sensory neurons and glia of the dorsal root ganglia are to be found in the migration of mitotically active Rohon-Beard progenitor cells (which would then be capable of differentiating into a specific type of cell). The multitude of theories makes it difficult to identify a single holistic explanation of the origins of the neural crest. Perhaps the greatest hindrance in attempting to ascertain neural crest origin from experimental data is that it is almost impossible to experiment upon! Instead, neural crest genetic identification on a molecular level serves as a way to recognise the characteristics of the neural crest, enabling the exploration of its invertebrate origins. This is being done by firstly identifying key molecular features necessary to produce the neural crest in pre-natal development. Two groups of invertebrate chordates, the tunicates and the cephalochordates, were used to test this, as they have for many years now been accepted as the closest living invertebrate relatives of the vertebrates.
In comparing the two data sets, “neural crest genes” are being looked for, i.e. genes underpinning neural crest induction, differentiation and migration, and, in doing so the genetic construct of the neural crest can be identified. Such findings as to the construct of basic fundamental components of vertebrates are invaluable in better understanding the make-up of human physiology.

Once again, however, the complexity of this investigation means there are in reality few, if any, definitive 'neural crest genes' that may be used in isolation as molecular signatures of the neural crest, so attempts to find a genetic origin of the neural crest among invertebrate chordates, characterised by the expression of a 'neural crest gene', have been vague. The data gained did show, however, the role of groups of genes in the organisation of the spinal cord and the central nervous system (or rather, the parts related to the neural crest) that are general to Bilateria as a whole, as opposed to vertebrate-specific. While it appears that neural crest cells with the ability to generate neurons, and eventually parts of the central nervous system, could have evolved prior to the emergence of the vertebrates (i.e. such processes could occur in invertebrates), the production of the other derivatives is a solely vertebrate function. From here there is scope to identify the evolution of vertebrate function from the neural crest. Once again, however, this science is frustrating: its complexity and seemingly unverifiable nature make drawing conclusions extremely difficult. It has been said that "the only thing interesting about vertebrates is the neural crest." Although perhaps an exaggeration, the maxim highlights the importance of the neural crest in understanding vertebrate roots in their invertebrate counterparts. The neural crest's evolution remains unclear, and even its function is still being explored. The origin of the neural crest is a key question, but its later evolution has received little scientific focus and may provide a key insight into what separates vertebrates from invertebrates at the embryological stage. Despite this scientific fog, what is clear is that vertebrate evolution is inseparably linked with the neural crest, a door to great discoveries about the evolutionary origin of Homo sapiens. Despite what our Latin name may suggest, in biology at least we are just beginning to grasp how little knowledge we may actually have.


A36851 HaberAske Scope TEXT AW:A31822 HaberAske Skylark

12/3/10

10:36

Page 34

Scope 2009/10 Biological Sciences

Alzheimer’s: Hope At Last? Raj Dattani Alzheimer’s disease is one of the farthest-reaching and most debilitating diseases of the 21st century. It is estimated that there are over 26 million sufferers worldwide; experts believe this number may increase as much as four-fold by the year 2050. Alzheimer’s is a degenerative and irreversible brain disease. It is most common in those aged 60 and above and causes dementia, defined by the Oxford Medical Dictionary as ‘a chronic or persistent disorder of behaviour and higher intellectual function’. Amyloid plaques (deposits of a sticky protein, amyloid beta peptide) appear in specific brain regions, along with neurofibrillary tangles, abnormally twisted forms of the protein tau.

Biological Processes Currently the most widely used hypothesis is the cholinergic hypothesis. It is based on the observation that in Alzheimer’s sufferers the synthesis of the neurotransmitter acetylcholine decreases. Acetylcholine plays a vital role in memory recall through its effects on synaptic plasticity, which it promotes by increasing synaptic potential. Synaptic plasticity is the strength of the synapse and its ability to change in strength. As a memory is formed (a process known as long-term potentiation), the electrical potential of the cell immediately across the synapse (the synaptic potential) affects synaptic plasticity. Therefore, if levels of acetylcholine are decreased, synaptic potential, and so synaptic plasticity, are reduced. This reduction in the function of the synapse lessens memory recall, in line with Hebbian theory, which proposes that memory is represented in the brain by networks of interconnected synapses: if these function less well, memory recall is decreased. Scientists therefore believe that the cognitive decrease in memory could be due to reduced synthesis of a neurotransmitter crucial to memory recall. The second neurobiological phenomenon evident in those suffering from Alzheimer’s is that the NMDA receptor, a receptor for the neurotransmitter glutamate, is over-stimulated: very high levels of glutamate bind to this receptor. As a result, high levels of calcium ions enter the cell. This calcium influx activates a number of enzymes, including phospholipases, endonucleases, and the protease calpain. These enzymes then damage cell structures such as the cytoskeleton, membrane and DNA, leading to neurone death. This phenomenon is known as excitotoxicity, and it leads to the shrinkage of a sufferer’s brain. Scientists believe that this neurone death contributes significantly to the cognitive decline of Alzheimer’s patients, for the NMDA receptor is located in the synapse: in line with Hebbian theory, if neurones at the synapse die as a result of NMDA over-stimulation, then memory recall decreases.

Figure 1: An overview of the histology, structure and physiology of a synapse (generic).

Biological Solutions The solution to the problem identified by the cholinergic hypothesis is very simple. Patients are given a drug known as a cholinesterase inhibitor; cholinesterase is the enzyme responsible for the breakdown of acetylcholine (ACh). If it is inhibited, the neurotransmitter remains intact, meaning that the cholinergic system at the synapse works much better. These inhibitors are either competitive or noncompetitive. In the former, the inhibitor binds to the same active site as the substrate (ACh), so that ACh cannot bind with the enzyme and is not broken down. In the latter, the inhibitor binds to a secondary site close to the active site; whilst the substrate can still bind to the active site, the presence of the inhibitor causes a change in structure, meaning the enzyme cannot bind fully to the substrate. This slows the rate of catalysis by the enzyme, effectively inhibiting the breakdown of the substrate, acetylcholine. Approved cholinesterase inhibitors are donepezil, rivastigmine, and galantamine (a competitive inhibitor).

Alternative Solutions To combat the problem of excitotoxicity, a drug known as an NMDA receptor antagonist is used. The only antagonist approved for Alzheimer’s is memantine. Memantine works by a method known as uncompetitive channel blocking. For the receptor to work, the ionic channel must be open to allow calcium ions to pass through. Memantine simply blocks this channel by binding to sites within it; this means that calcium ions cannot cause excitotoxicity, and so it solves the problem.
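Not from the article: the competitive/noncompetitive distinction drawn above maps onto standard Michaelis-Menten kinetics, where a competitive inhibitor raises the apparent Km while a noncompetitive one lowers the apparent Vmax. A minimal sketch (all constants are illustrative numbers, not values for any real cholinesterase inhibitor):

```python
# Illustrative Michaelis-Menten rates; vmax, km, ki are made-up constants.
def rate_competitive(s, i, vmax=10.0, km=2.0, ki=1.0):
    """Competitive inhibition: v = Vmax*S / (Km*(1 + I/Ki) + S)."""
    return vmax * s / (km * (1 + i / ki) + s)

def rate_noncompetitive(s, i, vmax=10.0, km=2.0, ki=1.0):
    """Noncompetitive inhibition: v = (Vmax / (1 + I/Ki)) * S / (Km + S)."""
    return (vmax / (1 + i / ki)) * s / (km + s)

# At a very high substrate concentration the competitive inhibitor is
# overwhelmed (rate approaches Vmax), while the noncompetitive inhibitor
# still caps the maximum rate.
print(rate_competitive(1000.0, 5.0))     # close to Vmax
print(rate_noncompetitive(1000.0, 5.0))  # capped well below Vmax
```

This is why a competitive inhibitor can be 'out-competed' at high substrate concentrations, whereas a noncompetitive inhibitor limits the maximum rate regardless of how much acetylcholine is present.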

The tau hypothesis asserts that the major factor is the formation of neurofibrillary tangles inside the nerve cells, causing the neurone’s transport system to disintegrate and eventually leading to cell death. Otherwise, investigating neuroprotective measures could yield a successful solution. The amyloid hypothesis proposes that the deposits of amyloid protein are the fundamental cause of the disease and that their formation is linked to the APP gene on chromosome 21. However, a trial vaccination in humans against this mutant gene, whilst clearing the amyloid plaques, did not show cognitive benefits. Recent research, however, suggests that ‘remodelling of crucial Alzheimer disease-related genes... could provide a new therapeutic strategy.’ In conclusion, research into this problem is ongoing. The different views taken here boil down to which data is given higher validity, and ultimately only further studies will be able to differentiate between them.



Cocaine Dependence: Implications And Treatments Ravi Sanghani Consumption of all psychoactive drugs involves risks. It is common knowledge that cocaine poses much higher risks than prescription medicines and impacts society in a number of ways, justifying its illegality. Severe anxiety and depression manifest from the abuse of cocaine, jeopardising mental health, and usage is a risk factor for cardiovascular diseases. In the first 60 minutes after cocaine use, ‘the risk of myocardial infarction onset was elevated 23.7 times over baseline (95% CI 8.5 to 66.3)’. It is clear that cocaine use produces physiological changes within the user and is a strongly addictive stimulant. Addiction is synonymous with criminal behaviour to seek cocaine. Rather than analyse this aspect, I feel obliged to comment on the environmental effects of the cocaine trade. This angle is regularly overlooked in the media, but an understanding of the environmental (as well as social) effects of cocaine adds more weight to the arguments for endeavouring to eliminate cocaine dependency. The chemicals used in the extraction of the cocaine alkaloid from the leaf are kerosene, sulphuric acid and ammonia, which are released directly into the heart of the rainforest. With no means of treating these chemicals and no removal system, pollution is abundant. Furthermore, because extraction requires a nearby water source, aquatic wildlife is threatened. The effect is toxicity from nitrates and death for many creatures, as well as jeopardising healthy offspring. At low levels (<0.1 mg/litre NH3) ammonia acts as a strong irritant, especially to fish gills, causing hyperplasia. This is when the lamellar epithelium thickens, increasing the number of cells in the gills. Consequently, water flow is restricted, allowing parasites to accumulate. Continuous extraction produces higher ammonia levels (>0.1 mg/litre NH3), which exacerbate hyperplasia and cause irreversible organ damage.
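The relative-risk figure quoted above for myocardial infarction can be sanity-checked: confidence intervals for ratio estimates are computed on a log scale, so the point estimate should sit near the geometric mean of the CI bounds. A quick check (standard epidemiological arithmetic, not from the article):

```python
import math

rr, lo, hi = 23.7, 8.5, 66.3     # figures quoted in the text

# On the log scale a 95% CI is symmetric about the point estimate,
# so the estimate should be roughly the geometric mean of the bounds.
geometric_mean = math.exp((math.log(lo) + math.log(hi)) / 2)

# Recover the standard error of log(RR) from the CI half-width,
# and express how far the estimate lies from the null value RR = 1.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(rr) / se

print(round(geometric_mean, 1), round(z, 1))
```

The geometric mean comes out at about 23.7, matching the reported estimate, and the z-score of roughly 6 confirms the elevation over baseline is far outside chance.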
Liver necrosis or abnormal cell growth (tumour) are typical

reactions. This is a significant issue for local populations who fish in these waters, and is only a small part of the vast damage to species richness. Rising Cocaine Use In Western countries, which primarily report the abuse of cocaine, the issues are being treated on several fronts. Drugs education programmes with particular reach to adolescents, international law and the alleviation of poverty are all used as means of reducing cocaine use. However, the real problem is the lack of medical treatments available for those already dependent on cocaine. Presently, drugs such as beta-blockers and ACE inhibitors can help to decrease hypertension, protecting the heart; “yet there remains a genuine lack of pharmacological drug treatments available to treat cocaine abuse or prevent its relapses’’. The problem of addiction stems from neurochemical events in the brain. When cocaine enters the brain it disrupts the process of neurotransmitter reuptake by forming a chemical complex with the dopamine transporter (DAT). The resulting flood of neurotransmitter, prolonged in the synapse before metabolism, produces intense euphoria and can lead to addictive habits. Whilst short-term use temporarily raises synaptic DA levels, producing the ‘high’, DA receptors are depleted (‘down-regulated’) by chronic use. Recent studies confirm that neurobiological changes from prolonged cocaine exposure can have knock-on effects on whole other brain systems. There is a need for encompassing solutions to account for this, and an ethical call for the provision of as much professional help as possible. Medical treatments? One approach uses a group of molecules collectively known as kappa (k) agonists. Just as DA has specialised dopamine receptors, the k opioid receptor is involved in regulating synaptic DA levels. One type of opioid receptor, known as the kappa (k) receptor, has, in recent medical studies,

been shown to inhibit dopamine release from mesolimbic neurons and attenuate the rewarding effects of cocaine. This is a significant step towards lessening cocaine cravings and cutting down on abuse. Studies with lab rats have shown that k receptors are involved in the antagonism of drug-seeking behaviour, with k agonists “altering levels of the dopamine transporter; decreasing cocaine-induced dopamine levels and blocking cocaine-induced place preference”. Enrichment of the DA system is the major direction for controlling addiction and building a situation from which to eliminate relapse. K agonists lead to decreased self-administration of cocaine, which provides a large window of opportunity for the addict to safely wean themselves off cocaine and dissociate from psychological cravings. Administration to rats of the kappa agonist U69,593 (0.32 mg/kg, subcutaneous) for 5 days decreased DAT and D2 receptor densities, which remained after the 3-day period, hence the decreased DA uptake. These findings tell us that when addicts administer cocaine in short successive intervals alongside daily U69,593, the effects are quickly attenuated, maintaining this blockage without damaging the DA system. This solution can overpower the most addictive properties of cocaine at the molecular level. Brain scans of ‘addicted’ users have shown lowered levels of prolactin, a hormone secreted by the pituitary gland that is involved in learning and axon myelination. Activation of k receptors triggers a release of prolactin, which may be beneficial to neuronal repair, since myelination is fundamental to efficient impulse conduction. Nalmefene displays a strong ability to release prolactin, visibly above placebo level, indicating effectiveness amongst a controlled group.
This drug is already being investigated for alcohol dependence, “With a number of advantages…, including no dose-dependent association with toxic effects to the liver, greater oral bioavailability and more competitive binding with kappa opioid

(cont...)

Table 1: Relevant neuro-chemicals involved in synaptic processes

Name of Neurotransmitter    Abbreviation    Respective Transporter Protein    Transporter Abbreviation
Dopamine                    DA              Dopamine transporter              DAT
Serotonin                   5-HT            Serotonin transporter             SERT
Norepinephrine              NE              Norepinephrine transporter        NET



receptors.” Doses of 20-25 mg are best supported, with little difference in prolactin release at the 80 mg+ range, which makes economic sense to manufacturers. Kappa agonists do not have addictive properties, making them safe to be taken over long periods; methadone, for example, often merely substitutes for the heroin addiction in a minority of patients. Research must reveal whether tolerance to k agonists builds rapidly, which would mean abandoning this treatment. K agonists must be safe for treating pregnant addicts, and must cause neither birth defects nor cancer. In monkeys, k agonists have also produced side effects such as sedation and emesis. A mixture of k agonists is speculated to lessen side effects, though this needs to be explored and cost-effective ratios determined. GVG Another medical solution is the anti-epileptic drug GVG. An inhibitory neurotransmitter called GABA reduces the amount of DA in the forebrain, which is heavily involved in addiction. This bears similarity to the k agonists: a direct approach of reducing the extreme releases of DA caused by cocaine while improving the expression of non-drug-taking behaviour. The BNL research team found that gamma-vinyl-GABA (GVG) increases the amount of GABA available to inhibit dopamine and “enables better communication among brain cells”. Overall, a quicker return to high cognitive performance and memory could be derived from enhanced communication, not unlike the benefits from increased prolactin levels. It is unambiguous that GVG reduces cocaine-induced DA release: with treatment, dopamine levels increase to no more than twice normal levels. The reward response is blocked and fails to pleasure the addict. Experimental Evidence? In a double-blind, placebo-controlled study, 20 subjects who wished to break their addiction were given GVG daily. Eight managed to reach 28 days of abstinence, and 4 patients continued use, albeit in smaller amounts.
Early reports discussed visual disturbances caused by GVG, though anecdotal evidence from this study found no problems. The eight “showed profound behavioural gains in self-esteem, family relationships, and work activities”, conveying psychological control over dependence. Studies conclude that GVG has a very reliable safety profile, having been FDA-approved after 20 years of clinical trials with lab rats and humans. Like k agonists, GVG displays the ability to rebalance dopamine levels by ironing out surges and to reduce “relapse resulting from addiction-induced cues”. This is an immense step towards defeating the neurochemical basis of addiction, and it enables the addict to live in familiar surroundings without falling into relapse because of external signals. General information suggests that the anti-addictive effects persist longer with low doses of 0-150 mg/kg per day than with a single large dose of 150-450 mg/kg GVG.

Both solutions, if approved, would be breakthrough medical treatments for cocaine abuse, cutting relapse and radically improving the lives of addicts. Promising results from GVG trials indicate it may become a treatment for addiction to a range of other drugs, such as nicotine and amphetamine. As I have previously mentioned, I uphold the view that research institutions investigating these compounds must remain committed to their cause, to ensure that a helping hand can be offered to addicts.




The Role Of Pharmacogenetics In Modern Medicine Casey Swerner Individuals respond differently to drugs, and sometimes the effects are unpredictable at best, life-threatening at worst. Differences in DNA that alter the expression of the proteins targeted by drugs can contribute significantly to an individual’s drug response. One of the most thoroughly investigated stories is that of mercaptopurine, used as a chemotherapeutic drug for leukaemia. Acute Lymphoblastic Leukaemia (ALL) is a malignant disorder in which blast cells constitute >20% of bone marrow cells: a cancer whereby white lymphocyte cells rapidly divide without maturing, which leaves patients open to infection. Furthermore, the cancer can become metastatic, spreading to other parts of the body. Whilst only 700 people are diagnosed each year in the UK, it is the most common form of leukaemia in children. Mercaptopurine (6-MP) works by competing with the purine base guanine for the enzyme HGPRTase, inhibiting the synthesis of purines (two of the four bases in DNA) and being metabolised into DNA. This makes the new DNA ineffective and leads to cell death. It is not known exactly how 6-MP causes cell death, but its cytotoxic properties are very important. 1 step forward, 2 steps back? Any potential medicine must be tested very thoroughly, as it may be very successful at combating a disease but so toxic that more damage is done than good. Pharmacologists address this issue by aiming to determine the efficacy of a drug relative to its toxicity (how much damage it can cause to a patient). This is especially important for chemotherapeutic drugs because, for the most part, they aim to destroy cells, and thus, if not targeting solely cancer cells, serious adverse drug reactions (ADRs) can occur.
In some patients the side effects may not be serious, but in others side effects such as myelosuppression (reduced bone marrow activity) may occur, making patients’ susceptibility to death from infection or metastatic cancer even greater and defeating the point of treatment in the first place. Complete cure rates of 80% in children and 40% in adults highlight the efficacy of 6-MP; however, the multiple ADRs are a serious concern. Genetics holds the key? One way in which doctors are beginning to test patients who receive 6-MP is by comparing the genetic make-up of those who have adverse reactions and those who do not. This study of the relationship between drug efficacy and genetic variation is known as pharmacogenetics. To understand it, one first has to consider how 6-MP works. As a prodrug, it undergoes reactions in the body before carrying out its function: the enzyme TPMT (thiopurine S-methyltransferase) breaks thiopurines (of which 6-MP is one) down into substances that are not cytotoxic. Enzymes are coded for by DNA, and thus one has to consider whether variations in the DNA sequence of the TPMT gene on chromosome 6 affect TPMT output. Mutations can be tested for by a method known as High-Resolution Melting analysis (HRM). Most commonly a single variation, called a Single Nucleotide Polymorphism (SNP), is tested for. A DNA sample is amplified by the Polymerase Chain Reaction, which replicates the region of DNA required. During HRM the region is then heated from 50°C to 95°C, at which point the two DNA strands separate. A fluorescent dye that fluoresces brightly only when bound to the double helix is added, and the fluorescence is measured as it falls with DNA strand separation. If the tested DNA has a SNP, the fluorescence/temperature curve is different, owing to the different bases present. The success of the method depends on having records of a non-mutated DNA strand, and on highly sensitive probes to record the level of fluorescence as temperature increases.
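The melt-curve comparison at the heart of HRM can be sketched computationally. Everything below is a toy model (the sigmoid curve shape, the Tm values and the 0.5 °C tolerance are invented for illustration), but it shows the logic: estimate each curve's melting temperature from the steepest fluorescence drop, and flag a sample whose Tm deviates from the wild-type reference:

```python
import math

def melt_curve(tm, temps, width=1.5):
    """Fraction of DNA still double-stranded (proportional to dye fluorescence)."""
    return [1.0 / (1.0 + math.exp((t - tm) / width)) for t in temps]

def estimate_tm(temps, fluor):
    """Tm = temperature of the steepest fluorescence drop (peak of -dF/dT)."""
    drops = [(fluor[i] - fluor[i + 1], i) for i in range(len(fluor) - 1)]
    _, i = max(drops)
    return (temps[i] + temps[i + 1]) / 2

temps = [50 + 0.5 * k for k in range(91)]     # 50°C to 95°C in 0.5°C steps
wild_type = melt_curve(tm=78.0, temps=temps)  # reference curve
sample    = melt_curve(tm=76.5, temps=temps)  # SNP destabilises the duplex

shift = abs(estimate_tm(temps, sample) - estimate_tm(temps, wild_type))
print("possible SNP" if shift > 0.5 else "matches wild type")
```

Real instruments normalise the curves and compare whole-curve differences rather than a single Tm, but the comparison-to-a-known-wild-type principle is the same.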

Figure 1: Generic melt curves for polymorphism(s) and the wild-type allele, allowing scientists to observe whether a mutation is present (by comparison with known wild-type curves).

To ensure that TPMT levels are strongly linked to adverse reactions, numerous studies have examined levels of TPMT and patient reactions to 6-MP. For example, the study by Reiling et al. shows that the frequency of ADRs in patients with leukaemia being treated with 6-MP was 100% for TPMT-deficient patients and only 7.8% for patients with fully functioning TPMT alleles. Although this study contained 180 patients and lasted 2.5 years, the fact that only 2 TPMT-deficient patients were tested means it cannot show wholly valid results on the link between genetics and ADRs: the sample size is too small. This criticism has been echoed by Van Aken et al., who state that ‘‘pharmacogenetic testing can help in avoiding some, but by far not all adverse effects of drug therapy’’. However, a study by Jones et al. showed that TPMT variants cause different levels of metabolites (the active part of 6-MP), meaning that patients with a mutant allele had a 6x higher concentration of the active drug present, potentially causing fatal side effects. Furthermore, a study published in the Journal of Clinical Oncology corroborated the finding that the tolerable dosage for any TPMT-deficient patient had to be much lower to avoid toxicity. Its finding that ‘‘the majority of these (TPMT deficient) patients (21 of 23, 91.3%) presented with hematopoietic toxicity’’ also adds weight to the notion that genetic factors cause a predisposition to severe ADRs. In a test of ALL patients it was found that ‘‘therapy with 6% dosage yielded...concentrations (6-MP) not associated with prohibitively toxic effects’’ and that when ‘‘dosage was adjusted, these patients were generally able to tolerate full doses of their other chemotherapy.’’ It is also important to note that this study found ‘‘100% concordance between TPMT genotype and phenotype in the TPMT-deficient patients’’. Genome based testing While pharmacogenetic testing may seem like a perfect solution to reducing adverse reactions, there are alternative views on the issue. Some feel that ‘‘the science has not advanced much beyond a fishing expedition...to identify important combinations of genetic determinants of drug response’’ and there is a feeling that one gene, one drug is



not relevant for the majority of therapies. The term 'pharmacogenomics', which describes a polygenic or genome-wide approach to identifying genetic determinants of drug response, is cited as the next step for genetic drug-response testing. Once a gene is linked to a drug response, large-scale epidemiological studies and animal models of the candidate gene polymorphisms can be used to further establish genetic variability as a factor in drug response. Ultimately this could fuel the pharmaceutical industry to specialise in genome-based medicines, allowing for a greater diversity of drugs. This is shown in the case of HER2+ breast cancer, where looking at the genetics of the cancerous tissue enabled a new drug, Herceptin, to be created; previously rejected, it today treats 7% of all breast cancers. The products of over-expressed genes in cancer cells represent plausible targets for inhibitors that could also reverse a drug-resistance phenotype. Why is pharmacogenomics so rarely used in clinical practice? The range of mechanisms that account for genetic polymorphisms (SNPs, insertions/deletions, splice variants) means that results for even a single gene are very difficult to obtain. Ultimately, pharmacogenetic testing must overcome the major hurdle of cataloguing gene-drug relationships.

Figure 2: The required dosage reduction of 6-MP for non-toxic results, for wild type (control), heterozygous mutant, and homozygous mutant.

One approach is to obtain genomic DNA from patients entered on Phase III clinical trials, and then to determine genetic polymorphisms that may predispose a small subset to severe toxicities. These trials should incorporate rigorous pharmacogenomic studies, coupled with retrospective animal models that reinforce genotype-phenotype clinical relationships. The idea is that those who present with severe toxicity could be identified in advance based on genotype. This would build up a databank of drug-gene relationships, and could also save an efficacious drug that would previously have been rejected owing to adverse reactions. The main risk of pharmacogenetic testing is not the testing itself, but the consequences of gene-profiling populations. The pharmaceutical industry may become heavily driven towards providing drugs that satisfy the majority of patients, leaving those who would have an adverse reaction to conventional medicine either alienated, or having to pay premium prices for drugs that are less lucrative for companies to produce. Pharmacogenetic testing may reduce pharmaceutical costs from ‘‘$500 million to $200 million’’, but as the report from the Nuffield Council on Bioethics states, ‘‘developers of new medicines might seek instead to maximize the number of patients who would benefit from a medicine by using pharmacogenetic information to identify medicines most suited to large groups of patients’’. Pharmacogenomics has the

potential to translate some of our knowledge of human genome variability into better therapy. Whether this is a realistic outcome in the near future remains to be determined.

Definitions
Wild Type – the most commonly occurring (non-mutant) allele pair.
Heterozygous – the person has one mutant allele and one (non-mutant) wild-type allele.
Homozygous Mutant – the person has two copies of the mutant allele.
Pharmacogenetics – the study or clinical testing of genetic variation that gives rise to differing responses to drugs in patients.
Adverse Drug Reaction (ADR) – an unexpected or dangerous reaction to a drug. The study of an unwanted effect caused by the administration of a drug is called pharmacovigilance.



Cancer Therapy: Treatment And Treatment Realities Bhavesh Gopal 1 in 9 women develop breast cancer during their lifetime, and among these women just under twelve thousand die every year. The NHS attributes this low mortality rate to innovative new treatments and to its breast screening programme, costing over seventy million pounds annually. Breast screening is a method of detecting breast cancer at an early stage. The first stage involves an x-ray of each breast (a mammogram), which can detect small changes in breast tissue that could not otherwise be detected by touch. Yet women may be undergoing gruelling cancer treatment needlessly, because as many as one in three cases of breast cancer detected by screening prove harmless. The British Medical Journal reports that twenty-three leading health specialists criticised the Government’s “unethical” failure to provide women with the full facts about the NHS screening programme. They said that many women are diagnosed with breast cancer even if they have benign tumours, and may undergo unnecessary surgery, radiotherapy or chemotherapy as a result. “As it is not possible to distinguish between lethal and harmless cancers, all detected cancers are treated. Over-diagnosis and over-treatment are therefore inevitable.” The stage of breast cancer can be determined by the TNM system:

− T describes the size of the tumour

− N describes whether the cancer has spread to the lymph nodes

− M describes whether the cancer has spread to another part of the body, such as the bone, liver or the lungs – this is called metastatic.

The problem with breast cancer is its ability to carry out metastasis (spread to other parts of the body), which has made it difficult to treat, and so many women have died from the disease. To begin the process of metastasis, a malignant cell must first break away from the cancerous tumour. In normal tissue, cells adhere both to one another and to a mesh of protein (the extra-cellular matrix) filling the space between them. For a malignant cell to separate, it must break away from the extra-cellular matrix and from the cells around it. Cells are held together by cell-to-cell adhesion molecules, and this adhesion also allows interactions between numerous proteins on the cell surface. In cancer cells, the adhesion molecules seem to be missing. Cadherins, a family of intercellular adhesion protein molecules, play a big part in keeping cells together. One subtype in this family, E-cadherin, is the adhesion molecule found in all cells. This molecule seems to be

the important factor in cell-cell adhesion. In cancer cells E-cadherin is missing, allowing cancer cells to detach from each other. One study has shown that blocking E-cadherin in cancer cells turns them from non-invasive to invasive. This work established the importance of cell adhesion: these studies revealed its ability to inhibit a cancer cell’s capacity to invade, by keeping it bound to other cells. Tumours are capable of creating new blood vessels (angiogenesis) because of their need for nutrition, and this gives cancer cells ample opportunity for transport. Entry to a blood vessel requires penetration of the basement membrane (a thin layer of specialised extra-cellular matrix); a new location is then found where the cancer cells can grow. The thermodynamic model of cell interactions Malcolm Steinberg proposed the differential adhesion hypothesis, a model that explains patterns of cell sorting on thermodynamic principles. Steinberg proposed that cells interact so as to form an aggregate with the smallest interfacial free energy; cells rearrange themselves into the most thermodynamically stable pattern. If cell types A and B have different strengths of adhesion, and the strength of A-A connections is greater than the strength of A-B or B-B connections, sorting will occur, with the A cells becoming central. After a mutation has occurred in the nucleus, one can propose that the cell adhesions become weak, and so cells can break away and metastasise. Chemotherapy Cancer treatments have not yet fully pervaded the NHS in the UK. Chemotherapy is the most relevant cancer treatment when cancer cells have metastasised. Cytotoxic drugs are given to patients via the bloodstream, since cancer cells may have metastasised; they are therefore administered via an intravenous drip. One chemotherapy drug, used for testicular cancer treatment, is cisplatin (a platinum compound).
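Steinberg's sorting argument above can be made concrete with a toy calculation. In the sketch below (all adhesion strengths are invented illustrative numbers), each adhesive contact of strength w contributes -w to the total interfacial free energy, and a 'sorted' arrangement with the more cohesive A cells central has lower energy than a mixed checkerboard:

```python
from itertools import product

# Invented adhesion strengths: A-A cohesion strongest, A-B cross-adhesion weakest.
W = {frozenset('A'): 3.0,
     frozenset('AB'): 1.0,
     frozenset('B'): 2.0}

def energy(labels):
    """Total interfacial free energy of a 4x4 grid of cells: each adhesive
    contact of strength w contributes -w, so cohesive packing lowers energy."""
    e = 0.0
    for r, c in product(range(4), range(4)):
        for dr, dc in ((0, 1), (1, 0)):            # right and down neighbours
            rr, cc = r + dr, c + dc
            if rr < 4 and cc < 4:
                e -= W[frozenset({labels[r][c], labels[rr][cc]})]
    return e

sorted_cfg = [['B', 'B', 'B', 'B'],                # A cells sorted to the centre
              ['B', 'A', 'A', 'B'],
              ['B', 'A', 'A', 'B'],
              ['B', 'B', 'B', 'B']]
mixed_cfg  = [['A', 'B', 'A', 'B'],                # fully mixed checkerboard
              ['B', 'A', 'B', 'A'],
              ['A', 'B', 'A', 'B'],
              ['B', 'A', 'B', 'A']]

print(energy(sorted_cfg), energy(mixed_cfg))       # sorted is lower (more stable)
```

The sorted configuration replaces weak A-B contacts with strong A-A and B-B ones, which is exactly why, on Steinberg's argument, weakened adhesion in a mutated cell lets it escape the aggregate.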
Patients receive cisplatin in a saline solution so that it can enter the bloodstream, where cells take it up actively or by simple diffusion. Hydrolysis then replaces one of cisplatin's chloride ligands with a water molecule, making the complex positively charged. Cisplatin attaches to the seventh nitrogen atom (N7) of adenine and guanine bases, causing the formation of cisplatin-DNA adducts (in which two molecules react by addition, forming two chemical bonds). 1,2-intrastrand adducts form when both chloride ligands of cisplatin are displaced by bonds to the N7 atoms of adjacent bases (below). These adducts cause the bases to become de-stacked and the DNA helix to become kinked.

The 1,2-intrastrand adducts affect replication and transcription of DNA, and the cell's ability to repair it, leading to an abnormality in the double helix. This is detected and triggers apoptosis (programmed cell death) in cancerous cells, halting metastasis. Chemotherapy is only effective if the cancer-to-normal cell death ratio is high: if the decline in normal cells is greater than expected, the treatment is damaging more normal cells than intended, and more side effects will occur. If a tumour has been removed by surgery, chemotherapy can decrease the risk of the breast cancer returning; this is known as adjuvant chemotherapy. Neo-adjuvant therapy is effective in shrinking a large tumour so that it can be removed more easily during surgery. The use of chemotherapy in these ways has made breast cancer easier to treat.
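The cancer-to-normal cell death ratio mentioned above can be made concrete with a small sketch; the cell counts here are invented purely for illustration.

```python
# Hypothetical cell-kill counts for one course of a cytotoxic drug.
# A high cancer-to-normal ratio means the treatment is selective;
# a low one predicts heavier side effects.
def death_ratio(cancer_killed, normal_killed):
    return cancer_killed / normal_killed

selective = death_ratio(9000, 300)     # 30.0: mostly cancer cells die
unselective = death_ratio(9000, 4500)  # 2.0: many normal cells die too
print(selective, unselective)
```

In practice the ratio is inferred from side-effect severity rather than counted directly, which is why an unexpectedly steep decline in normal cells is the warning sign the article describes.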

Figure 1 The chemical structure of cisplatin.

Herceptin and 21st Century Therapy

The HER receptors are proteins embedded in the cell membrane that communicate molecular signals from outside the cell to the inside. The HER proteins regulate cell growth, survival, adhesion, migration and differentiation, functions that are usually amplified in cancer cells. In breast cancer the HER2 receptor is defective and is continually 'on', causing breast cells to reproduce uncontrollably and enabling metastasis. Antibodies are molecules of the immune system that bind selectively to particular proteins. Herceptin is an antibody that binds selectively to the HER2 protein; when it binds to defective HER2, the receptor no longer causes cells in the breast to reproduce uncontrollably. HER2 passes through the cell membrane and sends signals from outside the cell to the inside. Signalling compounds called mitogens arrive at the cell membrane and bind to the outside part of HER2; HER2 is activated and sends a signal to the inside of the cell, where it passes through different biochemical pathways. In breast cancer, HER2 sends signals without being stimulated by mitogens first, and these signals promote invasion, survival and



Scope 2009/10 Biological Sciences

growth of blood vessels. When cells divide normally, they go through the mitotic cycle, with checkpoint proteins that keep cell division under control. In breast cancer, the proteins (CDKs) that control this cycle are inhibited by other proteins. One such protein is the inhibitor p27Kip1, which moves to the nucleus to keep the cycle under control. In cells with defective HER2, p27Kip1 does not move to the nucleus but accumulates in the cytoplasm instead; this is caused by phosphorylation by Akt (a family of proteins that play a role in cell signalling). Cells treated with Herceptin undergo arrest during the G1 phase of the cell cycle, so there is a reduction in proliferation: p27Kip1 is then not phosphorylated and is able to enter the nucleus and inhibit CDK2 activity, causing cell cycle arrest. Herceptin also suppresses angiogenesis by inducing anti-angiogenic factors.

Figure 2 Herceptin's main mechanism of action on cancerous cells.

Targeting Cancer: Future Therapies?

Non-competitive inhibition is a type of enzyme inhibition that reduces the maximum rate of a chemical reaction without changing the apparent binding affinity of the catalyst for the substrate. The inhibitor binds to the enzyme at a site other than the active site; its presence causes a change in the structure and shape of the enzyme, which ultimately means the enzyme is no longer able to bind its substrate correctly. Normal cells typically show low levels of folate receptors, whereas cancer cells have evolved a mechanism to capture folate more effectively. One idea is based on using the amino acid oxidases and proteases that snake venom uses for digestion; ATPases, which break down ATP, could also be used to disrupt the cell's energy supply. A folate molecule could be used as an inhibitor of any one of these proteins. With their function inhibited, they can be injected into the bloodstream, where they are taken up by cancer cells. When the folate molecule binds to the folate receptor, the bond between the folate molecule and the receptor is stronger than that between the folate molecule and the protein, so the folate molecule is released from the protein. This frees the protein to attack the cell membrane and disrupt the microfibrils needed in the cell cycle, preventing growth. However, this is a heuristic view of potential treatments, and much further investigation is needed.

NHS Reality: The Cost of Living

Health economists from the University of Sheffield say Herceptin costs about £20,000 per woman for a year's treatment at the standard dose. Findings published in The Lancet show that Herceptin improves survival after three years by 2.7 per cent, described as an "extremely uncommon" result after such a short time by Professor Ian Smith. The Sheffield University team who advised NICE on its original decision to approve Herceptin say they are now having second thoughts. Also writing in The Lancet, a study in Finland has shown that giving a fifth of the standard Herceptin dose over a shorter time (nine weeks in this case) has equally good results in reducing the recurrence of cancer and deaths in the treated women. If confirmed, the finding suggests women could be treated with Herceptin at a cost of two thousand pounds instead of twenty thousand, reducing pressure on NHS budgets and allowing more women to be treated. The Norfolk and Norwich Hospital has found that the cost of Herceptin would be 1.9 million pounds every year for seventy-five patients; this rises to 2.3 million pounds once pathology testing, cardiac monitoring, pharmacy preparation and drug administration costs are included. As NICE guidance over drugs provides no extra funding or suggestions of which services to cut, medical professionals ultimately have to make difficult decisions. Herceptin as an adjuvant therapy costs four times as much as other adjuvant therapies, so some patients are given greater priority than others. The NHS has operated a 'postcode lottery', and some areas will not have certain drugs available. The National Institute for Health and Clinical Excellence only issues "guidance" to the 152 primary care trusts (PCTs) in England and Wales, which are responsible for spending more than 80% of the total NHS budget. Therefore, if one lived in Birmingham the vast array of cytotoxic cocktails would be available, whereas if one lived in Oxfordshire the choice would be limited. It is possible, but difficult, to challenge a PCT decision on drug funding: the patient has to demonstrate that he or she is an "exceptional case", but the criteria for "exceptionality" vary from PCT to PCT.

Some distress has been suffered by critically ill patients who have had their NHS funding withdrawn because they chose to supplement their treatment by paying for “top-up” drugs not available on the NHS. Ultimately, it is these economic realities which often have the final say despite major therapeutic improvements.
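The cost figures quoted in this section can be cross-checked with simple arithmetic; all inputs below are the article's own numbers.

```python
# Standard-dose Herceptin versus the reduced Finnish regimen.
standard_cost = 20_000  # pounds per woman per year (Sheffield estimate)
reduced_cost = 2_000    # pounds for the nine-week, fifth-dose regimen

print(standard_cost // reduced_cost)  # 10 women treated for the price of one

# Norfolk and Norwich Hospital figures, converted to per-patient costs.
drug_bill = 1_900_000   # pounds per year for 75 patients, drug alone
full_bill = 2_300_000   # pounds including testing, monitoring, administration
print(round(drug_bill / 75))  # ~25,333 pounds per patient (drug only)
print(round(full_bill / 75))  # ~30,667 pounds per patient (all costs)
```

The gap between the per-patient drug cost and the full per-patient cost is one reason headline drug prices understate the pressure on PCT budgets.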



The Application Of Nanotechnology In Medicine

Karthigan Mayooranathan

Nanotechnology is technology that involves the manipulation of matter on an atomic scale, with materials measured in nanometres. A nanometre, 10⁻⁹ m, is so small (the width of a human hair is around 100,000 nanometres) that not even electron microscopes can resolve such structures; nanotechnology relies instead on powerful atomic force microscopes. The advancement of nanotechnology has led to the formation of a subspeciality called nanomedicine. Although nanomedicine may appear to be still on the drawing board, products have been made, but they need to clear the essential hurdles of clinical trials. How, then, can nanomedicine contribute to and improve the medical field? Most drugs prescribed to a patient have to be ingested orally; the drug will reach the target cells and tissues, but as a consequence has to travel around the whole body via the systemic circulation, giving it a lower bioavailability. Nanomedicine can increase the bioavailability of a drug, which means that administered doses, and hence side effects, will be lower, thereby improving treatment of medical conditions. One way of delivering drugs involves carbon nanotubes. A team of UK researchers found not only that the cell membrane could be penetrated by the nanotubes, but that they could enter the cytoplasm without causing damage or death to the cell. Other methods of drug delivery do not deliver the drug into the cell; liposomes, for example, carry the drug to the cell but do not enter its cytoplasm, so the drug still has hurdles to overcome in order to enter the cell. The properties of nanomaterials also make them a much more effective way of treating cancer. Carbon nanotubules can be placed into cancerous cells and cause the destruction of these cells when they are exposed to infrared light.
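To make these length scales concrete: the hair-width figure is from the paragraph above, and the 5 nm dendrimer diameter is quoted later in this article.

```python
# Length comparisons at the nanoscale (figures from the article).
hair_width_nm = 100_000  # width of a human hair, in nanometres
dendrimer_nm = 5         # diameter of a dendrimer

print(hair_width_nm / 1_000_000)      # 0.1 mm: a hair is a tenth of a millimetre
print(hair_width_nm // dendrimer_nm)  # 20000 dendrimers laid across one hair
```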
Infrared light usually passes through the body harmlessly, but when carbon nanotubules in solution are exposed to an infrared-emitting laser, the tubules heat up to around 70 °C in two minutes. If thousands of these nanotubules enter cancerous cells, the cells are effectively incinerated. To ensure that the nanotubules only enter the cancerous cells,

they must be covered in folate molecules, as the surface of a cancer cell has a very high number of receptors for folate whereas healthy normal cells have comparatively few. This ensures that the nanotubules are much less likely to attach to healthy cells, as they are more likely to enter the cancerous cells. Folic acid is needed by all cells, but in particularly high quantities by cancer cells, as it is essential to cells that rapidly divide and multiply. This process efficiently destroys cancer cells while significantly minimising the destruction of normal cells and tissues surrounding the site of a tumour. The method of using nanotubules and infrared light could also solve the problem of destroying cancers that are resistant to chemotherapy. However, further tests and research are needed before this method can be implemented. Another potential way of treating cancer, again taking advantage of the unusually high number of folate receptors, is to use dendrimers. Dendrimers are man-made molecules with a diameter of 5 nanometres. Folic acid can be attached to hooks on dendrimers to carry anti-cancer drugs such as methotrexate. This "Trojan horse" method allows chemotherapeutic drugs to be smuggled into cancerous cells. Folic acid is essential in DNA replication, as it is used in the synthesis of DNA bases; cancer cells therefore have a very high number of receptors, having evolved a mechanism to acquire folate more effectively than even rapidly dividing normal cells. When the folic acid attaches to the receptors (pictured right), the cancerous cell internalises the dendrimer, unknowingly taking in the anti-cancer drugs. This allows the cancerous cells to be destroyed effectively by the drug, as the drug crosses the cancer cell's membrane with ease. This method of using dendrimers reduces the dose needed for cancer drugs.
The lower doses prescribed mean that less damage is done to the surrounding normal cells and tissues, thereby reducing the side effects that could be caused by a drug. Dendrimers are also advantageous because they are so small that they do not trigger a response from the immune system. Dendrimers also provide a solution to the resistance of cancer cells to chemotherapy drugs: the drugs do not need to diffuse across the cell membrane but are instead internalised by the cancer cells thanks to the folic acid the dendrimers carry. Nanotechnology can also be used in the early diagnosis of certain diseases. For example,

nanocantilevers can be created specifically to bind to molecules associated with cancer and cancerous cells, such as altered DNA sequences or certain proteins linked to a particular type of cancer. If the cantilever successfully binds with molecules specific to certain types of cancer, the surface tension of the cantilever changes, causing it to bend. This allows cancer to be diagnosed successfully at a much earlier, less advanced stage. Finally, nanoparticles can also aid in the medical imaging of diseases. Quantum dots of cadmium selenide (or other semiconductor compounds) are crystals that fluoresce in many different colours when exposed to ultraviolet light; the colour emitted depends on the size of the crystal. These quantum dots can be engineered to bind to DNA or molecules unique to particular diseases. When the crystals fluoresce under ultraviolet light, the DNA sequence is made visible, and the diseased cells are detected and their location known. Cadmium selenide quantum dots can also be useful in surgical procedures: crystals engineered to bind to tumour cells fluoresce under ultraviolet light, enabling surgeons to locate, and hence remove, a tumour more accurately. Gold nanoshells can be engineered to bind with the receptors on certain types of cells. This means the nanoshells can be used as effective biomarkers, as gold is very unreactive and therefore will not disturb biological reactions in the human body. Like quantum dots, gold particles fluoresce when exposed to ultraviolet light, and changing the thickness of the gold shell changes the colour emitted. Both quantum dots and nanoshells provide a much better quality of in vivo imaging, potentially removing the need for a biopsy.
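The nanocantilever read-out described above can be reduced to a threshold model: each binding event adds surface stress, and enough accumulated stress bends the cantilever into its "detected" state. The stress value and threshold below are illustrative assumptions, not real device parameters.

```python
# Minimal sketch of a binding-driven cantilever sensor.
def cantilever_bends(bound_targets, stress_per_target=0.02, threshold=1.0):
    """True once accumulated surface stress exceeds the bending threshold."""
    return bound_targets * stress_per_target >= threshold

print(cantilever_bends(10))   # False: too few marker molecules bound
print(cantilever_bends(200))  # True: enough binding events to flag the marker
```

The threshold behaviour is why such sensors are suited to early diagnosis: detection needs only enough marker molecules to cross it, not a visible tumour.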
Gold nanoshells can also be used to destroy tumours in a similar way to carbon nanotubes: once large numbers of gold nanoshells accumulate at the site of the tumour and heat is applied, the heat absorbed by the spheres causes the death of the surrounding tissue. Overall, nanomedicine is a prospering research field that has made many medical breakthroughs. Its advantages and benefits strongly outweigh the pitfalls, but more research and clinical trials are needed before we have a better idea of whether this technology will be compatible with the human body.



Heart Transplantations: End of an Era?

Brett Bernstein

From the first ever transplant in 1905 (a transplant of the cornea, performed by Eduard Zirm), to the first heart transplant, overseen by Christiaan Barnard in 1967, to the first transplantation of a windpipe produced from the patient's own stem cells in 2008, the prominence of organ transplantation has grown dramatically over the past century. This article seeks to find out whether heart transplantation specifically has reached its peak and is starting, or indeed continuing, to wane in usage and popularity. Heart transplantation here refers to "heart allografts", defined as the transfer of a heart to a patient from a "genetically nonidentical member of the same species".

Rejection

The largest problem associated with allografts is rejection, defined as "an attempt by the immune system to reject or destroy what it recognizes to be a 'foreign' presence". Recipients and donors are matched on their blood type and the similarity of their human leukocyte antigen (HLA) system. This refers to many antigens on the surface of cells, as well as other proteins involved in immunity, and is coded for by a "superlocus" (a long length of DNA) on human chromosome 6. Although there is great diversity in HLA, it must be matched as closely as possible, because "any cell displaying some other HLA type is 'non-self' and is an invader, resulting in the rejection of the tissue bearing those cells".
Often, B-cell antibodies begin to attack the organ only minutes after transplantation, "and the new organ may turn black and blotchy even before surgeons have sewn up the wound". Although patients are normally allowed to leave hospital after one week, it is at roughly this time, when T-cells (a type of lymphocyte) use their receptors to bind to a foreign major histocompatibility complex (coded for by the HLA complex), that most of them become cytotoxic and attack the cells of the transplanted heart, causing them to lyse (burst) and undergo necrosis (unplanned cell death). This is known as acute rejection. Necrosis is particularly harmful because, in the process of cell death, damage to the lysosomes can cause digestive enzymes to be released, which digest surrounding cells, causing more necrosis and resulting in a chain reaction. "If a sufficient amount of [touching] tissue necrotises, it is termed gangrene". In addition, other T-cells produce cytokines, such as interferon gamma, which attract macrophages and neutrophils to the transplanted tissue by chemotaxis (following a chemical gradient). Once there, these phagocytes begin the process of

phagocytosis (engulfing foreign or infected cells) on the transplanted cells, causing even more cell death. In the myogenic tissue of the heart this is particularly serious, because blood pressure and heart rate can quickly drop due to the lack of myogenic tissue able to depolarise and contract, and a myocardial infarction may result. The problem of the immune response is normally addressed with immunosuppressive drugs. These include corticosteroids such as hydrocortisone, which weaken the immune response, and calcineurin inhibitors, such as cyclosporin, which inhibit the action of killer T-cells; both result in reduced damage to the transplanted tissue. The benefit of such immunosuppressants is manifest: the transplanted tissue is not rejected by the body. However, there are also risks involved in prescribing them. For example, the weakening of the immune response caused by corticosteroids also weakens beneficial immune responses, such as those fighting other infections. Moreover, cyclosporin is chemotherapeutic and nephrotoxic (poisonous to the kidney), and has been known to have very serious side-effects; others include gums growing over the teeth and growth of hair all over the body. Ethically, a doctor must evaluate the potential efficacy of a drug against its possible toxicity before prescribing it, and must also review whether the chance of contracting a more serious disease from the post-operative treatment outweighs the negative effects of the patient's current condition. Moreover, the cost of anti-rejection treatment is roughly £10,000 per annum, so, economically, it must also be assessed whether it would be more economical to invest a greater amount of money in preventing the causes of heart transplants than to pay for post-operative treatment.
Alternatively, the recent discovery of a protein involved in heart transplant rejection by a team from the University of California may provide the basis for less toxic anti-rejection treatment. Researchers have shown in mice that the use of an antibody blocking the NKG2D protein on the surface of killer T-cells lessens the immune response and makes the animals more receptive to a transplanted heart. Although early in its development, such a discovery could greatly benefit humans in the years to come.

Interventional Cardiology

The main reason for heart transplantation is severe heart failure. One of the main causes of heart failure is coronary heart disease, a narrowing of the coronary arteries caused by atherosclerotic build-up. Since 1977, when

the first percutaneous transluminal coronary angioplasty (PTCA) was performed by Andreas Gruentzig, the procedure has given hope to those with coronary heart disease, preventing many potential heart attacks and the need for transplantation. The procedure begins with the administration of a local anaesthetic to an area of skin in either the groin or the arm. Subsequently, "a catheter (a fine, flexible hollow tube) with a small inflatable balloon at its tip" is inserted through either the femoral or the radial artery as appropriate, and is guided towards the narrowed coronary artery with the aid of a coronary angiogram. The balloon is surrounded by a stainless steel mesh called a stent, and, when the balloon is inflated to a pressure 75-500 times normal blood pressure, the stent expands and squashes the atherosclerotic plaque, widening the lumen of the blood vessel. When the balloon is deflated and withdrawn, the stent maintains its shape. Originally, all stents were "bare metal", that is, uncoated stainless steel. However, a clinical trial in 2002 showed that a drug-eluting stent (in this case, eluting sirolimus) resulted in lower rates of major adverse cardiac events (MACE). Two types of drug-eluting stent are available in the United Kingdom: those coated with sirolimus and those coated with paclitaxel. Sirolimus is an immunosuppressant and, as such, blocks the activation of T and B cells, so the lumen of the coronary artery remains clear, with no build-up of unwanted lymphocytes. Paclitaxel works as a mitotic inhibitor, breaking down the microtubules used in cell division, and is used as an anti-proliferative agent to limit the growth of neointima (scar tissue) around the stent.

Socio-Economic Consequences

Presently, it is evaluated on a patient-by-patient basis whether it is economically and ethically better to use a drug-eluting stent, which costs more, or a bare-metal one, which needs a longer course of drug treatment afterwards.
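The stent trade-off just described can be put in rough numbers. The article quotes a drug-eluting stent priced about fivefold a bare one; the bare-stent price below is an assumed figure purely for illustration.

```python
# Rough up-front cost comparison for the two stent types.
bare_stent = 600               # pounds, assumed illustrative price
drug_eluting = 5 * bare_stent  # "about fivefold", per the article

print(drug_eluting)              # 3000 pounds
print(drug_eluting - bare_stent) # 2400 pounds extra up front per stent
```

A full comparison would offset that premium against the longer drug treatment and higher restenosis risk that follow a bare-metal stent, which is exactly the patient-by-patient calculation the article describes.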
Ethically, since drug-eluting stents are more likely to decrease the probability of restenosis, ideally they should always be used in preference to bare-metal ones. However, "the Cypher drug-eluting stent is being marketed in the UK at a price about fivefold that of a bare stent", so, economically, the benefit of the reduced risk of restenosis must be weighed against the increase in immediate cost, and against the cost of after-care resulting from the use of a bare-metal stent. Guidelines issued by the National Institute for Clinical Excellence (NICE) say that drug-eluting stents should be used in particularly narrow coronary arteries, as these are the most susceptible to restenosis. Anti-platelet drugs such as aspirin or clopidogrel, and anti-coagulants such as heparin, will also be prescribed for up to six months for drug-



eluting stents, and for longer for bare-metal ones, to prevent clotting of blood around the stent. Statistics show that angioplasty has a slightly better one-year survival rate than transplantation: in 2006 there were 2,192 heart transplants performed in the USA; 74% of these procedures were performed on men, for whom the one-year survival rate was 87%. In 2005 there were 1,271,000 coronary angioplasties in the USA, with a one-year survival rate of 83-95%, depending on age.
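Combining the quoted 2006 USA figures gives a quick sense of the absolute numbers behind the percentages.

```python
# 2,192 transplants, 74% performed on men, 87% one-year survival for men
# (figures as quoted in the text).
transplants = 2192
men = round(transplants * 0.74)
surviving_men = round(men * 0.87)

print(men)            # 1622 procedures on men
print(surviving_men)  # 1411 male recipients alive after one year
```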

Figure 1 Graph showing the number of heart transplants and the incidence of transplants per million population, 1997-2006.

From 1997 to 2006, in general, the number of heart transplants in the United States decreased. This indicates that preventative or alternative measures to transplantation increased in use and/or efficiency over this timespan. Socially, these measures benefit both the patient, who undergoes a lower-risk, less invasive procedure, and the hospital, as, economically, the cost of a PTCA and aftercare is less than that of a transplantation. With respect to the above statistics, although it would be unrealistic to suggest that all those who underwent angioplasties would otherwise eventually have had to undergo a transplant, it is likely that the use of PTCA to prevent irreparable damage to coronary arteries has reduced the number of heart transplants required over the past thirty years. Moreover, as discussed, with the increased efficiency of anti-rejection treatment and the potential use of antibodies based on the recent discovery in California, transplantation remains a viable option for those whose condition has deteriorated to a level where it is clinically necessary. Of course, beyond the scope of this article, potential advances such as the widespread availability of fully functioning artificial heart pumps, increased stem-cell research and the commonly used coronary bypass all indicate that the heart transplant is likely to decrease in usage over the next decade. As even more angioplasties are performed in the coming years, in combination with the other alternative treatments discussed, heart transplants should decline in use, allowing money allocated for post-operative treatment to be used for research, benefiting the majority of the population in the long term.



Haemophilia A: How successful has the genetic engineering of recombinant Factor VIII been?

Ronel Talker

Haemophilia is a hereditary blood condition in which a vital clotting agent is deficient, causing blood clots to develop far more slowly and greatly delaying the healing of wounds.

The Problem

Although haemophilia A has a very good prognosis and is quite rare (1 in 10,000 births, and 1 in 5,000 male births, being that of a haemophiliac), about 16,200 people in the USA suffer from Type A, and 1,681 patients died from haemophilia in the USA in 1999. It is therefore vital that a suitable treatment is found to enable sufferers to lead as normal a lifestyle as possible. However, in the past, the use of blood-extracted Factor VIII has resulted in other life-threatening conditions. In essence, Factor VIII levels in the bloodstream must be raised safely in order to enhance clotting. To do this, scientists are researching different ways in which genetically engineered Factor VIII can be used.

The Role of Factor VIII

Factor VIII, otherwise known as antihaemophilic globulin, is a glycoprotein that is predominantly manufactured in the liver and, after synthesis, is transferred to the lumen of the endoplasmic reticulum. The Golgi apparatus then influences its compatibility with von Willebrand factor (vWF), Factor VIII's cofactor. Their association gives Factor VIII stability and protection against degradation as it circulates in the blood plasma. In the body there are 13 different clotting

factors, which all have slightly different functions. As soon as there is a disturbance in the blood vessel wall, a complex coagulation cascade takes place involving almost all 13 factors. There are two pathways along which clotting takes place. The extrinsic pathway begins in the broken tissue outside the blood vessels, with FIII (tissue extract) activating the release of FVII in the bloodstream. The intrinsic pathway is more complicated and begins when FXII (Hageman factor) from the blood comes into contact with collagen in the damaged vessel wall. A chain of activations involving FXI, FIX and FVIII then follows, culminating in the meeting of the intrinsic and extrinsic pathways. Together, these activate FX, which catalyses the conversion of FII (prothrombin) to thrombin. This in turn acts as a catalyst in converting FI (fibrinogen) to fibrin. Finally, with the addition of Factor XIII, this forms a blood clot. In these processes, each factor can be seen as a domino: without one domino in the chain, the entire pattern falters. Similarly, without Factor VIII, a Type A haemophiliac's blood cannot clot at all.

A Possible Solution – Genetically Engineered Factor VIII

In the 1970s, haemophiliacs were treated using Factor VIII derived straight from donated blood. This blood was not tested or screened before transfusion, leading to an HIV/AIDS and hepatitis C epidemic amongst patients treated in this way. As a result, about 60% of haemophiliacs contracted HIV (although most did not develop AIDS), and subsequently half the population of haemophiliacs died. Since 1983 and 1991 respectively, people infected with HIV and hepatitis C have been excluded

from the donor list. However, in 1993 scientists found a way of genetically engineering human Factor VIII, taking advantage of plasmids, the rings of DNA found outside the nucleus in bacteria, which can be transferred from one cell to another; this avoids many of the complications of blood-borne diseases. Unfortunately, haematologists found a problem with the first generation of recombinant Factor VIII. The original purpose of this solution was to prevent the risk of spreading hepatitis C and the human immunodeficiency virus (HIV). The first genetically engineered samples of Factor VIII, manufactured in 1992, were very fragile, so the human protein albumin was used as a stabiliser. However, it was discovered that contaminated human albumin caused patients to be infected with the transfusion-transmitted virus (TTV). The next step in haemophilia research was therefore to find a stabiliser that used fewer blood-based additives. In June 2003 further advances were made, and it was found that using the sugar trehalose meant no blood-based additives at all were needed in the treatment of haemophilia A. Such a breakthrough meant that there was no risk of infection from contaminated donor blood products.

Economic implications

The first genetically engineered Factor VIII concentrates were available from 1993 and, as this was a major step towards safe and effective therapy for haemophilia A, the drug was very expensive at the time. The primary reason for this expense, according to Dr. Mary Mathias, a haemophilia consultant at Great Ormond

Varieties of Haemophilia

Haemophilia A – The most common type of haemophilia, presenting in 90% of haemophiliacs. A defective X chromosome causes coagulation Factor VIII (F8) deficiency.

Haemophilia B – Another X-linked genetic disorder, involving the lack of Factor IX. This is rarer, but not as severe as Type A.

Haemophilia C – An autosomal recessive disease, which causes the blood to be deficient in Factor XI.
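The "domino" analogy this article uses for the coagulation cascade can be sketched as a dependency chain. This is a deliberate simplification for illustration (a linearised intrinsic-pathway ordering taken from the text), not a complete model of coagulation.

```python
# A simplified clotting cascade: each factor depends on the one before it.
CHAIN = ["FXII", "FXI", "FIX", "FVIII", "FX", "FII", "FI"]

def cascade(available):
    """Walk the chain; clotting succeeds only if every factor is present."""
    for factor in CHAIN:
        if factor not in available:
            return f"cascade halts: {factor} missing"
    return "clot forms"

print(cascade(set(CHAIN)))              # healthy blood: "clot forms"
print(cascade(set(CHAIN) - {"FVIII"}))  # haemophilia A: halts at FVIII
```

Removing any single factor stops the chain, which is exactly the domino picture: a Type A haemophiliac's missing FVIII prevents the clot entirely.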



Manufacturing recombinant Factor VIII 1. A DNA strand containing instructions for producing Factor VIII is cut from human DNA using restriction endonucleases 2.

A plasmid from a bacterium is isolated and the same enzyme is used to cut the ring open

3. The DNA containing the new instructions are now spliced into the plasmid and DNA ligase used to join the ends, leaving a recombinant plasmid that can code for Factor VIII 4. This plasmid is now kept in an environment filled with cells taken from either an ovary or a kidney of a Chinese hamster until one of these cells ‘accepts’ the plasmid 5. The cell now reproduces several times until there are hundreds of these cells containing the recombinant plasmids, each of which produce Factor VIII along with their other products in a vat 6. The bi-products are all removed during heating and filtration. Antibodies can also be used to bind exclusively to the Factor VIII 7. The Factor VIII is now filtered once more to remove any remaining impurities before being stabilized with human albumin, bottled, freeze-dried and labelled Street Hospital Haemophilia Centre, is due to the processes involved in the isolation of plasmids and their insertion into the hamster’s ovary cell. However, as a patient being treated with the most recent versions of recombinant Factor VIII can achieve an almost normal life expectancy, doctors such as Dr. Mathias believe that the treatment is worth the expense. The overall cost of recombinant Factor VIII concentrates decreased considerably as cheaper techniques are being used to make these products. However, plasma-derived Factor VIII clotting agent, an alternative treatment to the genetically engineered coagulation protein, has increased in cost. In fact, it started much cheaper than recombinant Factor VIII but is now almost as expensive. Dr. Mathias concluded, “The main reasons behind the rise in plasma-derived

Factor VIII prices are partly due to the decrease in blood donors, but mainly due to the extensive processes carried out to ensure that the products are virally clean — especially necessary after the HIV and Hepatitis C outbreaks." Plasma-derived Factor VIII is described in more detail later as an alternative method of treatment.

Risks of Genetically Engineered Factor VIII

The only risk to patients is that of developing Factor VIII inhibitors. These are antibodies produced by the body's immune system which almost entirely degrade the activity of the infused coagulation factor. There is a risk of developing inhibitors with any form of haemophilia treatment, because any foreign protein introduced into the body risks being recognised as foreign and rejected. Nevertheless, recent research has suggested that the risk of developing inhibitors may be slightly higher with genetically engineered Factor VIII than with plasma-derived clotting factor.

Alternative Solutions

Desmopressin (deamino-D-arginine vasopressin)

This drug is a synthetic derivative of vasopressin, a hormone produced by the pituitary gland that helps the body retain water by varying the permeability of the collecting duct in the nephrons of the kidney; vasopressin is also called anti-diuretic hormone (ADH). Desmopressin also happens to raise the level of Factor VIII in the bloodstream, but it requires the patient to already be producing some Factor VIII. This alternative can therefore only be used in mild haemophiliacs and in some von Willebrand sufferers (although it can have an adverse effect on the platelets of others). The only major side effect of this drug is water retention, so patients' fluid intake must be restricted. It can be given by intravenous or subcutaneous (under the skin) injection, or by intranasal administration.

Plasma-derived Factor VIII

These Factor VIII concentrates are produced by fractionation of pooled human plasma, which is then subjected to several methods of viral inactivation, such as heating, pasteurisation, solvent/detergent treatment and monoclonal antibody purification. These purification steps are vital to avoid a repeat of the HIV/Hepatitis C epidemic. The end product is freeze-dried (lyophilised) so that the powder is stable and refrigeration isn't necessary. Before injection, the powder is simply mixed with sterile water, making home treatment easy.

This method is advantageous as it achieves a high concentration of Factor VIII in a small volume, but it suffers from the need for many donors to compensate for the loss of activity of the clotting agent during processing. However, it is still important to keep plasma products, even though recombinant Factor VIII can now be made safer and very effective: for some clotting diseases, such as von Willebrand disease, there is as yet no recombinant coagulation agent.

Possible future issues and solutions

There are worries that haemophilia treatment may become extremely expensive in the future, because medicine has enabled haemophiliacs to lead an almost normal lifestyle and therefore to have children. As more haemophiliacs are able to have offspring, more genes coding for Factor VIII deficiency will be passed down the generations, and so more people will end up as either carriers or sufferers of haemophilia A. At present, there is pre-conception counselling and prenatal testing of the foetus' blood. Stem cell scientists, such as Dr. Douglas Melton and Dr. Shinya Yamanaka of Harvard and Kyoto University respectively, are making discoveries that may soon see induced pluripotent stem cells (iPS cells), generated using chemicals, in which the genes causing the disease are replaced. These iPS-generated cells could then be transplanted into patients. Unlike reprogramming that erases a mature cell's entire genetic memory, Dr. Melton's method takes a mature cell back only part of the way and simply gives it a targeted therapeutic change. Research into this method continues but, having considered the benefits and risks, genetically engineered Factor VIII remains the safest and most efficient method of treatment around.




Swine Flu Epidemiology & Pathology
Vishal Amin

Earlier this year, a new strain of influenza virus, known as swine flu, emerged in parts of Mexico. The strain of swine influenza A virus (SIV) is known as H1N1, and it was soon declared a pandemic. So what has led to this state of panic over a virus whose symptoms are similar to those of seasonal flu?

Haemagglutinin and Neuraminidase

The reason there are so many different types of influenza virus comes down to these two proteins. Haemagglutinin takes its name from its ability to make red blood cells clump together. On the influenza virus, haemagglutinin allows the virus to bind to the cell it is going to infect. Currently there are 16 known forms of influenza haemagglutinin, named H1, H2 and so on. The forms which allow influenza to infect humans are H1, H2 and H3, although in some other instances, as with H5N1 bird flu, H5 can sometimes infect our bodies. Neuraminidase is another viral protein, which allows the virus to exit the cell once the cell has been infected. There are 9 known forms of this protein, and it can sometimes combine with haemagglutinin to form a single protein complex. This means that various strains of flu can arise through different proteins being present on the virus. All of the various forms of the two proteins have been found in avian influenza A, and it is thought that all cases of influenza A in other animals originate from avian flu.

Antigenic Shift

Antigenic shift is where two different subtypes of the same virus, for example H2N2 and H3N2, combine to form a new virus with the antigens of both previous subtypes, possibly creating a more lethal virus with the capabilities of both its predecessors.
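As a quick aside on the protein counts above (simple arithmetic, not a figure from the article): with 16 known haemagglutinin forms and 9 known neuraminidase forms, the number of possible HxNy subtype labels is

```latex
% Possible influenza A subtype labels, from the protein counts above:
% 16 haemagglutinin forms (H1-H16) x 9 neuraminidase forms (N1-N9)
\[
  16 \times 9 = 144 \ \text{possible H$x$N$y$ combinations}
\]
```

of which only a handful, such as H1N1 and H3N2, have ever circulated widely in humans.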
This can happen in a number of ways: either two different flu viruses, such as bird flu and human flu, infect the same cell, transferring genes between the different influenza strains, or the virus transfers directly, for example from birds to humans, without its genetic make-up being altered. In the latter case, if the virus is not modified further, it cannot spread from one human to another, and this form of infection is rare. The problem with pigs is that they are susceptible to human, bird and swine flu viruses. This makes it easy for two variations of influenza to infect the same cell in a pig, producing new influenza viruses with genotypes different from their parents'. The main problem with the current swine flu pandemic is that this virus is able to transfer from human to human, being a member of

the H1 subtype, as well as having characteristics and genes of swine flu, meaning we have little or no immunity to it.

The Origins of the Current Flu

As is widely reported, the current strain of flu originated in Mexico. Extensive tests have been carried out on the virus to determine its origin and how it has managed to spread so easily. Analysis suggests that the virus was derived from the combination of several viruses that were circulating in swine. Since 2007 there have been cases of H1N1 swine influenza in humans, owing to the large number of varying swine influenza viruses present in the USA. It is currently believed that, through antigenic shift, the current swine flu virus arose as a combination of two swine strains, one avian strain and one human strain of influenza. This was established using genetic analyses, which attempt to map the evolutionary path of an organism.

How do viruses attack cells?

As mentioned before, only the H1, H2 and H3 viruses infect cells in the human body. But how do viruses like H1N1 infect and destroy our cells? The haemagglutinin on the influenza virus allows it to bind to the membrane of a particular cell, attaching the virus to receptors on the cell surface membrane — in this instance, sialic acid receptors. After the virus has bound to the cell, it is moved into the cell by a process called endocytosis, in which a small vacuole is created around the virus so that it can enter the cell. Once inside the cell, the virus's protein coat opens, allowing several components to escape — most importantly enzymes and vRNA (viral RNA) — and move towards the nucleus. Once inside the nucleus, the viral enzyme polymerase copies the virus's genetic material and forces the host cell to use its energy and amino acids to produce viral proteins.
The process is completed by taking a small group of molecules from the host cell's RNA and adding it to the viral RNA. In effect, the cell now recognises the viral RNA as its own and creates multiple copies of it, as well as other viral proteins. This step is carried out by polymerase, but its exact mechanism is still unknown. The viral RNA is then released from the nucleus and moves towards the cell membrane, where the new virus particles 'wrap' themselves in part of the cell membrane to create a new coat. But by creating a coat from the host cell's membrane, the virus prevents its own escape: the membrane contains the sialic acid receptors that haemagglutinin binds to, so the virus remains bound to it. Neuraminidase therefore breaks down these receptors, allowing the virus to escape from the cell. The cell then dies, either from being

exhausted, as it can no longer perform its own functions, or because the virus triggers a 'suicide switch'. One cell is able to produce millions of influenza viruses, which can then go on to infect millions of other cells unless stopped by our immune system.

Method of Infection & Symptoms

As with all types of influenza, it passes from one organism to another very easily, simply by breathing the same air as someone who has been infected. The virus is usually present in the respiratory tract and is expelled from the lungs through coughing and sneezing. It remains in the air, in a suspension of saliva, to be inhaled by another organism; if this organism is of the same species, infection is possible, as the virus can then infect cells in the respiratory tract, causing typical flu symptoms. This is the only way to catch the flu — it cannot be caught by eating pork. The symptoms of swine flu are the same as those of seasonal flu and currently seem no more threatening. They include: high fever (over 38°C), coughing, tiredness, chills, aching joints and/or muscles, sore throat, sneezing, headache, diarrhoea and stomach upset. As can be seen, these symptoms are not life-threatening, and swine flu has only caused serious problems in people with underlying health conditions, such as leukaemia, or in pregnancy. Swine flu mainly attaches itself to receptors in the nose or mouth and therefore does not usually pose a major health concern. On the other hand, if it were to mutate into a virus similar in structure to the H5N1 bird flu virus, it could attach itself to our lungs, causing more serious conditions like pneumonia. But scientists still maintain that this is unlikely, as H5N1 has been around for more than a decade and has not combined with seasonal influenza.

Treatment and Prevention

Currently the best treatment for swine flu is simple bed rest, as for seasonal flu, together with regular doses of Tamiflu.
The Department of Health recommended that all those with symptoms of swine flu stay at home for 5-7 days to prevent the spread of the virus. Tamiflu is an antiviral drug that is able to slow down or stop influenza infection in the body. Its chemical name is oseltamivir, and it is known as a neuraminidase inhibitor: it blocks the activity of neuraminidase, so viruses cannot leave the cell because they remain bound to the sialic acid receptors. It does this by acting as a competitive inhibitor of sialic acid, so that neuraminidase works on the drug rather than freeing the virus to escape the host cell.




Ozone Therapy in Dentistry
Sahil Patel

Ozone is an allotrope of oxygen composed of three oxygen atoms. Compared to the more abundant diatomic oxygen (O2), ozone is far more reactive because its structure is very unstable: the three atoms form a v-shape (similar to water), and the central atom is sp²-hybridised with one lone pair. This state is usually short-lived, and the molecule soon breaks up into diatomic oxygen and a single oxygen atom.

Figure 1: The chemical structure of ozone

Ozone is most famous for the ever-depleting ozone layer, where high concentrations of ozone filter wavelengths shorter than 320 nm out of sunlight. This includes ultraviolet light, which would be much more harmful to humans without a protective ozone layer. On the other hand, ground-level ozone released from car exhausts and industrial processes can cause respiratory problems in humans and can affect agricultural production. Although ozone is hazardous when it is not in the ozone layer, the reason it is beginning to be used in a handful of private dental practices is that it is perhaps the most powerful oxidant and antimicrobial agent available, whether used by itself as a gas or dissolved under pressure in water. In fact, it reportedly has a higher disinfection capability than chlorine and doesn't produce harmful decomposition products. This is why ozonators, for example, are being used to reduce the amount of chlorine in swimming pools and spas. Ozone is also used by the bottled water industry to disinfect water before we drink it, and it has now been approved by the FDA (US Food and Drug Administration) for treating food.
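To give a feel for the 320 nm cutoff, a rough calculation (my own, not from the article) of the energy carried by a photon at that wavelength:

```latex
% Photon energy at the 320 nm filtering cutoff: E = hc / lambda
\[
E = \frac{hc}{\lambda}
  = \frac{(6.63\times10^{-34}\,\mathrm{J\,s})\,(3.00\times10^{8}\,\mathrm{m\,s^{-1}})}
         {320\times10^{-9}\,\mathrm{m}}
  \approx 6.2\times10^{-19}\,\mathrm{J} \approx 3.9\,\mathrm{eV}
\]
```

Photons at or below this wavelength carry energies comparable to those of covalent bonds, which is why the UV that ozone absorbs can damage biological molecules.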

Back to dentistry

Ozone as a gas is toxic, so translating ozone into a product that dentists can use safely is important. Currently there are three forms of ozone already in use:

1. Ozone gas application involves sealing the affected area with a PVC or silicone cap and applying ozone gas for one to two minutes. The disposable silicone cap fits onto the handpiece and, after a vacuum is achieved over the tooth, ozone can flow over the site. If the seal is ever lost, ozone application automatically stops. After the ozone gas is applied to the tooth through the suction-cap apparatus, it is sucked out of the cap and channelled through an ozone-neutralising filter that converts the ozone back into oxygen molecules. A liquid pH balancer is then applied, which neutralises the residual bacterial acid by delivering xylitol and fluoride in high concentrations. Ozone gas units made for dentists are typically priced at about £12,000.

2. Ozonated water application has been proposed as a cavity disinfectant, as a hard-surface disinfectant and as a soak for instruments prior to autoclaving (the technique used by most dental practices to sterilise their equipment with high-pressure steam). The effectiveness of these uses is unproven, and ozonated water units cost about £4,500.

3. Ozonated olive oil application has been introduced for soft-tissue lesions. These oils are advertised as having an advantage over commonly used antiseptics and ointments because of their wide range of activity during all phases of the healing process. Patients are supplied with enough ozonated oil in a disposable syringe for home use.

The viability of ozone therapy is unclear and controversial, as the only people supporting it are private practices, and ozone therapy is yet to be approved by the British Dental Association or the NHS for medical applications. A spokesperson for James Hull Associates (a large chain of dental practices offering ozone therapy) claimed that ozone gas application works against tooth decay 90% of the time. Compared with the traditional technique of injections, drilling and filling, ozone gas treatment takes 40 seconds. The most prolific researcher in this field is Dr. Edward Lynch in Belfast, Northern Ireland. Dr. Lynch has been investigating how best to use ozone in dentistry for over 10 years and helped bring the first commercially viable ozone device to market in most areas of the world — except, notably, the U.S., where it still awaits approval by the FDA. An advert from HealOzone presents new ways of performing root canal treatments using a needle to ozonate the infected nerve; this treatment can be completed in one session, as opposed to several sessions for a normal root canal treatment. However, most organisations are not convinced by ozone: the FDA in the U.S. argues that ozone can only be effective as a germicide when applied in concentrations higher than humans can tolerate, so that ozone is nothing more than a toxic gas. The British Dental Association also doubts that ozone therapy will prove useful enough for worldwide use, as most dental work is now preventative and cosmetic. Research is still being carried out to ensure that ozone does not cause any adverse effects in the mouth.

Figure 2: Diagram showing the generic stages of a root canal treatment.

Operative dentistry

There are many dental procedures where ozone has been used. During root canal therapy, where the nerve tissue of a tooth has died or is infected, ozone oils (ozonated sunflower, olive or groundnut oil) can be used to sterilise the root canal systems and to clear the canals of necrotic debris. This ozone oil irrigation is believed by some to be faster and more efficient at canal sterilisation than conventional irrigation with the sodium hypochlorite and sodium peroxide combination. Ozone can also be applied to carious lesions, safely killing the bacteria that caused the caries, making the treatment minimally invasive and only a few seconds long. In cases of incipient caries, ozone can kill bacteria in the demineralised part of the tooth, and the demineralised enamel can then be remineralised using a special remineralisation kit containing calcium, fluorine, phosphorus and sodium, all in their ionic forms. Periodontal (gum) diseases represent a major




concern both in dentistry and medicine. The majority of the contributing factors and causes of these diseases can be reduced or treated with ozone in all its application forms (gas, water, oil). The beneficial biological effects of ozone — its antimicrobial activity against the organisms implicated in periodontal diseases, and its healing and tissue-regeneration properties — make it well suited to treating periodontal disease. Ozonated oil can be used in conjunction with a metal gauze to compress tissue after an extraction. Although the difference between the two treatments in soft-tissue healing is not obvious to non-dentists, ozonated oil has proved to help tissue regeneration compared with traditional antiseptics. Finally, ozone can help in the intermediate stages of dental bridges and crowns. A dental bridge is illustrated on the right; before the permanent restoration is seated, temporary crowns are placed on the two neighbouring teeth. A common occurrence during this temporisation phase in crown procedures is hypersensitivity. Many factors might contribute to this symptom, one of which is bacteria left in the gap between the tooth and the crown. Ozone gas applied before the temporary crowns and permanent bridge disinfects the area of bacteria without affecting the adhesive bonding of the ceramic crown to the tooth.

Physiology

Although it is claimed that ozone therapy, used properly, is well tolerated by patients, there are still side effects post-treatment. In the short term patients can feel weakness or dizziness; the reason for extreme reactions to ozone therapy is considered to be an imbalance between the rate of formation of active oxygen and the activity of the patient's antioxidant defence system. Therapeutic doses of medical ozone act like "stress-stimulators" on the defence-system enzymes. Earlier I mentioned that ozone gas is toxic to humans because it affects our respiratory system.
Ozone molecules always tend to break down into diatomic oxygen, producing a free radical, according to the equation:

O3 → O2 + O•

Free radicals are species with unpaired electrons, which makes them highly reactive. When ozone is inhaled, it breaks down into oxygen free radicals, which react with the organic lining of the alveoli and subsequently with epithelial cells, immune cells and nerve receptors in the airway wall. This causes asthma-like symptoms such as breathlessness and inflammation of the airways. After dissolving in the lining of the alveoli, ozone is also harmful in the bloodstream, where it causes cholesterol to undergo ozonolysis: the double bond in cholesterol is broken and replaced with an oxygen double bond, forming 5,6-secosterol, which is involved in the build-up of plaques.

Plaques can cause atherosclerosis by sticking to the walls of the arteries and causing them to lose elasticity over time. This loss of elasticity can result in inflammation, which is more likely in the presence of ozone, a powerful oxidising agent.

Ozone production

Since UV rays from the sun sustain the ozone layer, the most obvious place to begin creating man-made ozone is UV light. UV ozone generators use a light source that produces a beam of UV light to treat air or water in swimming pools. The cold plasma method involves exposing oxygen to an electrical discharge between two electrodes: the diatomic oxygen splits into single atoms, which then recombine as triatomic oxygen (ozone). The concentration of ozone produced is typically 5%, which seems low but is a much higher yield than the UV method achieves. Ozone may also be formed from O2 by electrical discharges and by the action of high-energy electromagnetic radiation. Certain electrical equipment generates significant levels of ozone — particularly devices using high voltages, such as laser printers and arc welders. Electric motors using brushes can generate ozone from repeated sparking inside the unit; large brushed motors, such as those used in elevators or hydraulic pumps, generate more ozone than smaller ones. Ozone is similarly formed in the Catatumbo lightning phenomenon on the Catatumbo River in Venezuela, which helps to replenish ozone in the upper troposphere and is the world's largest single natural generator of ozone.

Teeth Whitening and bleaching

Owing to ozone's strong oxidising power, researchers have started looking at its ability to whiten teeth. Ongoing in vitro work is studying the effects of long-term exposure of ozone on the dental hard tissues and the pulp, as well as the application forms of ozone (gas and ozonated water). The results so far are publicised as promising.
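Returning briefly to the cold plasma method mentioned under "Ozone production" above, it can be summarised by a standard two-step scheme (textbook chemistry, not spelled out in the article):

```latex
% Cold plasma ozone generation: the discharge splits O2,
% then a three-body collision reforms the atoms into O3
\[
\mathrm{O_2} \xrightarrow{\text{electrical discharge}} 2\,\mathrm{O}
\]
\[
\mathrm{O} + \mathrm{O_2} + M \longrightarrow \mathrm{O_3} + M
\]
```

Here M is any third molecule (for example another O2 or an N2) that carries away the excess energy of the newly formed bond.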
In root canal treated teeth, crown discolouration is a major aesthetic problem, especially in anterior teeth. Conventional walking bleaching (placing a bleaching agent inside the root canal to lighten the tooth) requires much more time, and its results are not usually satisfactory. After the root canal filler material is removed from the pulp chamber, the canal is sealed tight at the level of the cemento-enamel junction (the line on the surface of a tooth where the enamel on the crown meets the cementum on the root). The chamber is then cleansed with sodium peroxide solution to remove any debris and cement particles. Next, a bleaching paste is packed into the chamber and the opening is sealed with glass-ionomer cement. After the bleaching agent has been placed inside the tooth, the crown is irradiated with ozone. This ozone treatment bleaches the tooth within minutes and gives the patient a healthier-looking smile. In conclusion, the consensus about ozone is

ambivalent: it can cause serious health problems with prolonged exposure, yet researchers are determined to see it implemented in mainstream dentistry. I feel that there is too much optimism given the limitations of ozone therapy. Everyone can see that ozone is strongly oxidising, which would make it useful as a germicide, but whether it harms the mouth is still uncertain, which is why most dental practices and the NHS refrain from investing in ozone therapy. Some private practices champion the use of ozone for all procedures as if it were vital to a successful treatment, but it is all still under research. However, that research could yet prove the sceptics wrong and show that ozone can be used as effectively as its promoters claim.



The Haberdashers’ Aske’s Boys’ School Nurturing Excellence

Stay with the team! Please make sure we have your contact details whenever you move (your postal address, email, telephone and mobile).
• Call us on 020 8266 1820
• Register with Habsonline (top right menu bar) at www.habsboys.org.uk
• Post a letter to: Alumni Office, External Relations, Haberdashers' Aske's Boys' School, Butterfly Lane, Elstree, Herts, WD6 3AF



Scope 2009/10 Bibliography

Bibliography

PHYSICAL SCIENCES

The Riemann Hypothesis
The Millennium Problems (2002) - Keith Devlin
Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics (2003) - John Derbyshire

The Role of Computational Automation in Science
New Scientist, Issue No. 2705 - 23 April 2009
http://ccsl.mae.cornell.edu/
The Automation of Science - Science 324 (5923)
Distilling Free-Form Natural Laws from Experimental Data - Science 324 (5923)

Einstein's Annus Mirabilis
Einstein: His Life & Universe (2008) - Walter Isaacson

Entropy & The Theory of Evolution
The Blind Watchmaker - Richard Dawkins
Entropy, Information, and Evolution: New Perspectives on Physical and Biological Evolution (1988) - James Smith, David Depew & Bruce Weber

Atmosphere: Earth's Great Defence
An Ocean of Air: A Natural History of the Atmosphere (2007) - Gabrielle Walker

The Path Towards Finding The Magnetic Monopole
The New Quantum Universe (2003) - Tony Hey & Patrick Walters

Chemiluminescence
Chemiluminescence from luminol solution after illumination of a 355 nm pulse laser - Journal of Bioluminescence & Chemiluminescence (1126)
http://www.isbc.unibo.it/

The Rowboat's Keeling
U.S. Department of Commerce - National Oceanic and Atmospheric Administration - Earth System Research Laboratory - Global Monitoring Division
Geoengineering vs. Gesture-engineering - Wired Science (24 July 2008)
Visible Earth: Global Effects of Mount Pinatubo - NASA Langley Research Center - Aerosol Research Branch
The Catastrophist - The New Yorker, June 2009 (p.39) - Elizabeth Kolbert

BIOLOGICAL SCIENCES

Abiogenesis
Synthesizing life - Nature 409 (2001) - Szostak et al.

Evolution of the Nervous System
The origin and evolution of the neural crest - Philip C. J. Donoghue, Anthony Graham & Robert N. Kelsh - BioEssays (30/6) - 2008

Alzheimer's: Hope At Last?
The cholinergic hypothesis of Alzheimer's disease: a review of progress - Francis et al. - Journal of Neurology, Neurosurgery, & Psychiatry 66 (1999)

The Role of Pharmacogenetics in Modern Medicine
Genetic variation in response to 6-mercaptopurine for childhood acute lymphoblastic leukaemia - Weinshilboum et al. - The Lancet 336 - July 1990
Nuffield Council on Bioethics - Pharmacogenetics: Ethical Issues (2003)

Cancer Therapy: Treatment & Treatment Realities
www.nhs.uk/conditions/Herceptin
The Independent - Dosage loophole restricts Herceptin for NHS patient - Jeremy Laurance (2007)

Heart Transplantations: End of an Era?
Once again on Cardiac Transplantation: Flaws In The Logic Of The Proponents - Dr. Yoshio Watanabe - JPN Heart Journal (1997)
Why heart pumps could kill off the transplant - Sunday Times Magazine (November 2nd 2008)
Mortality Statistics on Bypass Surgery and Angioplasty - www.heartprotect.com/mortalitystats.shtml (January 2009)

Haemophilia A: How successful has the genetic engineering of Factor VIII been?
Genetic Engineering of Factor VIII - Nature 342

Swine Flu Epidemiology & Pathology
www.direct.gov.uk/en/Swineflu/DG_177831
Virolution - Frank Ryan (2009)

Ozone Therapy in Dentistry
The Use of Ozone in Medicine & Dentistry - Baysan et al. - Primary Dental Care (12/2) - (2005)

REVIEW

The Implications of Cocaine Dependence
Drug Dependence, a Chronic Medical Illness - McLellan et al. - JAMA 284/13 - (2000)


Scope 2009/10 Scope Team

Casey Swerner - Chief Editor

Mr. Roger Delpech - Master i/c Scope

Raj S Dattani - Senior Editor

Bhavesh Gopal - Senior Editor

Neeloy Banerjee - Senior Editor

Johan Bastianpillai - Senior Editor

Wajid Malik - Editor

Sahil Patel - Editor

Karthigan Mayooranathan - Editor

Wei-Ying Chen - Editor

Ameya Tripathi - Editor

Nicholas Parker - Editor

Vishal Amin - Editor

Adrian Ko - Technical Advisor

BAINES design & print 01707 876555 Printed on environmentally friendly paper A36851

The Scope 2009/10 Team


SCOPE

The Scientific Journal of the Haberdashers’ Aske’s Boys’ School

The Haberdashers’ Aske’s Boys’ School Butterfly Lane, Elstree, Borehamwood, Hertfordshire WD6 3AF Tel: 020 8266 1700 Fax: 020 8266 1800 e-mail: office@habsboys.org.uk website: www.habsboys.org.uk
