science teacher 2009
Featuring: Sound Babbling brooks Phantom sounds Tuning into volcanoes Listening to space Underwater sound and sonar Acoustics used to count fish Sound and marine geology Plus: Draw a scientist Is your Periodic Table up to date? Make a flame tube And more...
science teacher 121
Mailing Address: NZASE PO Box 1254 Nelson 7040 Tel: 03 546 6022 Fax: 03 546 6020 email:firstname.lastname@example.org
Editorial 2 From the president’s desk 3
Editorial Address: email@example.com
Editorial Board: Barbara Benson, Suzanne Boniface, Beverley Cooper, Mavis Haigh, Rosemary Hipkins, Chris Joyce.
Journal Staff: Editor: Lyn Nikoloff; Sub-editor: Teresa Connor; Cover Design and Typesetting: Pip’s Pre-Press Services, Palmerston North; Printing: K&M Print, Palmerston North; Distribution: NZ Association of Science Educators
NZASE Subscriptions (2009)
Secondary school: roll > 500, $160.00; roll < 500, $105.00
Intermediate, middle and composite schools: roll > 600, $160.00; roll 150-599, $70.00; roll < 150, $50.00
Primary/contributing schools: roll > 150, $70.00; roll < 150, $50.00
Tertiary Education Organisations: $160.00
Libraries: $110.00
Individuals: $50.00
Student teachers: $25.00
Subscription includes membership and one copy of NZST per issue (i.e. three copies a year); extra copies may be purchased for $9.00 per issue or $25 per year (3 issues). All prices are inclusive of GST. Please address all subscription enquiries to the NZASE, PO Box 1254, Nelson 7040. Subscriptions: firstname.lastname@example.org Advertising: advertising rates are available on request from email@example.com Deadlines for articles and advertising: Issue 122 (Light): 20 August (publication date 1 October); Issue 123 (Air): 20 December (publication date 1 March, 2010); Issue 124 (Iron): 20 April 2010 (publication date 1 June, 2010). NZST welcomes contributions for each journal, but the Editor reserves the right to decide which of the articles received are published. Please contact the Editor before submitting unsolicited articles: nzst@nzase.org.nz. Disclaimer: The New Zealand Science Teacher is the journal of the NZASE and aims to promote the teaching of science and foster communication between teachers, scientists, consultants and other science educators. Opinions expressed in this publication are those of the various authors, and do not necessarily represent those of the Editor, Editorial Board or the NZASE. Websites referred to in this publication are not necessarily endorsed.
contents Feature: Sound Sound and its uses 4 Underwater sound waves and sonar 7 Underwater sound 10 Kiwi birds and hearing 13 Cochlear implants 14 Searching for phantom sounds 15 Babbling brooks 18 Tuning into volcanic vibrations 22 Sounds in space 25 Counting fish using acoustics 27 Sound in marine geological studies 30 Regular features Science education: Draw a scientist 33 History Philosophy of Science: Matauranga Maori, science and truth 37 Just for starters ... Foods for health and wellness 41 Resources: National Library 42 Ask-a-scientist 9, 40, 45, 47 Science News 12, 21 Subject Associations: Biology 43 Chemistry 44 Physics 45 Primary Science 46 Science/PEB 47 Technicians 48
Babbling Brooks – the photograph is of a section of the East Dart River, in the south-west of England, where the water seems to ‘babble’, sometimes so loudly that conversation is impossible. What causes this sound? Read pages 18 to 21. Photograph courtesy of Alan Walton.
It’s all about sound It has never ceased to amaze me that OSH hasn’t shown more concern about the level of noise that emanates from a school laboratory. With wooden floors, few absorptive surfaces, and thirty students coming and going, there is certainly the potential for harm to be done to a teacher’s hearing. As a sufferer of tinnitus myself, probably like many of you, it was with some surprise that I read in Grant Searchfield’s article (page 15) that tinnitus is in fact a phantom sound! Fortunately my hearing is not so damaged that I need a cochlear implant (page 14). But nor is my hearing as acute as a kiwi’s (page 13). So how does sound travel? This issue has articles about how sound travels in air, space, water and land. I found George Dodd’s article a good, useful overview of the topic (page 4). Can sound travel in space? Yes and no. Karen Pollard points out that astronomical objects pulse and vibrate, and that is what astronomers ‘listen’ to using asteroseismology (page 25). Meanwhile, as Gill Jolly explains, on Mt Ruapehu seismologists are listening for the seismic and acoustic signals that might indicate an impending eruption (page 22). Have you ever wondered what causes the sound of a babbling brook? In an illuminating article Alan Walton explains that it is caused by air trapped in the water (page 18). There are a number of articles related to sonar and its applications. Peter Gough’s article provides a wonderful introduction to this topic (page 7). Chris Tindle explains how sound travels in water and the use of sonar (page 10). And NIWA explains how they are applying sonar technology to explore marine geology (page 30). NIWA is also using fisheries acoustics to gain an understanding of fish stocks in our Exclusive Economic Zone, such as on the Chatham Rise (page 27).
Having worked with so many fantastic scientists on this and previous issues of the NZST, it was with some interest that I read Miles Barker and Sirinapa Kijkuakul’s education research article (page 33). When asked to draw a scientist, teachers and university science staff in Thailand had quite different ideas about what a scientist looked like and what they did. Their study is quite revealing, and for me it highlights our prejudices and misconceptions about who scientists are and the nature of their work. This might be a worthy exercise to conduct in your next department meeting. Also in this issue is the second of a three-part series exploring Matauranga Maori and Science (page 37). Do read about how to make Paul King’s ingenious flame wave model – every school should have one (page 45). So, along with all our other regular features, this issue has everything you need to know about sound and more. I would like to thank all the contributors and their colleagues who, directly or indirectly, ensured that this issue of the NZST is a ‘must keep’ resource for all science teachers, with its comprehensive range of articles about sound. Thank you – your time and expertise, given freely and willingly, are appreciated by science teachers. The quality and variety of scientific research being undertaken in NZ depends on two things: government funding and a good supply of high-quality, motivated students. While you as teachers have little influence over the former, you are essential for the latter. May this issue inspire you to inspire your students. Kind regards
NZASE directory National Executive
President: Jenny Pollock, Nelson College for Girls firstname.lastname@example.org Tel: 03 5483070 or 021 129 3174
Junior Vice President: Lindsey Conner, School of Sciences and Physical Education College of Education, University of Canterbury email@example.com
Treasurer: Carolyn Haslam, The Faculty of Education, The University of Auckland, Private Bag 92601, Symonds St Auckland firstname.lastname@example.org
Administrator: NZASE administrators email@example.com Tel.: 03 546 6022
Standing committees:
Biology: Jacquie Bay firstname.lastname@example.org
Chemistry: Suzanne Boniface Suzanne.Boniface@vuw.ac.nz
Physics: Dave Housden email@example.com
Primary Science: Ian Milne firstname.lastname@example.org
PEB/Science: Jenny Pollock email@example.com
Technicians: Margaret Garnett firstname.lastname@example.org
For contact details for regional associations visit: http://www.nzase.org.nz/regionalassociations.html
Welcome to you all again. Congratulations to Ian Milne and his teams for the very successful Primary Science conferences that were held in April in the four main centres. They continue to inspire teachers. And good luck to the organisers and committees of the conferences to be held in the July school holidays. I am sure they will be very interesting and worthwhile. Did you know that the NZASE has six separate science standing committees? They recognise the fact that science is taught at primary and secondary school before branching into separate disciplines in senior secondary school. In addition, most regions have branch associations; and most of this work is done voluntarily, with only a limited amount of paid part-time administration assistance. In April, the PPTA ran a two-day forum highlighting concerns about workload, which aims to develop better support for these essential organisations. Our Association and subject associations have two key areas of concern: the need for members to volunteer to be on committees; and the need to develop contract negotiating and management skills. Advocacy is a key aspect of the work our subject associations undertake on members’ behalf. Most of the work is voluntary and most of the volunteers are drawn from teachers or Colleges of Education lecturers, who already have heavy workloads. There is only one association that is able to afford a full-time executive officer (and only because of sponsorship), and a few are able to afford part-time administration assistance. Yet the workload for office holders can be enormous at times. Therefore, recruiting new people to the various committees can be a problem. With hectic job schedules and family lives, many teachers or lecturers are unable to take on such positions. And even when there are fewer family commitments, many people have middle or senior management positions and/or elderly parents to worry about. Some are simply too tired and wish to have some evenings and weekends free.
There can also be health issues with older people. But these associations need a balance of older experienced people plus younger people with fresh perspectives. One way that subject associations have already managed to lessen the workload is by having revolving executives, which means that no one is an officeholder for longer than a few years. The advantages are that new people come in with fresh ideas and energy, and less than satisfactory people move on relatively quickly. This is
a good model and one that should be retained. However, there are concerns that institutional knowledge must not be lost, because it is vital for the day-to-day running of the association and to ensure that the subject flourishes. Therefore, the association must retain the ability to identify personnel who can develop the subject and effect changes when required. As there are fewer Ministry of Education or NZQA officials who have long-term experience of individual subjects within their agencies, this retention of knowledge by the subject associations is crucial, and is evolving into one of their primary roles. This means, therefore, that associations must develop effective electronic or paper storage, plus a system of reporting to incoming officeholders. Another area of concern is the need to develop contract negotiation and management skills within the subject associations. Because the Ministry of Education has stated that they wish to maintain a relationship with subject associations, it is vital that we continue to develop and maintain these skills. To help subject associations, members of the forum and the PPTA are putting together contract guidelines that associations can access when required. Contracts, such as those for the writing of resources and assessment materials, can be a great source of revenue, but also a source of tension with members. While many of us have had experience with smaller contracts, few have had any experience working on contracts such as that for the alignment project. In the future, subject associations will need to negotiate contracts for the development of resources and to advance a subject and its assessment. This can create tension between meeting the conditions of a contract and the needs of members. And it can be difficult, sometimes, for members to appreciate the intricacies of some contractual processes, especially when confidentiality clauses have to be signed.
Finally, members are an essential component of the NZASE and subject associations. The greater the membership, the more services we can offer. With this in mind, we want to move towards individual membership so that we can expand our services. We are also considering expanding the number of officeholders. We would like to hear from anyone who is able to contribute to the running of the NZASE, either at regional, standing committee, or national level. Jenny Pollock President NZASE
we need your help!!!!
Join the NZASE today! Did you know that for as little as $50 per annum you can become an individual member of the NZASE? So contact us today and receive your own copy of each issue of the NZST and the newsletter, plus access to all of our member-only benefits. For further information and subscription details email: email@example.com or visit our website www.nzase.org.nz
sound and its uses Sound is both a physical phenomenon and a perceived sensation, as George Dodd, Acoustics Research Centre, School of Architecture and Planning, University of Auckland explains: We use sound to communicate, for entertainment, and to warn of danger; and it doesn’t switch off when we sleep. But what exactly is it? Sound is both a physical phenomenon and a perceived sensation. The terms we use for this are: objective sound (i.e. the flow of mechanical energy in the form of vibrations of a transmitting material); and subjective sound (i.e. what we say we ‘hear’ when a suitable objective sound provides energy to work our hearing systems). Objective sound is a form of energy (namely kinetic and potential energy) that can be used for doing all sorts of work (as well as exciting our ears), and because it is in the form of waves covering, potentially, an enormous frequency range, it has some rather surprising applications. However, its most familiar connection is with hearing, so let’s look at that first.
Sound and hearing
It is tempting to view our ears as simple microphones where the eardrum – acting as the microphone diaphragm – picks up the vibrations in the air and transduces them into an electrical signal which it then sends to the brain on nerves operating like electrical conductor wires. In reality it is much more complex than this, with perhaps the most surprising difference being that the auditory cortex (the brain region primarily dedicated to hearing) is continuously sending back control signals to the transducer part of the ear (the cochlea) to ‘tune’ it from moment to moment. This tuning changes its behaviour in response to the type of incoming objective sound. What is astounding is that the cochlea – which evolved two to three hundred million years ago – is actually operating primarily as an analogue-to-digital convertor! An array of hair cells in the cochlea (each one essentially a 0-1 binary generator – see the Jeremy Corfield and Fabiana Kubke article on page 20 in this issue) produces a complex digitised version of the objective sound signal which, on its way to the cortex and to becoming subjective sound, is further processed in what can be thought of as a form of delta modulation (as used in sophisticated A/D convertors to simplify the signal during transmission). So ‘hearing’ involves complicated processes, and as a result, the conversion of an objective sound into the corresponding subjective sound is far from instantaneous. This conversion time (or integration time) varies with the type of sound (e.g. speech or music), and also depends on the particular listener and their age, but typically it is in the range of 40–80 milliseconds. The complex and non-linear response of our hearing system fooled the early acousticians (the term for scientists and engineers who specialise in sound) and we have inherited one of their major errors, the decibel (dB) – which is still in general use today!
The nineteenth century saw the beginnings of experimental psychology (then called psycho-physics), and these early experiments suggested that the size of a human sensation (e.g. subjective sound) is proportional to the logarithm of
the magnitude of the stimulus (e.g. the objective sound). So pioneer acousticians chose a measure based on the log of the size of the objective sound (i.e. the log of the air pressure variations) in order, they hoped, to produce numerical values which would predict linearly the loudness of the corresponding subjective sounds. We now know that this is far from the truth, and it turns out that a power law is a much closer model than a log law. Despite this, the dB remains entrenched, largely because of its greater convenience for engineering calculations! For the average normally-hearing person, subjective sound begins at 0dB. But this is not zero objective sound! This starting point for subjective sound corresponds to variations in air pressure of about 20 millionths of a pascal (compare this with a typical atmospheric pressure of around 10⁵Pa). That we can detect so minute a disturbance in the air bears witness to how astonishingly sensitive our ears are. It is easy to understand, therefore, why our cochleas are situated in what are arguably the best-protected spots in our bodies! Also, when so little energy is needed to create subjective sound, we can understand why loudspeakers – and especially those classed as hi-fi loudspeakers – can be tolerated even though their efficiency is rarely more than about 1%. Even though 99% of the input electrical energy is wasted, we still have enough acoustic power radiated to produce loud (usually too loud!) subjective sound.
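The dB scale described above is easy to demonstrate numerically. The sketch below (Python) converts an RMS sound pressure into sound pressure level using the standard 20 micropascal reference; the example pressures are illustrative values, not from the article:

```python
import math

P_REF = 20e-6  # reference pressure: 20 millionths of a pascal, the nominal 0 dB threshold

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in decibels for an RMS pressure in pascals:
    SPL = 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # threshold of hearing -> 0.0 dB
print(spl_db(1.0))    # ~94 dB, a typical microphone-calibration level
print(spl_db(20.0))   # 120 dB, approaching the threshold of pain
```

Note the compression the log performs: a million-fold increase in pressure (20 µPa to 20 Pa) spans only 120 dB.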
Sound versus noise A common description of ‘noise’ is that it is unwanted sound, but this is not a formal definition to be found in laws or standards. Many people take the view that it is not possible to define noise because “one person’s music is another person’s noise”. The mistake being made here is the lack of distinction between objective and subjective sound. Any sound can become noise when it makes us listen to it against our will – i.e. takes our attention away from our chosen activity. But a more fundamental distinction is recognising the important difference between hearing and listening. Listening implies an additional stage beyond merely hearing. It is the act of attending to the sound, recognising it and thus taking information from it. If we only hear a sound (i.e. there is little or no cortical processing) it is not impacting on our activities, we do not take information from it, and so we may describe it as noise. But such noise, as a background to our activities, can still be useful if it masks (covers up) other sounds which would otherwise distract us. Injecting masking noise through hidden loudspeakers into open-plan offices is a common technique to help separate neighbouring workstations acoustically. So noise is not always unwanted sound! We can therefore formally define noise as: subjective sound which we merely hear, or would choose to merely hear (i.e. would choose not to listen to). This definition allows sounds which we do listen to (because they force themselves on our attention) to be described as noise, and also allows noise environments to be categorised for their severity based on the degree and the amount of time they distract us.
Figure 1: The interconnected Reverberation Chambers at the Acoustics Research Centre, University of Auckland.
Figure 2: View into the Anechoic (non-reflecting) Chamber at the Acoustics Research Centre.
Thus, legislation or by-laws against noise intrusion are too simplistic if set purely on the basis of a sound level not to be exceeded. However, in some cases setting a level which would ensure inaudibility would clearly work. This then invites the question: how do we insulate against sound to reduce its strength?
However, the major feature for simple walls is to have as much mass as possible. The heavier we can make a wall, the smaller will be the movement induced – by Newton’s second law – and hence the less will be the amount of re-radiated sound.
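How much insulation mass buys can be estimated with the empirical “mass law”, a standard acoustics textbook approximation (not given in the article): at normal incidence the transmission loss of a single solid wall rises by roughly 6 dB for every doubling of surface mass or of frequency. A minimal sketch:

```python
import math

def mass_law_tl(f_hz: float, surface_mass_kg_m2: float) -> float:
    """Approximate normal-incidence transmission loss (dB) of a single
    solid wall, via the empirical mass law: TL ~= 20*log10(f*m) - 47."""
    return 20 * math.log10(f_hz * surface_mass_kg_m2) - 47

# Doubling the surface mass gains about 6 dB of insulation:
print(mass_law_tl(500, 10))  # ~27 dB for a light panel at 500 Hz
print(mass_law_tl(500, 20))  # ~33 dB with twice the mass
```

The “- 47” constant and the example masses are representative figures; real walls deviate from the mass law near resonances and at the coincidence frequency.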
Materials for controlling sound
Science programmes on television have made people aware of the technique of so-called active noise control, where noise is shown as being ‘cancelled’ by an inverted or mirror-image sound wave mixed with it. This has the unfortunate result of implying that two streams of energy (i.e. the noise and the anti-sound waves) can annihilate each other, in contravention of the conservation of energy principle. Practical applications of the idea take a variety of forms, but in each case energy is definitely conserved. In some cases the anti-sound blocks the movement of air molecules so that the noise is reflected away; in other cases the source itself is blocked from moving so that it is incapable of radiating its energy as sound. Whilst this idea is extremely attractive, because it requires in principle nothing more than sound itself to prevent noise transmission (it was originally patented in Germany before World War II and classed as a military secret because it was going to render the approaching army silent!), it has largely failed to realise its hoped-for potential. The major reason for this is that to silence a noise, the anti-sound source must be physically in exactly the same position as the noise source, and clearly this is impossible. Arrangements which achieve approximations to this can produce noticeable cancellation, but it is either limited to low frequencies or restricted to a small region in space. A simple experiment to demonstrate and explore the principle is to take the two loudspeakers of a stereo hi-fi system and invert the connections to one of the loudspeakers. With the amplifier driving them switched to mono and playing music (or, even better, white noise), gradually move the loudspeakers towards each other. As they get closer together we hear the volume dropping, with the low (bass) frequencies disappearing first.
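The stereo-loudspeaker experiment can be sketched numerically. For two equal sources of opposite polarity whose paths to the listener differ by a distance d, the residual amplitude of the sum is 2|sin(kd/2)| with k = 2πf/c; a quick check (assuming c = 343 m/s and a 0.2 m separation, both illustrative values) shows the bass disappearing first:

```python
import math

C = 343.0  # speed of sound in air, m/s (assumed)

def residual_amplitude(freq_hz: float, separation_m: float) -> float:
    """Residual amplitude of two equal, opposite-polarity unit sources
    with path difference d: |1 - exp(-i*k*d)| = 2*|sin(k*d/2)|."""
    k = 2 * math.pi * freq_hz / C
    return 2 * abs(math.sin(k * separation_m / 2))

# With the speakers 0.2 m apart, low frequencies are almost fully
# cancelled while higher frequencies largely survive:
for f in (50, 200, 1000, 4000):
    print(f, round(residual_amplitude(f, 0.2), 3))
```

As the separation shrinks towards zero, kd/2 becomes small at ever higher frequencies, so the cancellation climbs up the spectrum, matching what we hear in the experiment.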
At the University of Auckland’s acoustic chambers we measure the ability of materials and constructions to either absorb or insulate against sound. It is a common misconception that these are the same process. Although it is true that sound-absorbing materials do take in some sound energy and – by virtue of converting it to heat – not let it out again, the efficiency of this at practical thicknesses of material is far too small to make the reduction of sound useful for insulation purposes. Our TV advertisements which proclaim the virtues of batts of porous materials as sound insulators are quite misleading. These porous batts, despite their strictly limited ability to dissipate sound energy, are, however, able to suppress resonances in wall structures successfully. These resonances would otherwise lower the inherent ability of the walls to insulate against incident sound, and it is for this reason that absorbing material has a useful role to play in insulating against sound. The principle of insulating against sound is a simple one – reflect the sound away from the place we wish to protect. Interestingly, the way we describe sound heard on the other side of a wall as “coming through” the wall has often led to the notion that the sound waves have somehow percolated between the molecules of the material of the wall to re-appear on the other side, with the rider that the more difficult the process of percolation, the more insulating the wall will be. In fact the truth is quite different: we hear the sound because the wall is actually being moved by the force of the incoming sound waves. The resulting vibration of the wall causes it to re-radiate sound – just as if it were the diaphragm of a loudspeaker – on the other side. So the more difficult a wall is to vibrate, the more insulating it will be. If the sound waves cannot transfer their energy into wall vibration they will be reflected.
The design of insulating walls becomes a quest to produce constructions that resist being vibrated. At this point things become rather complicated by the fact that sound, at frequencies we can hear, spans a large range of wavelengths (roughly 20mm to 12m) and wall vibrations can happen in different ways at different wavelengths.
A new development which is the subject of research around the world – including in New Zealand – is the idea of creating structures which use the wave nature of sound to diffract it, or to produce destructive interference with itself, which can then be used for insulation purposes. This is done by fashioning the detailed internal structure of non-homogeneous partitions (e.g. composite materials with periodic arrangements of different masses). This may eventually lead to the possibility of acoustic ‘cloaking’ which – as with its parallel in light waves – renders an object ‘invisible’ to sound when diffraction around it is made complete.
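As an aside, the 20mm-to-12m wavelength span quoted for audible sound follows directly from the relation λ = c/f; a quick check (assuming c ≈ 343 m/s in air):

```python
C = 343.0  # speed of sound in air at roughly 20 degrees C, m/s (assumed)

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres of a sound wave in air: lambda = c / f."""
    return C / freq_hz

# The audible band spans roughly three decades of wavelength:
print(wavelength_m(28))     # 12.25 m: deep bass
print(wavelength_m(17000))  # ~0.02 m (20 mm): near the top of hearing
```

This enormous span is why no single wall behaves the same way across the whole audible range.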
Lesser known applications for sound Because sound is mechanical energy transmitted by waves, some intriguing and, perhaps, surprising uses have been found which make use of these features. For example, sound waves of very high frequency (into and beyond the megahertz range) can be created on the surface of special materials to make components for filtering and processing signals (see Figure 3). These are known as Surface Acoustic Wave (SAW) devices and they have significant advantages over electronic components in certain applications. Every time you turn on your mobile phone or colour TV you are using a SAW device. SAW ‘tags’ are nowadays so cheap to produce that they are routinely included in packaging for remote tracking of products and luggage.
Figure 3: Surface acoustic waves on a crystal of tellurium oxide, coated with a thin gold film of thickness 40nm. Image courtesy of the Applied Solid State Physics Laboratory, Division of Applied Physics, Graduate School of Engineering, Hokkaido University, Sapporo, Japan.
However, the more surprising developments are where sound energy is providing new medical treatments and new forms for conventional machinery. Medical uses of ultrasound for imaging inside the body are well known. The sound waves reflect off internal structures and the reflection pattern is processed into a video image, allowing us to see the detail of those structures. What governs whether the sound is transmitted or reflected is the acoustic impedance of the tissues and bones. This is entirely analogous with electrical signals: where there is a difference of impedance the sound is reflected – the greater the impedance difference, the stronger the reflection. The same technique is used for ‘seeing’ underwater where it is too murky for light, and in seismic imaging for underground prospecting for minerals and oil (refer to Peter Gough’s article on sonar, page 7). In the latter case the sound is of a rather different strength and spectrum. Perhaps the most exciting developments are where ultrasound is being used at high intensities in medicine. Brief bursts of high-energy ultrasound focussed onto kidney stones create shock waves that break up the stones, which are pulverised into powder and then excreted by normal processes. If the ultrasound is continuous instead of being used in brief bursts, its dissipation by body tissues can result in significant heating. Heating is the basis of cauterisation for sealing blood vessels and for cutting off the blood supply to tumours and cancers, so ultrasound devices are being developed to do this job. The advantage is that the heating can be applied without needing to make incisions into the body. The ultrasound is focussed so finely that intense heating is targeted onto very small areas. The sound energy enters the body spread over a large enough area that intensities are too low to cause significant heating, whilst at the focus of the beam the intensity is high enough to produce the required temperature. Now termed the ‘acoustic scalpel’, these devices are under development as tools for surgery in new areas (e.g. ablating tumours in the liver) and for near-instantaneous sealing of injured or damaged blood vessels. A development which we may see coming into daily use in our homes is the acoustic fridge which, like a conventional refrigerator, creates a temperature gradient to move heat. It is usual to consider sound waves as simply producing patterns of pressure variation in the air, but this pressure variation is not possible without a number of other accompanying effects. As the air molecules are moved, their density changes and they are given a velocity and accelerated; because all this happens rapidly, the temperature is modulated too (at frequencies higher than a few hertz the process is essentially adiabatic). By creating a high-intensity standing wave in a tube we can produce a large enough temperature gradient to move significant amounts of heat energy. This is the principle of the acoustic fridge which – because it has no moving parts save for the sound wave – was originally developed for cooling satellites in space, where there is little chance of access for maintenance and repairs.
Figure 4: Acoustic refrigerators: (a) The original Space Acoustic Fridge which made its maiden voyage into space in January 1992, (b) Small scale acoustic fridge developed at the Laboratory of Acoustics, Université du Maine. Image (a) courtesy of NASA and Jay Adef & Tom Hofler of the Naval Postgraduate School, Monterey, California, USA. Image (b) courtesy of LAUM, Université du Maine, Le Mans, France.
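The role of acoustic impedance in ultrasound imaging, described above, can be made quantitative: at normal incidence the fraction of incident intensity reflected at a boundary is ((Z₂ − Z₁)/(Z₂ + Z₁))². A sketch with representative textbook impedance values (the figures below are standard estimates, not from the article):

```python
def intensity_reflection(z1: float, z2: float) -> float:
    """Fraction of incident sound intensity reflected at a boundary
    between media of acoustic impedance z1 and z2, at normal incidence:
    R = ((z2 - z1) / (z2 + z1)) ** 2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Representative characteristic impedances in rayl (Pa*s/m):
Z_AIR = 415.0
Z_SOFT_TISSUE = 1.63e6
Z_BONE = 7.8e6

print(intensity_reflection(Z_SOFT_TISSUE, Z_BONE))  # ~0.43: bone echoes strongly
print(intensity_reflection(Z_AIR, Z_SOFT_TISSUE))   # ~0.999: why coupling gel is used
```

The near-total reflection at an air/tissue boundary is why a scan needs gel between the probe and the skin: any trapped air would send almost all the sound straight back.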
Conclusion In conclusion, we have seen that whilst sound is indispensable for its role in communication and in entertainment, it also has other important uses in our lives. These arise from the fact that (a) it is an energy flow, (b) it exists in the form of waves and (c) at high enough intensities it can significantly change the physical properties of materials it travels in. To control the flow of sound we rely principally on blocking its path with objects of sufficient inertia (mass) to reflect the energy away and, to a much lesser extent, on materials which dissipate it into heat. For further information contact: firstname.lastname@example.org
underwater sound waves & sonar How can we see underwater when the visibility at optical frequencies is so poor? Professor Peter T Gough, Acoustics Research Group, University of Canterbury, explains: The brightly coloured fish swims up to the camera lens, looks puzzled at the out-of-this-world apparition, then gracefully moves away into the coral. We have all seen wonderful natural history videos of our underwater world on the Great Barrier Reef or off the Poor Knights Islands. However, for most of the world’s shallow-water environments, it isn’t like that at all. High-volume rivers bring down massive loads of fine silt into the estuaries and harbours, clouding the water until, in places like Lyttelton Harbour, you can seldom see more than one metre in front of the lens. So how can we see underwater when the visibility at optical frequencies is so poor? Well, the answer can be found by looking at marine mammals that use sound to ‘see’, as well as to communicate, underwater. I use the word ‘see’ in inverted commas as they don’t see with sound the same way as we do with visible light. Instead, they form a mental image of the environment by transmitting short pulses of directional sound (we call them ‘pings’, mainly because that is exactly what they sound like) and using their ears as receiving apertures to locate the direction and range of the reflected echoes. The visual equivalent of this process would be to stand in a darkened room with a hand-held torch that flashed (i.e. pinged) for, say, 1/10 sec every second. By sweeping the torch around in circles and holding a mental picture of the room from all the previous flashes, a full appreciation of the room and its contents could be obtained. But that rather limited analogy does not explain exactly how we ‘see’ with sound. First, we need to explain how sound (i.e. changes in ambient pressure) travels through a medium such as water.
How does sound travel in fluids?
Acoustical energy is able to propagate through materials such as water (indeed, any fluid), even if the fluid is completely opaque to visible light, because sound is quite different to light in the mechanism of its propagation. Imagine a huge 3-D stack of marbles arranged in a cube, with six little springs holding each of the marbles in place relative to its six nearest neighbours. If we now push against the marbles that make up one side wall, the adjacent horizontal springs compress along the line of the original displacement and these marbles are displaced slightly away from the side wall. These, in turn, compress the springs on the downstream side, which push on their nearest neighbours, which in their turn push on their nearest neighbours, until the original horizontal displacement reaches the opposite side wall. This is how sound travels in fluids (and in solids), so we need to know the density (the weight of marbles per cubic metre) and the spring constant (the bulk compressibility) to determine how fast a disturbance will travel through the bulk material. For example, sound travels through water at about 1500m/s and through air (with a lower density of marbles and much weaker springs) at about 330m/s. Of course, both these numbers are temperature and pressure dependent, which is why the speed of sound
in water changes with depth and the speed of sound in air changes with altitude. Waves that travel by compressing the springs along their length are called compressional waves because the underlying mechanism is to compress the springs. For fluids, only compressional waves exist, but for solids it can get a lot more complicated, with the possibility of shear waves as well. To understand how shear waves propagate, consider again our cube-of-marbles model, this time as a 'solid' with heavy marbles and very stiff springs. Now imagine that instead of pushing on the side wall, we somehow shift the whole side wall up and down in a vertical displacement. All the springs adjacent to the displaced wall will stretch slightly, pulling the next vertical layer of marbles up and down, which stretches their springs, and so on, and the vertical displacement travels across the cube. This is known as a shear wave, since shearing is the underlying mechanism that transmits the original displacement. Shear waves travel very slowly compared to compressional waves and die away more rapidly. The combination of shear and compressional waves is of great value to acoustical engineers and scientists in geology, who often use explosives to generate a whole collection of different sound waves and then interpret the multiplicity of overlapping compressional and shear wave echoes to see into the underlying structure of the Earth's crust.
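The marble-and-spring picture translates directly into the formula c = sqrt(K/ρ), where K is the bulk modulus (the "spring constant" of the medium) and ρ its density. As a quick check, here is a short Python sketch; the bulk modulus and density figures are rough textbook values, not taken from the article:

```python
import math

def sound_speed(bulk_modulus_pa, density_kg_m3):
    # c = sqrt(K / rho): stiffer springs -> faster, heavier marbles -> slower.
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# Rough textbook figures (assumed, not from the article):
c_water = sound_speed(2.2e9, 1000.0)  # water: K ~ 2.2 GPa, rho ~ 1000 kg/m^3
c_air = sound_speed(1.42e5, 1.2)      # air: adiabatic K = gamma*p ~ 142 kPa

print(round(c_water))  # roughly 1480 m/s
print(round(c_air))    # roughly 340 m/s
```

Both values land close to the speeds quoted above, which is reassuring given how crude the marble model is.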
Types of sonars and imaging
But back again into the water. Engineers have built a range of different sonars (Sound Navigation and Ranging) with widely different applications. The simplest is a listen-only sonar that analyses the incoming sounds and tries to extract something useful from them. These are usually known as 'passive sonars', and the first recorded use of a listen-only passive sonar was by the great Italian engineer Leonardo da Vinci in 1490. The description of the sonar operations in the thriller 'The Hunt for Red October' was pretty close to the mark, with the skilled operator detecting the vaguest suggestion of propeller wash in the general cacophony of underwater noise. Folklore has it that a skilled operator could tell you the make and model number of a submarine from the unique noise made by its propeller.
Active sonars
Although passive sonars may be the type most in the public consciousness, active sonars – where the sonar transmits a continuous sequence of identical pings and receives all the echoes from its own transmissions – are far more common. One of the fundamental parameters of any active sonar is the angular width of the beam of sound that is transmitted. Recall the torch analogy, and you may also recall, from the collection of torches most of you will have in your home, that the torch with the biggest reflector is the one with the narrowest long-range beamwidth. A small-aperture penlight sprays light over a wide area, whereas a large-aperture spotlight throws a narrow beam a long distance. Not surprisingly, sonar projectors (underwater loudspeakers) are no different; the bigger the aperture, the narrower the beamwidth and the higher the
power density at the reflecting target location. Once it is placed within the illuminating beam of sound, the target intercepts a fraction of the transmitted energy and sprays it all around, with some small fraction of it reflected back towards the sonar. These echoes are picked up by the sonar's hydrophone (an underwater microphone) and turned into electronic signals. By measuring the time delay from the onset of the transmitted signal to the onset of the received echoes, we can determine the distance or range to that target, and knowing where the sonar projector is pointed, we more or less know the direction of the target. Once all the echoes from targets located at the furthest range have died away (which might take a second or so), we can transmit another ping and repeat the whole process. The rate at which the pings are repeated (the ping rate) depends on the distance of the target of interest. Sperm whales hunting squid, for instance, use a low ping rate (perhaps one ping every two seconds) while they are blind searching and then up the ping rate to perhaps ten per second once they detect, and then home in on, the squid. There is a simple mathematical formula that predicts the signal strength of the echo given the range to the target, the acoustical power of the transmitted ping, the size of the transmitting and of the receiving apertures, and the acoustical reflectivity of the target at the frequency used. Actually, this formula was originally developed for active radar, the electromagnetic equivalent of, and precursor to, active sonar (although to be fair some would argue that sonar came first, with the evolution of bats and marine mammals).
Side-scan sonars
Active sonars that look ahead or are steered around are, not surprisingly, called forward-looking, or sector-scanning, sonars.
Another frequently used sonar geometry is to mount the sonar to look horizontally sideways from a platform (which may be towed behind a boat, mounted on a submarine or, more frequently, carried on an Unmanned Underwater Vehicle (UUV)), in which case it is called a side-looking or side-scan sonar, which is designed to produce an image of the seafloor. Strictly speaking, we are producing an optical image that represents the acoustic reflectivity of the seafloor, which is not quite the same thing. What we are attempting to do is to form an image that you might see with a camera looking down on the seafloor if somehow we could strip the water away and illuminate the area of interest from the side. It is the seafloor equivalent of an aerial photograph and doesn't appear to have a biological equivalent. For such an imaging sonar, it is easier to think of the sonar as moving in discrete steps (the stop-and-hop scenario). The sonar sends out a single ping, receives all the echoes from close in to the furthest range, and then hops to the next position along the track before it sends out its next ping. It is also common to define the y axis as the 'along the track' dimension (i.e. in the direction of travel) and the x axis as the 'across the track' dimension (i.e. in the direction of the acoustic propagation). So if we think of a two-dimensional image in x and y, we can associate a y value with a position of the side-looking sonar, and thus the echoes for each transmitted ping will plot out a line of pixels in x, with the pixel's x position being delay time (and, with suitable scaling, range in m) and the pixel intensity being proportional to target size (or target strength, as it is more properly defined). Now a typical image display such as a TV or a computer monitor would typically have 1200 pixels in x and 1000
pixels in y, so it is not surprising that many side-looking imaging sonars chop up the continuously produced image into discrete blocks of this size. An example of the image that we get from a seafloor is shown in Figure 1. The seafloor in this image was a more or less flat sandy base with a few man-made objects sitting on, or even slightly buried in, the sand.
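The delay-to-range-to-pixel bookkeeping described above can be sketched in a few lines of Python. The 1500m/s sound speed and 1200-pixel row come from the discussion above; the 150m maximum range is an assumed figure for illustration only:

```python
SPEED_OF_SOUND = 1500.0  # m/s in seawater (approximate)

def echo_range(delay_s, c=SPEED_OF_SOUND):
    # The ping travels out and back, so the one-way range is half the round trip.
    return c * delay_s / 2.0

def pixel_x(delay_s, max_range_m=150.0, pixels_x=1200):
    # Map an echo's delay onto an across-track pixel column, assuming the
    # display scales 0..max_range_m across pixels_x columns (assumed values).
    return int(echo_range(delay_s) / max_range_m * (pixels_x - 1))

def max_ping_rate(max_range_m, c=SPEED_OF_SOUND):
    # Wait for the furthest echo to return before transmitting the next ping.
    return c / (2.0 * max_range_m)

print(round(echo_range(0.1), 2))  # 75.0 -> a 100ms delay puts the echo at 75m
print(pixel_x(0.1))               # 599  -> about halfway across a 1200-pixel row
print(max_ping_rate(150.0))       # 5.0  -> five pings per second at 150m max range
```

Note how the maximum range directly caps the ping rate, which is the "wait for the furthest echoes to die away" constraint mentioned earlier.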
Figure 1: A typical side-scan sonar image. Note the shadows cast by the objects sitting proud of the seafloor.
One of the long-term results of warfare, or even civil unrest, is often the widespread, indiscriminate distribution of landmines. Innocent civilians are often killed and maimed by unexploded ordnance long after the event itself. Well, some sea lanes are equally vulnerable, and the mining of international trade 'choke points' such as the Hormuz or Malacca Straits has the potential for horrific human and economic consequences. Mine detection and localisation would be one of the obvious uses of a side-looking imaging sonar although, like aerial photography, general area mapping and resource monitoring have significant value to individuals and to countries. One important benchmark of this mode of operation is how much area the sonar can image in a fixed period of time. This benchmark is called the mapping rate, and it is determined by the boat speed (usually around 1.5 to 5m/s) multiplied by the maximum distance at which the sonar can detect echoes before they fade into the background noise (and detection theory is such that the signal of interest needs to be about 3 times larger than the RMS noise for reliable detection). So we transmit a certain amount of energy in each ping. That energy spreads on the way out by a factor of the distance squared; some of this energy hits the target of interest and spreads on the return echo path by the same distance squared. Since the overall spreading loss is now distance to the fourth power, it can be seen that the echo strength dies away very rapidly with increasing distance. Eventually the echoes fade into the noise. Mostly the ambient sea noise is at lower frequencies (shipping noise, surf action etc.), but some noise is generated in the frequencies employed by side-looking sonars, and by far the worst is that generated by clouds of snapping shrimp.
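The two ideas above, the mapping rate and the 'distance to the fourth power' spreading loss, can be put into a short Python sketch. The source level and target strength figures below are invented for illustration, not from the article:

```python
import math

def mapping_rate(boat_speed_m_s, swath_m):
    # Area imaged per second = along-track speed x usable swath width.
    return boat_speed_m_s * swath_m

def echo_level_db(range_m, source_db=200.0, target_strength_db=-20.0):
    # 20*log10(r) spreading on the way out plus 20*log10(r) on the way back
    # gives 40*log10(r) overall -- the "distance to the fourth power" loss.
    # Source level and target strength are illustrative values only.
    return source_db + target_strength_db - 40.0 * math.log10(range_m)

print(mapping_rate(2.5, 300.0))  # 750.0 -> square metres of seafloor per second
print(round(echo_level_db(100.0) - echo_level_db(200.0), 1))  # 12.0 -> dB lost by doubling range
```

The 12dB-per-doubling figure shows why echoes fade into the noise so quickly as range increases.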
Fortunately Lyttelton is far enough south to avoid this problem, but Sydney Harbour is notorious; it is so bad there that when the snapping shrimp come out to play, it is time to take the boat home for the day.
Waveform of transmitted pings
An area of great research interest to the underwater sonar community is the exact waveform of the transmitted pings and the frequencies used inside the ping, and here we are well behind our cetacean relatives. The simplest waveform for us to create and transmit is a short pulse of a constant frequency (either eight or ten cycles at 100kHz is typical). In this case, the resolution in range is more or less the same as the physical length of the pulse in space and, in this example, about 15cm. However, a more sophisticated waveform is the swept frequency pulse, often called a chirped pulse; again because that is exactly how it would sound at audible frequencies. A chirped pulse lasts a lot longer than a simple ping, and the frequencies change dramatically within the duration of the pulse (perhaps with 1000 cycles starting at 90kHz and ending at 110kHz). This 'chirped' waveform greatly increases the energy pumped into the water without a peak power increase, but it comes at the expense of a significant amount of processing necessary to interpret the echoes after they have been detected. It is only recently that computing power has enabled us to do this type of processing 'on-the-fly', but cetaceans and bats have always done it. In fact, bats have the most amazing sonar capabilities, with huge frequency ranges and unbelievably precise angular resolutions to levels I suspect humans will never achieve. The fact that they can navigate and use collision avoidance strategies when they congregate in the hundreds of thousands, if not millions, is truly staggering. Watching uncountable numbers of close-formation bats flying from their caves in Mexico and heading north at nightfall is one of the great natural wonders of the acoustic world. (In an entirely irrelevant aside, some US scientists figured out that Mexican bats eat many tons of insects every night that would otherwise make it impossible for Texas to grow crops such as cotton.)
A question I am asked frequently is the reason for whale and dolphin strandings. I should make it clear that I am not a specialist in cetacean biology, and there is clearly a range of possible reasons; some biological and some physical. However, from a signal processing perspective, we can certainly surmise that in very shallow water, their active sonars somehow supply contradictory signals that the mammals cannot interpret. (Coming back to my flashing torch analogy – imagine trying to navigate down a cluttered, darkened hallway when the side walls are made of mirrors.) Once the pod is trapped on the beach, the animals panic and cannot distinguish the inland from the seaward direction. Another related question is whether the human use of sonars has increased the occurrence of strandings. Again emphasising that I am not a specialist in cetacean biology, I can only report that whales and dolphins do not appear to be disturbed in any way by the low-end depth sounders used by most pleasure, and all commercial, fishing craft; but whether we can be so confident about the use of geophysical sonars and military sonars, both transmitting at extreme acoustic power levels, is uncertain.
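The pulse-compression processing behind the chirped waveform discussed above can be sketched in Python. This is a scaled-down illustration of the general matched-filter idea (a 50–150Hz sweep rather than 90–110kHz), not the processing any particular sonar uses:

```python
import math

def chirp(n, f0, f1, fs):
    # A linear frequency sweep ('chirp') from f0 to f1 Hz over n samples.
    T = n / fs
    k = (f1 - f0) / T  # sweep rate, Hz per second
    return [math.cos(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(n))]

def correlate(signal, template):
    # Matched filter: slide the known ping along the received signal and
    # sum the products; the output peaks where the two line up.
    m = len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(len(signal) - m + 1)]

fs = 1000.0                               # sample rate, Hz (illustrative)
ping = chirp(100, 50.0, 150.0, fs)        # a 0.1s chirped ping
echo = [0.0] * 50 + ping + [0.0] * 50     # echo arriving 50 samples late
out = correlate(echo, ping)
delay = max(range(len(out)), key=lambda i: out[i])
print(delay)  # 50 -> the correlation peak marks the echo's arrival time
```

The long, low-power chirp compresses into one sharp correlation peak, which is exactly the trade the article describes: more energy in the water, at the cost of extra processing on receive.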
Underwater communications using sound
Although I have concentrated on the active sonar aspect of underwater sound, there is another equally fascinating topic: underwater communications using sound. Many years ago, scientists made some recordings of humpback whales calling to each other using low frequency waves. By modulating the sound frequency and amplitude, as well as using infrasound frequencies, the whales could communicate with each other over huge distances; although with increasing human-generated shipping noise, this must be becoming more difficult. In their modulated form, the calls sounded so haunting that it was tempting to anthropomorphise, call the sounds beautiful and liken them to singing. However, a call was just as likely to be a homing beacon or a more prosaic "I am over here" type of sound. Engineers, too, are now using sound to communicate underwater. The acoustic environment is pretty horrible for reliable transmission and reception, so the techniques we use are very similar to those used by cell phones deployed in a cluttered urban environment. The original signal is first digitally encoded, transmitted through the water, received and then decoded, hopefully without error, to replicate the original signal. Most experimental systems seem to be using a form of modulation called quadrature phase-shift keying (QPSK) and, in its simplest form, seem to be getting about one error in 1000 – fine for speech, but not so good for data, which may need to be retransmitted every time an error is detected. In conclusion, clearly this short article can do no more than touch upon the fascinating topic of underwater sound. For those who wish to know more, Wikipedia is a great start for general knowledge, with the usual provisos about accuracy, and Google Scholar is also valuable as a resource for the more academic research that has been peer reviewed. For further information contact: email@example.com
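The QPSK scheme mentioned above can be illustrated with a toy Python sketch. This models an idealised, noise-free channel, and the Gray-coded phase map is a common textbook choice rather than anything specified in the article:

```python
import cmath
from math import pi

# Toy QPSK: each pair of bits selects one of four carrier phases, and the
# receiver recovers the bits by finding the nearest constellation point.
PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}  # degrees (Gray-coded)

def modulate(bits):
    pairs = [(bits[i], bits[i + 1]) for i in range(0, len(bits), 2)]
    return [cmath.exp(1j * pi * PHASES[p] / 180) for p in pairs]

def demodulate(symbols):
    points = {cmath.exp(1j * pi * deg / 180): b for b, deg in PHASES.items()}
    out = []
    for s in symbols:
        nearest = min(points, key=lambda p: abs(s - p))  # nearest-neighbour decision
        out.extend(points[nearest])
    return out

message = [0, 1, 1, 1, 1, 0, 0, 0]
print(demodulate(modulate(message)) == message)  # True -> bits survive the round trip
```

In a real underwater channel, noise and multipath echoes smear the received points away from the ideal phases, which is where the roughly one-in-1000 errors come from.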
ask-a-scientist created by Dr. John Campbell
The 5000-year-old man, the 'ice mummy', was dated by carbon-14 dating. What is this and how does it work? Edward Winter, Heaton Normal Intermediate School.
Scientist Tom Higham, a radiochemist then with the Radiocarbon Dating Laboratory at Waikato University, responded:
A living organism is constantly incorporating carbon into its body through food uptake, which builds bones, skin and hair. A small part of the carbon we consume is called 'radioactive' carbon, or simply radiocarbon. Radioactive means that it has an unstable atomic structure, so after a certain period of time, carbon-14 decays or disappears. As long as a living organism is taking up carbon, it keeps the carbon-14 in its body at a constant level, but when death occurs, the carbon-14 begins to disappear and is not replaced. In the 1950s,
scientists discovered that carbon-14 disappears at a known rate. They found that every 5568 years, half the carbon-14 left in the remains of an organism has gone (the radiocarbon 'half-life'), so by measuring the carbon-14 remaining, they were able to calculate independent ages for carbon samples from archaeological sites. We can date carbon samples from today back to about 60,000 years ago using this method. Tiny fragments of the Iceman's bone, skin and grass from his boots have been dated in two radiocarbon laboratories, in Oxford and Zurich. (A piece of grass or skin about the size of one grain of rice is needed for a date.) The dates placed the age of the Iceman between 3400 and 3100 B.C., or about 5500 years ago, making him the oldest example of a well-preserved human body ever found. For further information: firstname.lastname@example.org
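The half-life arithmetic above translates directly into a short calculation. Here is a Python sketch using the 5568-year half-life quoted above; the final example age is illustrative, not a real measurement:

```python
import math

HALF_LIFE = 5568.0  # the radiocarbon half-life quoted above, in years

def age_from_fraction(fraction_remaining):
    # N/N0 = (1/2)**(t / half_life)  =>  t = -half_life * log2(N/N0)
    return -HALF_LIFE * math.log2(fraction_remaining)

print(round(age_from_fraction(0.5)))   # 5568  -> half remaining = one half-life
print(round(age_from_fraction(0.25)))  # 11136 -> a quarter remaining = two half-lives
# An Iceman-like age of ~5300 years corresponds to about 52% of the
# original carbon-14 remaining:
print(round(0.5 ** (5300 / HALF_LIFE), 2))  # 0.52
```

In practice the laboratory measures the remaining fraction (from the sample's radioactivity or atom counts) and inverts it to an age exactly as above, before calibration corrections are applied.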
underwater sound Many sea creatures have evolved to use underwater sound to find their way around and to find their prey, as Associate Professor Chris T Tindle, of the Physics Department at the University of Auckland explains: We are all familiar with sound in air, but we may not realise that sound travels more freely in water than it does in air. Sound is a series of pressure waves produced by vibrating objects. The sound radiates away from the source, and in air the pressure changes are detected by our ears. Our ears do not work well under water but we can hear loud sounds such as an outboard motor. Underwater sound is readily detected by an underwater microphone called a hydrophone, and it is immediately observed that the sea is very noisy. Sound waves are reflected at the surface and bottom and scattered by objects and surface waves. There is very little attenuation and underwater sound can travel great distances. Many sea creatures have evolved to use underwater sound to find their way around and to find their prey.
Measuring sound
Echo sounders are now well developed and are used routinely to measure water depth. The active element in an echo sounder is a piezoelectric material such as barium titanate. A piezoelectric material changes its dimensions in response to an applied voltage. Echo sounders apply an oscillating voltage to the piezoelectric element, which vibrates and puts a narrow beam of sound waves into the water. Such an element is called a transducer because it converts one type of useful signal to another type – in this case, an electrical signal is converted to a sound signal. The sound pulse is reflected off the bottom and travels back up. Transducers often also work in reverse, and in this case the same transmitting element is used to convert the received sound pulse back into an electrical signal. The time delay between transmitted and received pulses is measured and converted to water depth assuming a sound speed of 1500m/s. Since the sound pulse also reflects off anything in its path, echo sounders are now routinely used as fish finders. In other applications, arrays of transducers lead to highly directional beams and can be used to produce detailed maps of the sea floor or to find objects on the seabed. Many marine creatures have highly developed echolocation. Dolphins can navigate and find prey even in turbulent muddy water. Sperm whales hunt squid at great depth using echolocation. The huge head on a sperm whale is a giant acoustic lens system which acts like an acoustic searchlight to produce a narrow beam. Weddell seals live year round under the sea ice in Antarctica. They must surface to breathe, and so they use echolocation to navigate and find their breathing holes. If a breathing hole has frozen over while they were away, they are able to chew their way upwards through the ice because they can open their mouths extremely wide.
The sea is very noisy because there are many sources of sound and the sound reflects off everything to produce many echoes. In stormy weather the main source of noise is rain and waves breaking in the open ocean and on the shore. The actual source of noise is the vibration of bubbles produced by the breaking waves and rain. Bubbles oscillate by expanding and contracting and have a characteristic frequency for each size. Many of the bubbles are very small and invisible to the naked eye but are still a strong source of sound while they are vibrating. In calm conditions there is often low frequency background noise below 200Hz due to distant storms and shipping, because low frequency sound travels enormous distances. In quiet conditions near the coastline there is background noise due to snapping shrimp and sea urchins. Snapping shrimp have an oversize claw, about one third of their body weight, which they snap shut to make an acoustic shock wave which stuns their prey. It was initially thought to be a mechanical shock due to the sides of the claw colliding. However, high speed photography showed that the rapidly closing claw squirts a small jet of water so fast that it creates a void or cavitation bubble behind it. The collapse of the cavitation bubble generates the shock wave. The individual snaps are very loud and are often heard in small boats anchored in quiet harbours. They sound like short single taps on the outside of the hull. Even though the shrimp are all on the sea bottom, up to 20 or 30 metres away, the sound is easily heard inside the boat. The shrimps are common in harbours and along rocky coastlines worldwide. They are active all the time but they are most active just after dusk. Near rocky coastlines there can be so many snapping shrimp that in underwater sound recordings there is a constant background hiss like fat in a frying pan. For a video of snapping shrimp visit: http://www.youtube.com/watch?v=ONQlTMUYCW4
Research projects
A team at the University of Auckland is studying the sound made by sea urchins. A specimen of Evechinus chloroticus, the most common urchin at the study site, is shown in Figure 1.
Figure 1: A specimen of Evechinus chloroticus. Photograph courtesy of Craig Radford.
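The 'characteristic frequency for each size' of oscillating bubble mentioned above is well described by the Minnaert resonance formula, a standard acoustics result rather than anything from this article. A quick Python sketch for air bubbles near the sea surface:

```python
import math

# Minnaert resonance: f = sqrt(3 * gamma * P / rho) / (2 * pi * a),
# for a bubble of radius a. Standard textbook result; values below are
# for air bubbles in water near the surface.
GAMMA = 1.4      # adiabatic index of air
P = 101325.0     # ambient pressure, Pa
RHO = 1000.0     # water density, kg/m^3

def bubble_frequency(radius_m):
    return math.sqrt(3 * GAMMA * P / RHO) / (2 * math.pi * radius_m)

print(round(bubble_frequency(0.001)))   # a 1mm bubble rings at about 3.3kHz
print(round(bubble_frequency(0.0001)))  # a 0.1mm bubble rings at about 33kHz
```

Smaller bubbles ring at higher frequencies, which is why breaking waves and rain produce such broadband noise.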
Figure 2: Sound pressure as a function of time for a single sea urchin scrape. Reproduced by permission (see Reference).
Urchins have five-fold symmetry and their five white teeth can be seen around the edge of the circular mouth. Urchins feed by scraping algae off the rocks. Underneath the spines, urchins have a hard shell. When the urchin is feeding, each scrape sets the shell ringing briefly like a bell with a fairly well-defined frequency. A typical measured sound pulse is shown in Figure 2. The pulse has about 6 oscillations in about 7ms and corresponds to a frequency of 860Hz. The recording was made by Craig Radford, who was a PhD student at the Leigh Marine Laboratory of the University of Auckland. Craig was able to get single urchins to feed in a tank and recorded many individual scraping sounds.
The frequency of the sound pulse depends on the size of the shell. Bigger shells give a lower frequency. When urchins with a typical variety of sizes are feeding, they give rise to noise in the 800–2400Hz range. Urchins are most active for a few hours after sunset on dark nights, and the noise level in the 800–2400Hz range can increase by up to 30dB, corresponding to a factor of 1000 increase in intensity. The effect is strongest during a new moon when the skies are darkest. A typical result is shown in Figure 3. There is a change of about 20dB between 6 p.m. and 8 p.m. in the range 800–2000Hz, with a smaller difference extending out to about 5kHz. The recording was made near a reef just north of Goat Island in the Marine Reserve on the East Coast, about 100km north of Auckland.
Figure 3: Sound level as a function of frequency at 6 p.m. (sunset) and 8 p.m. Reproduced by permission (see Reference).
The team is also investigating whether urchins play a role in a puzzling biological phenomenon. Several species of marine animal, such as some crabs and lobsters, begin life as eggs, which drift out to sea. The eggs develop into shrimp-like larvae and grow in the open ocean. For the next stage of their life cycle they need to live in crevices and holes on rocky reefs and shores. When the larvae are big enough, they actively swim towards the shore and then to rocky reefs. The puzzling question is: how do creatures a few centimetres long know how to find the shore when they are up to 100km away? We postulate that some have evolved to use urchin noise as a beacon to show where there is a rocky coastline. Proving the hypothesis is difficult. Choice chamber experiments have been performed in which freshly captured larvae are placed in a long chamber with a sound source at one end. The results show that larvae will swim towards recordings of reef noise. Other experimenters have measured the hearing threshold of larvae. The team would now like to show that urchin noise is loud enough at a sufficient distance to be useful as a beacon. In a recent experiment, an underwater loudspeaker was used to transmit a recording of reef noise. The loudspeaker was similar in volume to a transistor radio, and yet underwater it was audible at 8km, even though there was background noise from waves and distant reefs. Reefs with many urchins are much louder than the underwater loudspeaker, so it is likely that a reef would be identifiable as a sound source at distances in the order of 50km offshore.
Modelling sound in water
In very deep water there is a sound speed minimum at about 1km depth, which forms a channel for sound called the SOFAR channel (Sound Fixing and Ranging channel). The speed of sound increases near the surface because of the increase in temperature, and it increases with depth because of the increase in pressure. Sound speed is typically 1470m/s (at the minimum) and up to 1525m/s at the surface. Even though this change with depth is only a few percent, it causes refraction or curvature of the ray paths, which enables sound to travel enormous distances. At low frequencies there is negligible attenuation and, provided there is no land to block the path, sounds have been detected at up to 16000km.
Figure 4: Ray trace in deep water to a range of 105km.
Figure 4 shows the paths of a fan of rays (up to ±14°), which leave a source near the depth of the sound speed minimum at 800m. The ray angles are exaggerated because the scales on the axes are different and the range is much greater than the water depth. The rays cycle up and down. There is high intensity where several rays cross. There are also shadow zones where no rays pass through. Our everyday experience is that sound dies away the further you are from the source. For underwater sound this is not true. A receiver at 500m depth 7km from the source depicted in the diagram would detect a strong sound; as the receiver moves to 9km range, the intensity would increase and then decrease to nothing. The source would not be heard again until the receiver reached 28km range. This rapid variation of sound level with distance in deep water, and the existence of quiet regions, was very puzzling until the sound speed structure was understood.
Figure 5: Modelled waveforms at 105km range for receivers at the depths shown.
Recordings from long-range transmissions in deep water have a characteristic pattern, as illustrated in Figure 5. The diagram was found by computing the ray paths and arrival times for a receiver at each of the depths shown. Rays at large angles relative to the horizontal go deepest and arrive first because they spend more time in the higher speed regions near the surface and bottom. The pulses between 70.5 and 70.6s in Figure 5 correspond to a ray path which leaves the source heading downwards at about 13° to the horizontal, then turns back up, turns again near the surface and travels back down to the bottom before turning back up again to travel upwards past the receivers. The pulses between 70.7 and 70.8s are from rays which leave the source at ±12° and arrive almost simultaneously at 800m depth. The later arrivals are much larger because they are a combination of several ray paths at small angles which travel near the sound speed minimum and arrive last. Receptions such as those shown in the diagram can be used to monitor ocean dynamics because each of the early arrivals has come along its own unique path. Any change in that arrival time must be due to a temperature change along its path. If only one path is affected, the change has occurred in a region sampled only by that path. If several paths have changed, then computer modelling can determine the new temperature structure and the ocean dynamics can be deduced. This process of inverting travel time data to deduce temperature structure is called tomography and is similar to that used in medical imaging. Visible light and other forms of electromagnetic radiation do not propagate far in seawater and cannot be used for the transfer of information. The foregoing examples show that sound propagates freely under water and offers interesting practical ways to study the ocean and its inhabitants. For further information contact: email@example.com
Reference Craig Radford, Andrew Jeffs, Chris Tindle, & John C. Montgomery. (2008). Resonating sea urchin skeletons create coastal choruses. Marine Ecology Progress Series, 362, 37-43.
no excuse for excess nitrogen applications
Nitrogen applications to potato crops can be managed accurately for optimum yields and to reduce nitrogen leaching, a Plant & Food Research scientist told the international potato industry members gathered at the 7th World Potato Congress in Christchurch. The Congress was held in March 2009. Dr Brown says successful nitrogen management is the application of just enough nitrogen to ensure crop yield is not limited. "Too little nitrogen will result in lost yield, and too much has the risk of nitrogen leaching, which is a waste of money and an environmental hazard," he says. Key to getting it right is finding out the level of nitrogen in the soil in which the crop is to be planted. Each season the nitrogen soil levels vary from paddock to paddock, and it is recommended that growers measure levels in each paddock at the start of each season. This involves testing soil from 60cm below the surface. Testing nitrogen levels is not new, although, until recently, it was more common in potato crops to test nitrate levels in the plants themselves. However, the information gleaned is not nearly as useful as that obtained from the soil.
Accurate soil tests give a starting point for calculations of nitrogen movement in and out of the crop during the growing season. Models have been developed for day-to-day changes in nitrogen uptake by a crop and the leaching that may take place, says Dr Brown. Factors that influence uptake and leaching are specific to the paddock and season, so nitrogen calculations should be done for individual crops, he says. Crop management tools like the Web-based potato calculator are flexible enough to take the crop-specific information in any given season and calculate crop needs for defined yields. The demand for accurate nutrient applications, coupled with the advent of tools like the potato calculator, means the days of fertilising by recipe, or making decisions on inaccurate tests, are nearly over, says Dr Brown. Growers already using the potato calculator are saving money on fertilisers and don't risk nitrogen leaching into waterways. For more information contact: BrownH@crop.cri.nz
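As a rough illustration only (not Plant & Food Research's model), a daily nitrogen balance of the kind such a calculator might track could look like the Python sketch below; every name and number here is invented for the example:

```python
def nitrogen_balance(soil_n_kg_ha, applied_n_kg_ha, daily_uptake_kg_ha,
                     daily_leaching_kg_ha, days):
    # Track available nitrogen day by day: what the crop removes plus
    # what drains past the root zone. All figures are hypothetical.
    n = soil_n_kg_ha + applied_n_kg_ha
    for _ in range(days):
        n -= daily_uptake_kg_ha    # crop uptake
        n -= daily_leaching_kg_ha  # leaching loss
        if n < 0:
            return 0.0             # crop becomes nitrogen-limited
    return n

# 120kg/ha measured in the soil, 80kg/ha applied, 1.5kg/ha/day uptake,
# 0.2kg/ha/day leached, over a 100-day season:
print(round(nitrogen_balance(120.0, 80.0, 1.5, 0.2, 100), 1))  # 30.0 kg/ha left over
```

Even a toy balance like this shows the point of soil testing: the measured starting level directly determines how much applied nitrogen is actually needed.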
What is the hearing range for a kiwi? Jeremy Corfield and Fabiana Kubke, Department of Anatomy, The University of Auckland investigate: The kiwi, unlike most other birds – including other ratites such as ostriches and emus – has adapted to a nocturnal niche. The acquisition of this nocturnal niche was accompanied by unique sensory specialisations and behaviours. The visual fields, eye, and visual areas in the brain of the kiwi are much reduced, suggesting they cannot rely on vision for information about their environment. In contrast, kiwi have a highly developed sense of smell and touch, evidenced by the massive enlargement of the brain areas dedicated to the processing of these types of signals. Olfaction and touch mediated by bill structures may be involved in foraging behaviour, and may be primarily used to locate and capture food in the wild. However, both modalities are short-range and provide little if any information about events taking place at a distance. In the absence of available long-distance visual cues, acoustic signalling is, therefore, the most likely way kiwi communicate. They have evolved a complex system of vocal communication, which appears to play an important role in reproductive activities, territoriality and pair bonding. Calls are complex in the frequency domain and show a high degree of individuality; individual kiwi can be told apart just by the structure of their call. Kiwi produce one prevalent vocalisation, termed the ‘whistle call’, which is thought to be a long-range call and is produced by both sexes. Pairs can also occasionally be heard duetting. Calls also differ considerably between sexes. Call notes produced by males are tonal in nature, with clear frequency modulated harmonics compared to the less organised frequency structure of the female call (Figure 1).
One puzzling feature is that both sexes produce calls spanning a broad frequency range (males: 1.5–13kHz; females: 0.1–7kHz), which is unlikely to be covered by their hearing range. To gain a better understanding of the role of each component of the call, we set out to determine the kiwi hearing range by examining the structure of the inner ear.
In contrast to the typical organization of the inner ear of mammals, hair cells in birds are not organized in discrete rows of outer and inner hair cells. Instead, there is a bed of hair cells that lines the entire basilar papilla. Frequencies are mapped along the basilar papilla in such a way that hair cells closer to the distal tip respond better to low frequency sounds, and hair cells closer to the base of the cochlea respond better to high frequency sound.
kiwi birds and hearing
Figure 2: Scanning electron microscopy of the basilar papilla of the North Island Brown Kiwi.
A gradient of hair cell morphology is seen that reflects this frequency mapping along the basilar papilla. Hair cells towards the distal, low frequency end have longer stereovilli, whereas those towards the base have shorter stereovilli. The length and number of the stereovilli determine the stiffness of the bundle and, consequently, the preferred frequency to which each hair cell will respond. This property of the hair cells can therefore be used to predict the hearing range. We measured these parameters in the kiwi basilar papilla and compared them with a large body of similar data on other birds to predict the kiwi frequency hearing range. The kiwi basilar papilla showed all the features of a typical avian papilla. The morphology of individual hair bundles was more similar to that of small songbirds than to that of its closest relative, the emu. The gradient of tallest stereovilli height did not change linearly, as in most other birds, but instead showed a plateau where the height remained constant over the basal half of the papilla. This plateau, or over-representation of the higher frequency range (sometimes called an auditory fovea), is otherwise seen only in barn owls, the auditory specialist par excellence.
Figure 1: Spectrograms of the male and female North Island Brown Kiwi calls.
Figure 3: Scanning electron microscope image of the stereovilli of a kiwi hair cell.
The ears of birds are organised slightly differently from those of mammals. Birds do not have an external pinna, so sound is directed straight into the ear canal. Birds have a single middle ear ossicle (the columella) instead of the three typical of mammals. Within the inner ear, or cochlea, lies the basilar papilla, which contains the sensory hair cells that transform the acoustic signal into a neural code. The hair cells have a series of stereovilli, or ‘hairs’, which vibrate in response to the auditory stimulus.
Figure 4: Predicted hearing range of the North Island Brown Kiwi in relation to the structure of the call.
For the kiwi, our estimates based on hair-bundle morphology predict a smaller than usual total hearing range, with a relative over-representation of the highest frequencies (the grey box in Figure 4). In absolute terms, the upper limit was estimated to be about 5kHz, the lower limit about 500Hz. The majority of the amplitude in the kiwi call falls within the hearing range, but a large part of the call does fall outside it. Interestingly, from the structure of the call we don’t see any advantage to the over-representation of the higher frequencies. One possibility is that this higher frequency specialisation is used for hearing invertebrate calls or
movement-related sounds, similar to that seen in the barn owl. New Zealand has many very large invertebrates, such as the weta, that could easily be located by sound. Further work using behavioural and physiological studies is required to confirm this assertion. For further information contact: firstname.lastname@example.org Acknowledgement: The authors would like to acknowledge the Department of Conservation for access to kiwi specimens. We are also grateful to Christine Koppl, Martin Wild, Stuart Parsons, Len Gillman and the University of Auckland for their contribution and support of this study.
cochlear implants The adult Cochlear Implant Clinic for the Northern region of NZ is currently supporting just over 150 adults who have received a cochlear implant, write Bill Raymond (Audiologist) and Ellen Giles (Cochlear Implant rehabilitationist).
Permanent hearing loss originates from deterioration in the function of the tiny hair cells in the cochlea, situated in the inner ear. Hearing loss can stem from a number of causes, and can be present at birth or develop at any age. In the average loss associated with ageing, these hair cells disappear or deteriorate in their ability to work, usually more at the higher frequencies. Most of the time, communication strategies and hearing aids are enough to combat this hearing deficit. Conventional hearing aids work simply by amplifying sound (detected by the microphone in the aid), and are programmed to the person’s hearing loss so that their output depends on the amount of hearing loss the person has at each sound frequency (or pitch). Hearing aids have come a long way in efficacy and cosmetic appearance since they were first developed. Unfortunately, the usefulness of a hearing aid in facilitating communication depends on the number of working hair cells a person has left; i.e. the amount of ‘residual hearing’. For a person to benefit from the amplified signal there has to be enough residual hearing for the auditory cortex in the brain to understand and make use of the information being sent to it. Occasionally the amount of residual hearing is so small that the person is unable to benefit significantly from conventional hearing aids. That is when a cochlear implant comes into consideration. A cochlear implant is a device which helps severely and profoundly deafened adults to hear and communicate. A cochlear implant (CI) works completely differently to a hearing aid. Both hearing systems have a visible external component (for cochlear implants this is called the processor) and both external components have microphones that capture sound.
A cochlear implant, however, also comprises an additional, surgically implanted internal component. This internal component is placed under the skin behind the ear and has an electrode array that is placed in the cochlea. Instead of sound travelling through the hearing system in the conventional fashion, it is analysed in the processor, transmitted through the skin to the internal implant by means of FM waves, and then changed to electrical signals which
stimulate the auditory nerve directly − producing sound sensations. The auditory nerve is ‘tonotopic’; that is, it is frequency specific, and different sections of the auditory nerve correspond to different frequencies. Stimulating different electrodes therefore gives the CI recipient some pitch perception. The sound of the CI is completely foreign at first; gradual adjustment and improvement is usually seen over months and years. A common complaint of new CI users is that sound can be very ‘tinny’ or high-pitched. This is because their hearing deficit has usually been longer and greater in the higher frequencies, so the auditory nerve and the brain are less used to hearing these higher-pitched sounds. Following this period of adjustment, virtually everyone does better with their CI than they were doing previously. The exact level of achievement varies from one person to the next, however. Some may find the CI good as an aid to lip-reading, while others are able to use the telephone (with practice!) and enjoy music. Background noise and unclear speakers will always present problems for the CI user.
How a cochlear implant works
Reproduced courtesy of Cochlear Corp Aus. 1. The sound processor (A) captures sound and converts it into digital code. 2. The sound processor transmits the digitally coded sound through the coil (B) to the implant (C) just under the skin. 3. The implant converts the digitally coded sound to electrical signals and sends them along the electrode array, which is positioned in the cochlea. 4. The implant’s electrodes stimulate the cochlea’s hearing nerve fibres, which relay the sound signals to the brain to produce hearing sensations.
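The channel-splitting idea behind steps 1–3 can be illustrated in a few lines of Python. This is only a toy sketch of dividing sound energy into frequency channels – the function name, band edges and channel count are our own illustrative choices, not the strategy used in any real processor:

```python
import numpy as np

def band_energies(signal, fs, n_channels=8, fmin=200.0, fmax=8000.0):
    """Toy illustration of a cochlear implant filterbank stage: split the
    spectrum into logarithmically spaced bands and return the energy in
    each band. Real processors use far more sophisticated strategies."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    edges = np.logspace(np.log10(fmin), np.log10(fmax), n_channels + 1)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# A pure 1 kHz tone should put nearly all its energy in a single band:
fs = 16000
t = np.arange(fs) / fs
energies = band_energies(np.sin(2 * np.pi * 1000 * t), fs)
print(energies.argmax())
```

In a real device, each band energy would set the stimulation level of one electrode along the array, with low-frequency bands mapped to electrodes deeper in the cochlea – the tonotopic principle described above.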
The Tinnitus research group at The University of Auckland has a primary focus on finding ways to defeat this insidious ‘internal noise’, as Grant D Searchfield, Head of Audiology, The University of Auckland explains: Tinnitus is the perception of a sound in the absence of a physical sound stimulus. This phantom perception of sound is believed to be experienced in a mild, occasional form by the majority of the population, but for about 1 in 50 people tinnitus has such severe effects that they can no longer lead a normal life. Tinnitus (derived from the Latin word for ringing) is perceived in different ways: it can resemble ‘ringing’, ‘hissing’, ‘buzzing’, ‘crickets’, ‘whistling’ or ‘humming’. One of our greatest challenges is obtaining objective indices of its presence, absence or change. Questionnaires are useful but lack the real-time resolution that researchers require. Because tinnitus is the perception of a sound but not a ‘real’ sound, it is very difficult to capture what the person experiences and how that might change with time. To attempt to quantify tinnitus we use behavioural perception tasks, electrophysiological measures of spontaneous activity, and complex cognitive electrophysiology methods. Some of these methods are described here.
Traditional tinnitus matching Tones of varying frequencies are played using an audiometer and the listener indicates which sound most closely resembles their tinnitus. Although the tinnitus might be considered a broadband sound (such as ‘hissing’ or ‘buzzing’), lacking tonal characteristics, the majority of tinnitus sufferers are able to match their tinnitus to an external tone, usually in a region of hearing loss. Tinnitus, however, does not behave in a manner characteristic of external tones, with frequency matches varying within a session by as much as three octaves. In some instances, tinnitus sufferers appear to confuse tests of pitch with loudness, and pitch matching may provide little information about the tinnitus sensation or its unpleasantness. Consequently, pitch matching is limited in its usefulness.
searching for phantom sounds
Computer-based spectral matching A number of computer-based techniques have been developed to improve pitch matching. Testing by Flora Kay, one of our students, has shown the benefits of a ‘likeness-rating’ method developed by Drs Larry Roberts and Dan Bosnyak of McMaster University in Canada. She compared the traditional pitch match method with the Canadian Tinnitus Tester and an environmental matching method. The environmental method utilised pre-recorded sounds associated with the most common descriptions of tinnitus. The Tinnitus Tester software required participants to match the loudness of 11 ‘tonal’, ‘ringing’ or ‘hissing’ sounds (depending on what was selected initially), each with increasing centre frequency, to the loudness of their tinnitus. Participants were then replayed each of the 11 sounds at the loudness selected and asked to rate each based on its likeness to the tinnitus. The software was then able to generate a spectrum of the frequency components contributing to the tinnitus sensation. Each of the three matching methods was then rated for its similarity to the tinnitus perception on a 10-point scale annotated 0 = ‘not at all’, 5 = ‘somewhat similar’ and 10 = ‘identical’. The Tinnitus Tester sound was most often rated ‘most similar’ to the perceived tinnitus (68%). The psychoacoustic pitch match method was the ‘least similar’.
Figure 2: Clockwise from top, screen shot of tinnitus likeness software, typical audiogram for a person with tinnitus showing high frequency hearing loss, tinnitus likeness spectrum for the same person showing its inverse relationship to the audiogram, bar graph showing percentage of persons rating resemblance of tonal matching with match to environmental sounds and tinnitus spectrum.
Where is tinnitus heard?
Figure 1: Audiology researcher and PhD student Michael Sanders using an audiometer to obtain a pitch match. As tinnitus matches occur to quiet external tones, testing takes place in sound treated rooms.
Understanding how tinnitus is perceived in space should also assist us in determining the auditory-cognitive processes involved in construction of neural activity into sound. Hannah Cameron undertook a Masters Dissertation examining where tinnitus is localised in auditory space. To obtain a location match a comparison tone was presented so that it was perceived as moving in a circle around the participant’s head, starting at the front and moving around the head in a clockwise direction in
the horizontal plane at ear level. Each participant first listened to this tone move around their head to ensure they could hear it adequately for the full circle, and adjustments were made to the intensity if necessary. Next, each participant listened to the tone move around their head and indicated at which stage it was closest to the perceived location of their tinnitus. This was repeated as necessary to obtain a confident location match. Once an exact location match was obtained in the horizontal plane, the sound was moved from that location upwards in the vertical plane, from ear level to the top of the head. Again the participant indicated which spot best represented where their tinnitus was coming from. Identifying the best match in the vertical plane proved slightly more difficult, so more repetitions were necessary. Once a 3D location match was identified, the location was recorded in terms of x, y and z coordinates and also in degrees in the horizontal plane from 0° to 360°, with 0° representing the right ear, 180° the left ear, and 90° directly in front. Degrees in the vertical plane were arranged with 0° being ear level and 90° representing the top of the head. The accuracy of the 3D location match was assessed by comparing the pitch match tone played at the 3D location with binaural and monaural presentation. The 3D location was the best location match for the majority of participants.
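The x, y, z coordinates follow directly from the two matched angles. A minimal Python sketch of the conversion, using the angle convention described above (the function name is ours, and the unit-radius sphere is an assumption made purely for illustration):

```python
import math

def tinnitus_location(azimuth_deg, elevation_deg):
    """Convert the study's angle convention (0 deg = right ear, 90 deg =
    straight ahead, 180 deg = left ear in the horizontal plane; 0 deg =
    ear level, 90 deg = top of the head in the vertical plane) to x, y, z
    coordinates on a unit sphere centred on the head."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(az) * math.cos(el)   # +x towards the right ear
    y = math.sin(az) * math.cos(el)   # +y straight ahead
    z = math.sin(el)                  # +z towards the top of the head
    return x, y, z

# A match directly in front of the listener at ear level:
print(tinnitus_location(90, 0))
```

With this convention, a match at (90°, 0°) lies directly in front at ear level, and any match with 90° elevation sits at the top of the head regardless of azimuth.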
Figure 4: An example ESA spectrum showing development of a 200Hz peak following ear injury.
Cognitive processes of tinnitus perception Although changes in spontaneous nerve activity may be a trigger for tinnitus, it is widely believed that the higher auditory pathways, including the auditory cortex, must be involved in tinnitus perception to account for the severity of its effects and its resemblance to ‘normal’ sounds. One simple model of tinnitus sees it as a hearing analogue of phantom limb pain. In phantom limb pain, the absence of a limb leads to a cortical reorganisation in which the brain areas representing the absent limb functionally shrink and are taken over by neighbouring sensory areas in the brain. This over-representation becomes sensed as pain. In the auditory system, damage to the ear creates ‘holes’ in the activity reaching the cortex, leading to reorganisation. The auditory cortex maps sound in a tonotopic map – damage to the inner ear leads to functional changes in this map, including reduced and expanded representation of different pitches of sound. The increased representation also appears linked to increased synchronisation. This synchronisation may ultimately be the objective index of tinnitus.
Figure 3: Three-dimensional representation of tinnitus localisation superimposed on a representation of a head. Black (sounds perceived as being in the right ear or side of head); dark grey (left side); light grey (centre).
Spontaneous nerve activity
Tinnitus matching techniques appear useful but they still rely on a subjective response. Tinnitus normally occurs following ear injury, so it has been hypothesised that changes in the spontaneous activity of the 8th (vestibulocochlear) cranial nerve should indicate tinnitus. Measuring this activity would normally be invasive, requiring an electrode in individual nerve fibres. However, an electrode placed on the round window membrane of the cochlea, which is accessible through the eardrum, can remotely pick up the grouped spontaneous activity of many nerve fibres. This Ensemble Spontaneous Activity (ESA) has primarily been used to test physiological models of tinnitus and is yet to be used routinely in the clinic. Changes in the ESA (shown as a spectrum of electrical activity) following hearing loss include an overall reduction in spontaneous activity and the development of increased electrical activity at 200Hz. The 200Hz activity may be an index of tinnitus arising from synchronised neural discharges. Further work using array processing (enabling a collection of electrodes to optimally detect and isolate the signals coming from a particular point in space) and Blind Source Separation (allowing signals to be extracted from signal mixtures) might allow electrodes to be placed in the ear canal rather than on the cochlea itself.
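A spectral peak like the 200Hz ESA component is straightforward to find once a recording is digitised. A short Python sketch using a synthetic ‘ESA-like’ signal – white noise plus a weak 200Hz component, fabricated purely for illustration:

```python
import numpy as np

# Synthesise a noisy recording containing a weak 200 Hz component.
rng = np.random.default_rng(0)
fs, seconds = 2000, 4
t = np.arange(fs * seconds) / fs
signal = 0.2 * np.sin(2 * np.pi * 200 * t) + rng.normal(0, 0.05, t.size)

# The magnitude spectrum reveals the buried peak.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
print(freqs[spectrum.argmax()])  # frequency of the largest spectral peak
```

A real ESA analysis would of course work on electrode recordings rather than synthetic data, but the principle – locate a narrowband peak standing above a broadband floor – is the same.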
Figure 5: Diagram illustrating the concept of tonotopicity in the auditory system. Pictures on the left represent the normal auditory system; those on the right represent what occurs following injury to the cochlea (such as from noise damage). The cochlea codes frequency in a place manner that can be likened to the orderly organisation of a piano keyboard. High frequencies are coded at the base, low frequencies at the apex of the cochlea. This functional organisation is maintained up to and including the auditory cortex. With injury (right) the cochlea becomes less sensitive and more broadly tuned in the region of damage. The tonotopic representation in the central auditory pathways reacts to the change in input, reorganising so that frequencies neighbouring the area of reduced input ‘take over’. It is hypothesised that the resulting expansion of maps and synchronisation leads to tinnitus. This is illustrated here as a keyboard whose missing keys are replaced by a single larger key, representing a single note instead of several. We have explored a more complex model of tinnitus which incorporates reorganisation processes by using brain mapping of auditory evoked potentials. Michael Persky and Robert Mocharla, two visiting medical
students from the New York Medical School, participated in research in which cortical potentials were measured in response to sounds differing in pitch and location. The amplitude of these recordings varied between persons with mild and severe tinnitus, suggesting that tinnitus affects processes underlying Auditory Scene Analysis (ASA). ASA is how the brain separates sounds of interest from background noise. When ASA rules are applied to tinnitus they seem to fail. In the normal process of ASA we match a sound to an object (we hear a rhythmical clatter as a train on its tracks); tinnitus has no such external object to couple to. It would seem that persons with tinnitus allocate ASA resources to trying to understand the tinnitus perception, leaving fewer cognitive resources available for normal sound perception. Tests of auditory attention appear to corroborate this idea.
Figure 7: Source analysis of AEPs (Auditory Evoked Potentials) in response to stimuli from different locations. The top recording is from a person without tinnitus; the bottom recording is from a person with tinnitus. The bright areas indicate the modelled source of the largest amplitudes of electrical activity recorded from the scalp. In these cases activity was stronger in the person without tinnitus.
Figure 6: Dr Grant Searchfield and PhD student Kim Wise prepare a participant for recording of cortical potentials. The participant wears a cap to which 64 recording electrodes are attached.
Figure 1: Dark green – natural stable; Light green – natural unstable; Orange – artificial unstable; Yellow – discovered at GSI, unstable; White – not yet confirmed.
continued from Chemistry page 44
Tinnitus is real. But tinnitus is a phantom sound, for although it can be heard it is not physically present as a sound in the environment. Searching for this phantom sound has been likened by some to the efforts of ghost hunters, trying in vain to obtain objective proof of apparitions. However, we believe that advances in electrophysiological and behavioural methods offer the promise of objectifying tinnitus. In doing so we should be able to solve the riddles of its perception and provide means to eliminate it. For further information contact: email@example.com
For further information and the Periodic Table below visit: http://www.gsi.de/portrait/heavyelemets_e.html
Babbling brooks What causes the sounds of running water? Dr Alan Walton of The Cavendish Laboratory, Cambridge (and a former Visiting Erskine Fellow at the University of Canterbury) explains: We are all aware of the murmur of the brook, the roar of the waterfall and the crashing of waves on the beach. When pressed to explain the origin of these distinctive sounds most physicists suggest that they originate from the turbulence of the water. Louis Bloomfield (2001), for example, states in his widely-used textbook How Things Work that sounds ‘are created when water swirls about erratically, a behaviour known as turbulent flow.’ This is wrong! As far back as 1894, the British engineer Osborne Reynolds – who pioneered the systematic study of turbulence in fluids – demonstrated that water can flow turbulently but silently through pipes. Be observant and you will soon discover highly-turbulent stretches of river that are totally silent (Figure 1).
Minnaert oscillations In 1933 the Dutch astrophysicist Marcel Minnaert offered a quite different explanation (one that harked back to Reynolds’s original experimental observations, of which he was seemingly unaware). He pointed out that an air-filled bubble in a liquid should undergo harmonic volume oscillations; reduce the radius of the bubble from its ambient value and the gas pressure will increase, thereby providing an outwardly-directed force; increase the radius and the air pressure in the bubble will be less than the hydrostatic pressure in the liquid, thereby providing an inwardly-directed force. In other words, the bubble should undergo breathing mode (monopolar) oscillations with the bubble surface acting like a (spherical) loudspeaker cone. In simple terms, the gas in the bubble is the ‘spring’, with the oscillating mass provided by the inertia of the surrounding liquid. Using only first-year undergraduate level physics, Minnaert (1933) proved that for low-amplitude oscillations the linear frequency f of the emitted sound is given by (don’t worry, we will soon simplify the equation):
Figure 1: Turbulent water flowing between two boulders (the flow direction is down the page). Despite the obvious turbulence, this section of river is silent. In the course of his investigations, Reynolds introduced a local constriction into a pipe thereby lowering the hydrostatic pressure by virtue of the Bernoulli Effect. On increasing the rate of flow of water through the pipe he found that ‘a distinct sharp hiss is heard – exactly resembling that of the kettle or the hiss of water through a tap.’ He also observed – crucially – that the production of the hiss always coincided with the appearance of ‘minute bubbles’ in the vicinity of the neck in the tube. Reynolds never studied sound emission from unrestrained streams of water. Over the next forty or so years physicists assumed that the sounds produced by freely running water originate at the water’s surface. Sir Richard Paget (unpublished) suggested that the free water surface might partially close over to create Helmholtz oscillators (though I have yet to see any substantial evidence of this; see Figure 1), while Sir Lawrence Bragg (1920) suggested that the murmuring noise of brooks is produced when entrapped air bubbles rise to the surface and burst. In fact, bursting bubbles produce very little in the way of sound; try listening to the sound coming from the head of froth on a fizzy drink.
f = (1/(2πr)) √(3γp₀/ρ)   (1)
where r is the mean (ambient) bubble radius, γ is the familiar heat-capacity ratio Cp/Cv of the enclosed gas (Minnaert assumed the oscillations are adiabatic), p₀ is the (mean) hydrostatic pressure in the liquid and ρ is the liquid density (the liquid is assumed to be incompressible). For the case of an air-filled bubble in water at normal atmospheric pressure (100 kPa), equation (1) reduces to the easily-remembered form: f r = 3 Hz m. (2) We can therefore expect bubbles with radii in the millimetre range to emit sound with frequencies in the kilohertz range. That is the only equation we will need!
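Equation (2) is easy to check numerically. A minimal Python sketch (the function name is ours):

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=1.0e5, rho=1000.0):
    """Minnaert frequency f = (1/(2*pi*r)) * sqrt(3*gamma*p0/rho) for a
    gas bubble of radius r in a liquid of density rho at pressure p0.
    Defaults are for an air bubble in water at atmospheric pressure."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# A bubble of radius 1 mm rings at roughly 3.3 kHz, consistent with the
# rule of thumb f*r = 3 Hz m:
print(minnaert_frequency(1.0e-3))
```

Note that halving the radius doubles the frequency, which is why millimetre-sized bubbles dominate the audible kilohertz range of a babbling brook.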
Laboratory studies Minnaert confirmed the validity of Equation (1) by using a capillary tube to blow bubbles in a pail of water. With one ear pressed against the pail, he introduced a bubble while he listened to a tuning fork with his other ear (he claimed he could estimate frequencies to a fifth of a tone). The volume of each individual bubble (and hence its radius) was found by collecting it under an inverted funnel connected to a capillary tube (both of which were initially filled with water) and measuring the length of the bubble trapped in the capillary tube of known cross-sectional area. Nowadays, there is a simpler way to check out Minnaert’s equation (though I still hanker after using tuning forks). A gas-filled hypodermic syringe and a selection of needles will produce a range of different-diameter bubbles. Instead of using tuning forks to estimate frequencies, a hydrophone connected – via a suitable preamplifier – to an oscilloscope will immediately give the sound’s period. Thanks to the mass market generated by ‘dolphin listeners’ and other hobbyists, inexpensive piezoelectric hydrophones are readily available. I have used the
a range of bubbles of different radii in the river. It’s over to you to deduce the radii of the bubbles responsible for this trace! Frequently there is such a wide range of bubble sizes present that the resulting sound may be loosely characterised as ‘white noise’.
model H1a Aquarian Audio hydrophone (useful frequency range 1 Hz to 100 kHz) sold directly by the manufacturers, Aquarian Audio Products (www.AquarianAudio.com) and currently priced at US$129. The low frequency cutoff of the hydrophone will, in practice, be determined by the input impedance of the preamp; a preamp with an input impedance of 300 kohm will typically be 3 dB down at 20 Hz when used with such a hydrophone, while a preamp with 1 Mohm input impedance will be 3 dB down at 6 Hz. Battery-powered versions – indispensable for field work – are difficult to source; FEL Communications (www.felmicamps.co.uk) can supply such a preamp, and I have been using their 1 Mohm input impedance version. A computer-based oscilloscope (such as the Picoscope 2203) connected to a laptop is particularly useful for outdoor measurements. Students provided with this equipment usually have little problem in recording the sound emitted from a single air bubble introduced into the tank. Figure 2 shows a typical oscilloscope trace, recording the time dependence of the amplified hydrophone output voltage from a bubble of radius 1.4 mm. Apart from confirming that the data are in agreement with Equation (2), students should be able to demonstrate that the amplitude of the oscillations decays exponentially (university-level students will be able to use the data to deduce the quality factor Q of the oscillating bubble). Minnaert oscillations provide yet another example of the ubiquitous unforced damped oscillator.
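The quality factor mentioned above follows directly from the decay time of the ringdown: for an exponentially damped oscillation, Q = πf₀τ. A minimal Python sketch with illustrative values (not measured data; function names are ours):

```python
import math

def quality_factor(f0_hz, tau_s):
    """For a damped oscillation A(t) = exp(-t/tau) * sin(2*pi*f0*t),
    the quality factor is Q = pi * f0 * tau."""
    return math.pi * f0_hz * tau_s

def amplitude_ratio_per_cycle(q):
    """Fraction of the amplitude remaining after one oscillation period."""
    return math.exp(-math.pi / q)

# Illustrative values: a 3 kHz bubble with a 2 ms decay time constant.
q = quality_factor(3000.0, 2.0e-3)
print(q, amplitude_ratio_per_cycle(q))
```

A Q of this order means the bubble rings for only tens of cycles before the sound dies away, consistent with the short-lived pulses seen on the oscilloscope trace.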
Figure 3: The amplified hydrophone output from a hydrophone located in a babbling brook. In this case the signal was recorded on an analogue professional cassette recorder (Sony, Model WM-D6C) and viewed on an oscilloscope back in the lab. The time intervals on the abscissa are 1 ms apart.
Figure 2: The amplified hydrophone output showing the nature of the sound emitted by an air-filled bubble of radius 1.4mm undergoing volume (‘breathing’) oscillations in water at normal atmospheric pressure. The time intervals on the abscissa are 1 ms apart.
Because of the large mismatch in acoustic impedance between air and water, less than one percent of the sound energy produced by an underwater bubble will escape from the surface of a river. In many cases – particularly in wide open countryside – the frequency spectrum of the sound as measured in the air is essentially the same as the frequency spectrum measured underwater. The easiest way to confirm this is to simultaneously feed the outputs from the hydrophone and a microphone (located above the river) into a two-channel oscilloscope; with the preamp gains suitably adjusted, both signals will be nearly identical. However, this is not always the case. Walk along a narrow, deep gorge containing a bubbly river and you may well be overwhelmed by the sound, so much so that it can be impossible to carry on a conversation. The sound often has a booming, bass-heavy quality. An obvious explanation is that certain frequencies present in the white noise produced by the river excite standing waves in the gorge; it is as if we are standing inside an organ pipe. I have heard similar booming sounds when water tumbles into pools in caves.
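The ‘less than one percent’ figure follows from the standard normal-incidence intensity transmission formula, T = 4Z₁Z₂/(Z₁ + Z₂)². A quick Python check using textbook values for the characteristic acoustic impedances (the function name is ours):

```python
def intensity_transmission(z1, z2):
    """Fraction of sound intensity transmitted at normal incidence across
    a boundary between media of characteristic acoustic impedance z1, z2."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

# Approximate characteristic impedances (rayl = Pa*s/m):
Z_WATER = 1.48e6   # rho*c for water: ~1000 kg/m^3 * ~1480 m/s
Z_AIR = 415.0      # rho*c for air:   ~1.2 kg/m^3  * ~343 m/s

t = intensity_transmission(Z_WATER, Z_AIR)
print(f"{100 * t:.2f}% of the sound energy crosses the surface")
```

The result is about a tenth of one percent, comfortably inside the ‘less than one percent’ quoted above; note that the formula is symmetric, so the same tiny fraction crosses in either direction.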
Despite his lab demonstration that newly-created bubbles in liquids are effective sound radiators, Minnaert never actually proved that the bubbles in a babbling brook behave in the same way. It is certainly true that a turbulent stretch of river (such as that in Figure 1) will be completely silent provided that there are no bubbles present, and that the river will babble if bubbles are being entrapped in the water. Yet these facts – however suggestive – are not sufficient to prove that Minnaert oscillations are the source of the sound. To clinch it we must take our equipment outdoors and place the hydrophone in a babbling brook. When we do this we discover (Figure 3) that the underwater sound is indeed made up of a sequence of Minnaert-like oscillations of different frequencies, attributable to
To take stock, hydrophone studies have established beyond reasonable doubt that virtually all of the sound emitted by running water comes from Minnaert oscillations, though the spectrum of the sound heard in the air may be modified by resonances set up in the surrounding environment. But how do the bubbles get entrained in the water? Perhaps the most common mechanism is also the most familiar one. Pour water vertically from a jug into a partially-filled glass and you will see a stream of bubbles surrounding the incoming water stream. You will also hear the sounds produced by those bubbles. (Pour the water carefully down the inside wall of an inclined glass and no bubbles will be entrained; it will be silent.) This process is most commonly seen at work in streams flowing down a hillside when water flows over a stone
to land in a pool a short distance below. An example of this behaviour is shown in Figure 4; the swarm of newly-created bubbles is clearly visible below the surface of the water. (Some foaming is also evident at the surface but, as already remarked, this foam produces very little sound on bursting.)
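The frequencies of these bubble sounds are tied to bubble size by Minnaert's formula, f = (1/2πa)√(3γp/ρ), where a is the bubble radius, γ the ratio of specific heats of the gas, p the ambient pressure and ρ the density of water. A quick calculation with standard values:

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, pressure_pa=101_325.0, density=1000.0):
    """Natural frequency (Hz) of a gas bubble of the given radius in water,
    from Minnaert's formula f = (1/(2*pi*a)) * sqrt(3*gamma*p/rho)."""
    return math.sqrt(3 * gamma * pressure_pa / density) / (2 * math.pi * radius_m)

# A bubble of 1 mm radius rings at roughly 3.3 kHz -- squarely in the
# audible band, which is why a brook babbles rather than rumbles.
print(round(minnaert_frequency(1e-3)))
```

Since frequency scales inversely with radius, the spread of bubble sizes entrained by a stream maps directly onto the spread of pitches we hear.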
Figure 4: Water entering the pool pulls in air with it, producing the bubble field visible below the water surface.

In fast-flowing rivers containing submerged obstacles such as boulders, the water may be forced upwards, leading to the creation of what look like waves (Figure 5). These ‘waves’ do not move along the river; they merely fold over, entrapping air as they do so (only in that respect are they like waves breaking at sea). The river shown in Figure 5 is flowing from right to left, so the ‘waves’ are rolling over in the opposite sense to what we would expect from observing waves breaking on a beach. If there are no submerged obstacles such rivers may be bubble-free and, as a consequence, totally silent.

Figure 5: When a fast-flowing river passes over submerged obstacles such as boulders, the water may be forced upwards producing ‘waves’ that roll over and entrap air. These ‘waves’ do not travel. Here the water is flowing from right to left.

Another significant mechanism of bubble entrainment occurs when liquid drops land on water. We are all familiar with the irritating ‘plop’ produced by a dripping tap as a drop lands on the water below. To observe this process, fill a dropper (of the type used for putting drops in your eye) with water and allow a drop to land in a transparent tank of water, varying the height of fall of the drop until the ‘plopping’ is heard. Look through the side of the tank and you will see that a small bubble is formed a few millimetres below the surface every time a ‘plop’ is heard. In fact the ‘plop’ is only produced when a drop falls through a restricted range of heights (the range being determined by the drop size). When no ‘plop’ is heard no bubble is produced. A hydrophone placed in the tank (or a microphone in the air) will confirm that the ‘plops’ arise from Minnaert oscillations. To study the process in greater detail I suggest you photograph it with a camcorder and then play it back frame by frame. Include the oscilloscope in the field of view (I put it directly behind the tank) so you can correlate the sound with the image. Figure 6 shows a frame from such a video. Perform the experiment and you should discover that the bubble is formed by ‘necking off’ the bottom of the crater created by the incoming drop (because the process happens so fast it may occur between video frames and hence be invisible) and that a Minnaert oscillation occurs at the instant the bubble is created.
Figure 6: A still image from a camcorder. The camcorder is viewing an oscilloscope screen through a rectangular tank of water (a transparent storage box from a hardware store); the oscilloscope records the amplified signal produced by the hydrophone (the black cylinder). A few frames earlier a liquid drop landed on the water surface, creating a crater which necked off to form the bubble. The crater is just visible above the bubble. The water surface is approximately level with the top of the oscilloscope.

A freely-moving column of liquid may break up into a sequence of drops (the phenomenon is known as the Rayleigh instability). This process is frequently observed in high waterfalls; by the time the water reaches the pool at the base of the fall it is largely composed of individual drops (Figure 7), many of which will produce ‘plops’ in the pool. A similar process may occur when water is launched horizontally as, for example, in white water rapids. Water drops can also be produced when strong winds whip up the surface of rivers or seas (the resulting ‘plops’ can account for a significant fraction of the noise in the ocean). Needless to say, rain drops (or hailstones) may produce their own ‘plops’.

The initial impulse that sets a bubble oscillating is probably given as the bubble is being formed. In the case of a drop-induced bubble it is almost certainly provided during the necking-off process, when the bubble changes from a pear shape to a spherical shape. Once the initial oscillation has died away a bubble will become inactive unless stimulated afresh. This, of course, applies to all bubbles no matter how they are entrained. Further impulses may be provided in the natural world as bubbles are carried downstream through regions of differing hydrostatic pressure (readily generated by the Bernoulli Effect). Doubtless there are other entrainment mechanisms at work; all I have done here is list a few that have caught my eye and ear. The important point is that no matter
how the bubbles are formed under water, they will be given an impulsive kick in the process, causing them to oscillate and to radiate sound. The observation that roaring rivers are also bubbly liquids is not a recent one – it’s in the Bible (Psalms, Chapter 46, verse 3).

Concluding comment
In my view, much current physics teaching fails to encourage students to see physics in action in the natural world. The study described here allows students to get out of the classroom, to make observations of a natural process about which comparatively little is known, and to analyse their data in terms of familiar concepts (sound frequencies and bubble radii). Even without all of the equipment described here, students can make substantial progress by using a microphone plugged into a portable cassette recorder and – on returning to the lab – connecting the recorder output to the Y-input of an oscilloscope while they simultaneously listen to the cassette on headphones. Without any laboratory equipment they can still learn a lot by simply using “ears that hear and eyes that see” as they frequent the world of brooks and waterfalls and waves. After all, that is what the psalmist did.

For further information contact: firstname.lastname@example.org
Figure 7: Water falling from a great height (20m here) may break up into discrete drops, each one of which will independently strike the pool at the bottom of the fall.
References
Bloomfield, L.A. (2001). How Things Work (2nd ed., p. 153). New York: Wiley.
Bragg, W.H. (1920). The World of Sound (pp. 69-74, 129-130). London: Bell; reprinted (1968) New York: Dover Publications. The unpublished work of Richard Paget is described on p. 71.
Minnaert, M. (1933). On musical air-bubbles and the sounds of running water. Phil. Mag., 15, 235-248.
Reynolds, O. (1894). Experiments showing the boiling of water in an open tube at ordinary temperatures. Paper read before Sect. A, Brit. Assoc. at Oxford. Reprinted in Reynolds, O., Papers on Mechanical and Physical Subjects (1901), Cambridge UP, vol. II, pp. 578-587.
turmeric and inflammatory bowel disease
Researchers at Nutrigenomics NZ have found that curcumin, the major yellow constituent of turmeric spice, reduces inflammation in model systems of Crohn’s Disease. This discovery may assist in the development of diet-based treatments for people suffering from the equivalent genetic form of the disease. The research also demonstrated that rutin, a component of buckwheat seeds, citrus fruits and tea, which is known to relieve symptoms in some Crohn’s Disease sufferers, did not have any effect in models of the same genetic disorder. The results of the study are published in the British Journal of Nutrition.
“Crohn’s Disease, a form of inflammatory bowel disease, can be aggravated or relieved by the sufferer’s diet,” says Christine Butts of Plant & Food Research. “However, due to the number of genes involved, different people with different disease genotypes can be affected by different foods, so there isn’t a ‘one size fits all’ solution. Only by systematically linking particular components to effects on the specific genotype can we get a true understanding of the disease and how to treat it.”
“This finding means that some people with Crohn’s Disease may benefit from eating turmeric, but this is entirely dependent on their genetic makeup. Others may not get any benefit, or may even have a severe reaction. However, we are one step closer to understanding this disease and how best to control it with diet.”
“In diseases with complex genetics, such as Crohn’s Disease, understanding which genetic variants are affected by which food compounds is important in knowing what to avoid in the diet,” says Kieran Elborough, acting General Manager, Food Innovation at Plant & Food Research. “Using this knowledge, we can develop dietary supplements or foods with added benefits which can help disease sufferers based on their personal genotype.”
Nutrigenomics New Zealand, a collaboration between Plant & Food Research, AgResearch and The University of Auckland, is funded by the Foundation for Research Science and Technology. The primary aim is to develop gene-specific foods targeted at preventing, improving and curing diseases.
tuning into volcanic vibrations
Seismic and acoustic vibrations produced by volcanoes can provide vital information, as Dr Gill Jolly, Volcanology Section Manager, GNS Science, Wairakei Research Centre explains:

Introduction
Most people are aware that volcanic activity is intimately associated with earthquakes on or near the volcanic edifice (see Figure 1). Earthquakes in a volcanic environment can be related to subterranean magma movement or they can be caused by regional tectonic activity. Less well known is that erupting volcanoes also produce sound waves that can provide clues about explosions.
Figure 1: Ash clouds at Ruapehu in 1996. This eruptive period was the most significant activity at Ruapehu in the last 50 years, although there have been several smaller events that have affected the ski fields. Image courtesy of GNS Science.
It is fairly obvious that volcanoes make noises – any movie with a volcanic eruption will include an ominous rumbling soundtrack, and explosions will be marked by loud detonations. However, much of the sound energy from a volcano occurs in a frequency range that is not audible to the human ear; this is termed ‘infrasound’. The audible range for sound vibrations normally lies between frequencies of 20 and 20,000Hz; infrasound comprises vibrations at frequencies below 20Hz. Both seismic and acoustic signals are vibrations: either in the earth or in the air. In this article, the causes of these vibrations on volcanoes, how they are measured, and some uses for these data will be discussed.
Volcanic seismicity
Earthquakes are happening in New Zealand all the time. About 15,000 earthquakes are recorded every year across the country, of which about 250 are felt by the population. Some of these earthquakes are associated with volcanoes, particularly in the central North Island. Recording and interpreting seismic signals under a volcano is akin to listening to the heartbeat of a volcano – magma can force cracks to open in the surrounding rock, and its movement can cause resonance which generates vibrations. These vibrations can be recorded by a network of seismometers around the volcano. The seismicity can be used in a number of ways: to learn about the internal structure of the volcano, to discover how magma moves inside a volcano, and sometimes to forecast future eruptions. Volcano-seismic signals can also be used to measure the size of explosive eruptions and to model how an explosion occurs.
Sound waves from volcanoes
Sound is generated by a change in ambient air pressure. In a volcano, these changes are generated by the expulsion of highly pressurised gases from a vent, and the result is a pressure wave radiating away from the volcano at the speed of sound (ca. 330m/s). Commonly, the changes in pressure generated by a volcanic eruption are ca. 10–100Pa, i.e. changes of 0.01 to 0.1% relative to atmospheric pressure (ca. 10⁵Pa). Although this seems small, it can be measured by sensitive microphones at considerable distances from a volcano. The most spectacular manifestation of volcanic sound is from a large explosion. Indeed, one of the loudest noises ever recorded was the eruption of Krakatau volcano in Indonesia in 1883. This explosion was heard as far away as Perth, WA, some 3000km away, and on Rodriguez Island in the Indian Ocean, over 4500km from Krakatau. One reason that sound can travel so far is that it can be reflected repeatedly between the Earth’s surface and the upper atmosphere, which can act a bit like a lens focusing the sound waves. Much volcanic infrasound is less spectacular and can be attributed to periodic venting or ‘puffing’ as gas is released from the vent in a less explosive manner. Even though less impressive, this type of activity can tell us how gases are released from volcanoes in a passive manner.
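To relate these pressure amplitudes to everyday loudness, one can express them as sound pressure levels relative to the standard 20µPa reference. This is a rough sketch only; it treats the quoted figures as effective sound pressures:

```python
import math

REFERENCE_PRESSURE_PA = 20e-6  # standard reference pressure for SPL in air

def sound_pressure_level_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa: SPL = 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / REFERENCE_PRESSURE_PA)

# Volcanic pressure changes of 10-100 Pa correspond to roughly 114-134 dB,
# comparable to a rock concert or a nearby jet engine, even though they are
# only 0.01-0.1% of atmospheric pressure.
for p in (10, 100):
    print(p, round(sound_pressure_level_db(p)))
```

The logarithmic scale makes the point: a pressure fluctuation that is a tiny fraction of atmospheric pressure can still be a very loud sound.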
Measuring vibrations from volcanoes
Volcanoes send vibrations through the earth as seismic energy, through the atmosphere as air pressure changes, and through oceans as waterborne sound (hydroacoustic) waves. Seismologists have a range of instruments to detect these different types of vibrations. The simplest tools are high-sensitivity microphones that can detect low frequencies (also known as microbarographs). These can be installed either on the flanks of a volcano or at some distance away, and measure small changes in air pressure over time. One problem with microbarographs is that they can appear to be very noisy, since they detect unwanted pressure changes such as gusty wind. One way to filter out the unwanted noise is to install several microbarographs together and add their respective signals, so that ‘real’ signals can be raised above the noise level. A cheaper method of increasing the signal level above the noise – useful when there is no space to install a large number of microbarographs – is to fix a long pipe to the microbarograph. If evenly spaced holes are pierced in the side of the pipe, wind noise (which would enter the different holes at random times) will be reduced.
To measure ground vibrations due to earthquakes, seismometers are installed around a volcano. In New Zealand, the national network of seismographs is built and maintained by GeoNet, a project run by GNS Science on behalf of the Earthquake Commission (www.geonet.org.nz). These sensors detect any displacement of the Earth’s crust due to regional tectonic stresses or due to magma movement. The data are then transmitted to data centres in Wellington and Taupo, and computer programs automatically try to calculate locations of
any earthquakes detected. If an earthquake is large enough to be felt by the public, a pager message is sent to a trained seismologist who reviews the location and informs the public via the GeoNet website and through email, fax and pager messages.
Sound waves can also be transmitted through the water, and to measure these waves a different instrument is required: a hydrophone. These are essentially underwater microphones and can detect sound generated by submarine volcanoes or earthquakes many thousands of kilometres away. Hydrophone data can inform us about submarine eruptions that are not visible at the surface. These can be important since a high proportion of gas in the water column can lower the density of the sea, resulting in dangerous conditions for ships. Tsunami can also be generated by underwater volcanoes.
All these techniques are well developed around the world for volcano monitoring. However, a volcano observatory will endeavour to record many other types of data (not just seismic and acoustic), so that small perturbations in a range of geochemical and geophysical parameters can be noted early in a period of volcanic unrest. Multidisciplinary volcano monitoring also includes techniques such as using GPS to measure how the ground is moving over days to months, collecting and analysing gas and water samples from vents and fumaroles, and measuring atmospheric emissions of ash and gas. There is also a global seismic and infrasound network that has been installed to detect nuclear testing in support of the Comprehensive Nuclear Test Ban Treaty (www.ctbto.org). If any country initiates nuclear bomb testing, the explosions would be picked up by both seismometers and infrasound sensors around the world, and subsequent diplomatic pressure can be imposed through the United Nations.
The key to understanding the various geophysical signals emanating from volcanoes is to integrate the different types of data. One volcano in New Zealand where this is increasingly being achieved is Mt Ruapehu in the central North Island. The remainder of this article will focus on interpreting and using seismic and acoustic signals on volcanoes.
Monitoring Mt Ruapehu
The GeoNet project has been responsible for monitoring the activity at Mt Ruapehu since 2001 (see Figure 2). Over the past eight years, the monitoring systems on Ruapehu have been gradually expanded in close collaboration with the Department of Conservation (DOC). DOC manages Tongariro National Park, and public safety is a major concern given the proximity of large numbers of people to the volcanoes. Monitoring Ruapehu (and Tongariro and Ngauruhoe) is especially important since there are ski fields on the flanks of Ruapehu and many trampers use a range of tracks in the National Park. On a busy winter’s day there can be several thousand members of the public within 10km of the active vent on Ruapehu.
What do seismic and acoustic signals tell us?
So how do we interpret the data that are being collected by these networks of instruments? First, we can simply observe the signals on the seismometers and microbarographs in real-time. The seismicity gives us an indication of how restless a volcano is. We can record, count, analyse and locate the different types of earthquakes related to the volcano and determine whether activity is escalating or moving closer to the surface. We can also look in detail at individual earthquakes and model what is causing the vibration: is it magma cracking open the rocks, or perhaps resonance in the pipe that magma is moving along (a bit like the resonance of an organ pipe), or are the signals simply a result of tectonic movements? Rock cracking tends to result in earthquakes that have high-frequency vibrations (> ca. 10Hz) and very sharp onsets. Conversely, the movement of magma or geothermal fluids generates lower frequencies (1–5Hz). Interpretation needs to be done by a skilled scientist, however, since other factors can modify the seismic signal. For example, a distant earthquake will tend to have less high-frequency content and may have the appearance of a low-frequency volcanic earthquake.
Acoustic data can help us in several ways. If we have detected an explosion in the seismic record, we can look at the sound waves generated by the explosion and learn how the explosion was generated by analysing the duration and shape of the signal. Since acoustic waves travel over long distances with little modification, we can also interrogate the microphone data to detect eruptions from volcanoes in remote locations such as uninhabited islands in the south Pacific. If we have data from several recording sites, we can determine the location of the eruption and provide warnings to mariners or the aviation industry.
Figure 2: Microphones are installed at other sites (visit www.geonet.org.nz).

Ruapehu is classed as a frequently active volcano, with small eruptions on average once every 2–3 years and moderate eruptions every 10 years or so. Larger eruptions, such as in 1995, have occurred about every 50 years. One of the key hazards on Ruapehu is lahars; even a moderate eruption can eject Crater Lake water onto the flanks of the volcano and generate rivers of rock, mud, snow, ice and water flowing down through the ski fields. Clearly, we need to understand what generates explosive eruptions on Ruapehu and whether we can provide warnings to skiers on the mountain.
The 2007 eruption of Mt Ruapehu
When Ruapehu erupted in September 2007, the explosion signal was captured by both seismometers and microphones (see Figure 3). The eruption was relatively small compared to other historic eruptions at Ruapehu, such as in 1995–6 or 1975, but the summit area was still blanketed by ash and rock debris ejected from Crater Lake, and lahars were initiated to the north-west, east and south-east (see Figure 4). Fortunately the eruption occurred in the evening, and large numbers of the public were not on the slopes. However, one climber was seriously injured on the summit of the volcano, and a groomer driver narrowly escaped a lahar travelling through the ski field.
Figure 3: Seismic and acoustic signals from the 25 September 2007 eruption on Mt Ruapehu. The top panel shows the unfiltered seismic signal. The middle panel shows only the very low frequency component of the seismicity – this shows that the actual ‘explosive’ part of the signal was at the very start of the eruption. The third panel shows the microphone signal – this arrives a little later than the seismic signal because the speed of the seismic waves in the ground is faster than the speed of sound in air. Image courtesy of GNS Science.
Subsequent analysis of seismic and acoustic signals from the eruption has shown that most of the four-minute seismic signal was not related to the explosion – the acoustic signal generated by the sudden expulsion of gas from Crater Lake lasted less than 60 seconds. The rest of the seismic signal was probably generated by the movement of lahars away from the summit area, and by resonance of the volcanic conduit after the ejection of material.

Figure 4: Ruapehu deposits from the eruption on 25 September 2007. The northern part of the volcano summit (to the right of the photo) was covered in ash and boulders. Lahars were generated to the east, south-east and north-west. The clear lahar in the foreground flowed down the Whangaehu Glacier to the east. Image courtesy of GNS Science.

The Ruapehu Eruption Detection System
We now know that explosive eruptions at Ruapehu generate infrasound as well as seismic energy, but can we use this knowledge to warn mountain users that an eruption has occurred? This is the aim of the Eruption Detection System (EDS) on Ruapehu, which is managed by DOC.
The EDS uses a combination of seismic and acoustic signals recorded at sites around the mountain to detect an explosion. It was first implemented as the Lahar Warning System after the 1969 and 1975 eruptions but was substantially upgraded after the 1995–6 eruptions. Volcanic earthquakes occur frequently on Ruapehu, but not all of them produce explosive eruptions. Volcanic earthquakes can be detected by a computer algorithm on the basis of their frequency content (volcanic earthquakes tend to have predominantly low frequencies, between 1 and 10Hz). To determine whether a volcanic earthquake has resulted in an explosion at the surface, acoustic data from an array of microphones around the mountain are then used by the algorithm to decide whether a pressure wave has been generated in the air. If both of these conditions are met, the algorithm sends a message to a speaker system on the ski field to warn the general public and the staff (see Figure 5). Once an eruption has occurred, there is a time delay of only 90 seconds before a lahar may pass through part of the ski field, so the warning has to be triggered within this period. Time is therefore critical, which is why the seismic and acoustic sensors are all located on the flanks of the volcano.

Figure 5: The concept of the Ruapehu Eruption Detection System. Seismic and acoustic signals are recorded by the network on the mountain, and then are fed into a computer system at Whakapapa which determines whether a volcanic explosion has occurred. A message is then sent to a speaker system which broadcasts a warning to the ski field. Image courtesy of GNS Science.
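The two-condition trigger described above can be sketched in a few lines of code. This is an illustrative reconstruction, not GeoNet's actual implementation; the function names, thresholds and data formats are all assumptions:

```python
import numpy as np

def dominant_frequency_hz(samples, sample_rate_hz):
    """Frequency (Hz) of the largest peak in the amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[spectrum[1:].argmax() + 1]  # skip the DC bin

def should_warn(seismic, acoustic, sample_rate_hz=100.0,
                low_hz=1.0, high_hz=10.0, pressure_threshold_pa=10.0):
    """Warn only if BOTH conditions hold: the seismic signal looks volcanic
    (dominant frequency in the 1-10 Hz band) AND the microphones recorded
    a pressure wave above threshold, indicating an explosion at the surface."""
    looks_volcanic = low_hz <= dominant_frequency_hz(seismic, sample_rate_hz) <= high_hz
    pressure_wave = max(abs(p) for p in acoustic) >= pressure_threshold_pa
    return bool(looks_volcanic and pressure_wave)
```

Requiring both conditions suppresses false alarms: a purely tectonic earthquake produces no surface pressure wave, and wind noise on the microphones is not accompanied by a volcanic seismic signature.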
Future directions
As our understanding of how Ruapehu works increases, improvements in the early detection of eruptions through seismic and acoustic data will continue. One type of seismic signal that might prove useful in discriminating between explosions and volcanic earthquakes that do not produce eruptions is the Very Long Period (VLP) earthquake, which has periods of 2–30 seconds. Current research is trying to characterise these VLPs and understand when and how they occur. Ultimately, they may also become part of a future EDS algorithm.
Conclusions
Seismic and acoustic vibrations produced by volcanoes can provide information about (a) what is happening under the volcano and (b) what happens when material is ejected from the vent area. If we can listen to these signals and understand what they are telling us, then we can provide a timely warning that an explosion has occurred, and we may even be able to forecast future eruptive activity.
For further information contact: G.Jolly@gns.cri.nz
sounds in space
“In space no one can hear you scream.” That was the tagline for the classic 1979 science fiction/horror/thriller movie ALIEN. The movie was very successful and many people now recognise and quote the tagline… but is it true? Karen Pollard, Department of Physics and Astronomy, University of Canterbury, explains:

Figure 1: Poster for the movie ALIEN – in space no one can hear you scream.

If Ripley were to lean out of her mining spaceship, take off her helmet and scream… would you be able to hear her? The answer is… no. For the same reason that she would be dead if she took off her helmet… there is no air in space.

What is sound?
Unlike light or other forms of electromagnetic radiation, sound is a mechanical wave that requires a medium through which to travel. That medium can be any form of matter: gas, solid, liquid or plasma. Matter in the medium is periodically displaced by a sound wave and thus oscillates. This oscillation, or disturbance, travels through the medium, transporting energy from one location to another. Because sound waves require a medium through which to propagate, sound waves cannot travel through a vacuum. Space is close to a vacuum, since there are generally very few atoms making up the interstellar medium. In air, sound is a series of variations in pressure that propagate as longitudinal or compression waves (see Figure 2); in solids, sound waves can be transmitted as longitudinal or transverse waves (or both). Sound waves are generally created by the vibration of some object. As the sound wave propagates, there are areas of compression and areas of rarefaction (Figure 2b). The waves are detected when they cause a detector (such as your eardrum) to vibrate, thus giving the sense of ‘hearing’. Sound has the standard characteristics of any wave form (Figure 2c).

Figure 2: Graphic representations of a sound wave in air: (a) air at equilibrium, in the absence of a sound wave; (b) compressions and rarefactions that constitute a sound wave; (c) transverse representation of the wave, showing amplitude (A) and wavelength (λ). (Ref: Encyclopaedia Britannica).
Figure 3: The propagation of sound waves inside a star depends on the internal structure.
Figure 4: The interior structure of the Sun (and the stars) can be deduced from studying the inward (red) and outward (blue) motions of the sound waves in a star, similar to how seismologists use earthquakes to infer details of the Earth’s interior. Image from the GONG project.
What are the properties of sound waves?
Amplitude and wavelength (or frequency) are two important properties of a wave. For sound waves in air, amplitude relates to ‘loudness’ and frequency corresponds to ‘pitch’, whilst the speed of sound in a medium depends on the properties of the medium and is largely independent of the amplitude and frequency of the wave. The important properties of the medium are elasticity, density and temperature. The sound speed is highest in solids, lower in liquids, and lowest in gases. In gases the important properties of the medium are temperature, molecular structure and molecular weight: the sound speed is higher at higher temperatures and for lower molecular weights. Table 1 shows the speed of sound for various gases at 0°C. The matter in the Universe is approximately 75% hydrogen and 25% helium, so the properties of these two gases are extremely important for common astronomical objects such as stars.

What are the sounds of the stars?
The Sun, or a star, can easily have sound waves moving through its ionised gas (plasma), in a somewhat similar way to seismic waves propagating through and around the Earth. In the case of the Earth, seismologists are able to use the initiation and propagation of earthquakes to deduce the detailed interior structure of our planet. In the science of helioseismology (for the Sun) or asteroseismology (for the stars), we apply the science of seismology to the stars, decoding their internal structure by studying the multiple tiny vibrations or ‘starquakes’ on a star’s surface (Figure 3). Many stars, including our own Sun, ‘ring’ like bells and show patterns on their surfaces like those on a drum that has been struck by a drumstick. In stars this ‘ringing’ can be observed by measuring the Doppler shift of electromagnetic radiation from the stellar surface (see Figure 4). Stars do not resonate with just one frequency or tone, but can have many modes excited simultaneously. The result is that each star has a unique musical ‘voice’, wholly dependent on the internal properties of that particular star. Using asteroseismology, we can learn about the stellar interior by making a detailed comparison between observations and complex theoretical models.
Table 1. The speed of sound for various gases at 0°C.
Gas              Speed (m/s)
Air              331
Carbon dioxide   259
Oxygen           316
Helium           965
Hydrogen         1290
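The figures in Table 1 can be reproduced (to within a few percent) from the ideal-gas expression v = √(γRT/M), where γ is the heat-capacity ratio, R the gas constant, T the absolute temperature and M the molar mass. A quick check, using standard textbook values for γ and M:

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def speed_of_sound(gamma, molar_mass_kg, temperature_k=273.15):
    """Speed of sound in an ideal gas: v = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temperature_k / molar_mass_kg)

# Heat-capacity ratio and molar mass (kg/mol) for each gas at 0 degrees C.
gases = {
    "air":      (1.40, 28.97e-3),
    "helium":   (1.67, 4.00e-3),
    "hydrogen": (1.41, 2.02e-3),
}
for name, (gamma, m) in gases.items():
    # Results land within a few percent of the Table 1 values.
    print(name, round(speed_of_sound(gamma, m)))
```

The formula also makes the astronomical point explicit: light gases like hydrogen and helium, which dominate stellar interiors, carry sound several times faster than air does.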
So, are there any sounds in space?
Yes and no. Sound waves can easily travel through astronomical objects like the stars, the Sun, or even the Earth. Think of earthquakes or seismic waves propagating through the Earth, sonar waves travelling through water, or the perhaps more familiar sound waves travelling through the air. Sound waves can propagate through the gases that make up most astronomical objects in the Universe. We can see, or otherwise detect, the effects of sound waves propagating through astronomical objects like the Sun, the stars, or gas in space. We do this by measuring the motions of the gas particles due to the compressions or shock waves propagating through the gases. However, we cannot listen directly to sound waves travelling through space.
Sound in space
What do astronomers mean when they talk about sounds in space? Can you listen to the Sun, a star, a pulsar, a black hole, a galaxy or a universe? Many astronomical objects (such as the Sun, stars, pulsars or black holes) vibrate or pulsate in various ways. Some of these vibrations are random events (such as a supernova explosion), whilst other oscillations are extremely periodic (such as the rotational frequency of a pulsar, or the pulsations of a Cepheid star). We can convert, or scale, the frequencies we detect in astronomical objects to frequencies that are readily detectable by the human ear. By scaling these frequencies to the audible range of humans, we can then listen to these ‘sounds of space’ even though the vibrations may not be sound waves that we are familiar with. The relationship between the frequencies (the ‘pitch’ or the ‘notes’) is preserved, so in a way, it is like scaling a piece of music up or down an octave (or 57 octaves in the case of a black hole!).
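The scaling itself is nothing more than repeated doubling: shifting a frequency up n octaves multiplies it by 2ⁿ. A small sketch (the 10-million-year oscillation period used below is an illustrative figure, of the order attributed to the Perseus cluster black hole):

```python
import math

AUDIBLE_MIN_HZ, AUDIBLE_MAX_HZ = 20.0, 20_000.0

def octaves_to_audible(frequency_hz, target_hz=440.0):
    """Whole octaves needed to shift a frequency to near the target pitch (A440)."""
    return round(math.log2(target_hz / frequency_hz))

def scale_to_audible(frequency_hz):
    """Scale a frequency into the audible band by repeated doubling."""
    n = octaves_to_audible(frequency_hz)
    return frequency_hz * 2 ** n, n

# An oscillation with a period of roughly 10 million years, as an
# illustrative stand-in for a black hole's 'note':
period_s = 10e6 * 365.25 * 24 * 3600
audible_hz, n_octaves = scale_to_audible(1.0 / period_s)
print(n_octaves)  # about 57 octaves, matching the figure quoted in the text
```

Because every octave is an exact doubling, the ratios between the detected frequencies (and hence the 'melody') survive the shift unchanged.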
Is it (scientifically) useful to listen to the sounds of the stars? Yes! Actually it is scientifically useful because the interiors of stars are among the most difficult regions of the universe to observe. Stars are opaque to visible light and there are few other scientific techniques available to explore inside stars. (And we certainly don’t want to send Ripley and her mining ship out for this information!) Asteroseismological studies allow us to determine whether the composition of an object is changing (since the slow fusing of hydrogen to helium in its core will change the sound speed, thus changing the ‘pitch’ or ‘tone’ of the star’s voice). We are also able to detect the effect of binary companions on the stars through their gravitational influence (similar to the Moon’s effect of raising oceanic tides on the Earth). The ultimate goal of asteroseismology is to improve the evolutionary models of stars, so that we understand the details of how a star is born, lives and dies (and how its structure changes throughout its life). Stars are the very constituents of star clusters and galaxies, as well as the crucibles for all the elements in the Universe (besides hydrogen and helium), so improving our understanding of their internal structure and the evolutionary processes improves our understanding of the Universe as a whole.
What is ‘The Music of the Stars’? Recent technological advances have meant that astronomers now have instruments with the precision necessary to detect the tiny surface waves of stars, thus allowing us to probe deep within the opaque stellar interiors. One of these precise astronomical instruments is the HERCULES spectrograph, which is used on the 1-metre telescope at the University of Canterbury’s Mt John University Observatory in Tekapo. Using this instrument, we have just started a research project to study the sound waves in stars that are similar to the Sun. This research, supported by the Marsden Fund, allows New Zealand astronomers not just to listen to, but to fundamentally understand and to mathematically describe, these stellar sound waves – the ‘Music of the Stars’. For further information contact: email@example.com
Fisheries acoustics is a technique that can provide a wide spatial coverage of organisms in water as Gavin Macaulay, Richard O’Driscoll, and Stéphane Gauthier, National Institute of Water and Atmospheric Research Ltd, Wellington explain: Introduction Fisheries acoustics, or echo sounding, is a technique that uses underwater sound to study the behaviour, distribution, and abundance of aquatic organisms. Work in this field tends to focus on fish, but the technique is also applied to much smaller and larger organisms such as plankton and marine mammals. Underwater acoustics is based on transmitting sound waves into water and measuring what is reflected back. While simple in concept, it is a field with many small technical details – some of these will be explained in this article. We then present two examples of the use of fisheries acoustics in New Zealand.
What is an echo sounder? An echo sounder comprises a transducer, transmitter, and receiver. The transducer converts electrical energy into acoustic vibrations in water, much like a loudspeaker, while the transmitter produces the electrical energy that drives the transducer. The transducer is made up of many ceramic piezoelectric elements which expand and contract when an alternating voltage is applied. Most fisheries acoustics systems use the transmitting transducer to also receive the acoustic echoes – in that respect it works like a microphone. The receiver takes the electrical signals from the transducer, converts them to digital form, then passes the digital data onto a computer which displays and stores the data. Successive pulses of sound from the transducer (‘pings’) and the resulting echoes, are used to make up an echogram display – for example, see Figure 1 where the echoes from each ping are plotted vertically (different strength echoes have different colours), and successive pings are plotted from left to right. Echo sounders have varying physical forms that depend on their usage. For example, many transducers are mounted on the hulls of vessels, while some are towed behind a vessel to produce a more stable platform, or lowered to be closer to the fish of interest. Some can be mounted on the seafloor or on fixed platforms to collect data from a single location over extended periods. Echo sounder equipment needs careful calibration to yield repeatable results; performance varies with temperature, depth, and age of the equipment. We typically use small spheres made of tungsten carbide as targets of known reflectivity. In some cases this involves suspending the sphere some 20m below the hulls of large vessels – a tricky operation in the best of conditions. An important parameter of an echo sounder is its operating frequency – fisheries applications tend to use frequencies in the range of 12 to 200kHz. 
The audible spectrum in humans ranges from 20Hz to 20kHz. Lower frequencies travel further in water, but have a lower resolution and require larger and heavier transducers,
and more powerful transmitters. As the frequency increases, the sound is absorbed by water more rapidly, so the higher frequencies cannot see as far, but the equipment is lighter, more portable, and cheaper. Absorption of sound by seawater is caused by a transfer of thermal energy that is associated with chemical relaxation processes involving salts in the water, especially MgSO4. Accordingly, absorption in freshwater is much less than in seawater. In seawater, it is possible to see fish at distances of 2000m with an 18kHz signal, while at 200kHz the range reduces to about 300m. In freshwater a 200kHz signal can see over 500m.
How does sound reflect off fish? An important parameter that is needed to interpret acoustic data is target strength (TS). This is a measure of how well an organism, such as a fish, reflects sound. The TS is usually expressed as the logarithm of the ratio of the reflected sound pressure over the incident sound pressure: TS = 20*log10(pr/pi), where both pressures are adjusted to those that would be present at a range of 1m from the target. The reflected sound pressure is always less than the incident sound pressure, so the logarithm is always of a number less than 1, and hence TS values are always negative. Confusingly, TS values that are more negative indicate smaller reflections: for example, a large strongly reflecting fish can have a TS of -30dB, while a small krill can have a TS of -80dB. Sound is reflected from changes in acoustic impedance, which is the product of the sound speed through the material and its density. For example, a bubble of air in water reflects strongly because the change in impedance between water (1500m/s * 1000kg/m3) and air (340m/s * 1.2kg/m3) is very high. Many fish have a swim bladder which contains gas used to regulate their buoyancy – this generates a strong acoustic echo. Some fish don’t have a swim bladder and acoustic echoes from them result from much smaller impedance changes, such as between water and fat (with an impedance given by 1000m/s * 750kg/m3). These differences in reflectivity between species can be used to distinguish them, but can also make some species very hard to see – something that has very low contrast with water, such as a jellyfish, is difficult to detect. The TS varies in a complicated way with the size, shape, behaviour, tilt, and roll of the fish. Much work has been done in New Zealand and around the world to obtain useful estimates of fish TS under various conditions.
The most common way to express the TS of a species of fish is as a relationship between length and TS, where the TS is a mean weighted by the tilt angle distribution of fish in their natural habitat and state. The acoustic frequency also affects the target strength, and this can be used to help identify the species. As a general guide, lower acoustic frequencies reflect well from large organisms and also from small fish with gas-filled swim bladders, where the sound causes the swim bladder to resonate. Conversely, small organisms and those without a gas-filled swim bladder reflect poorly at low frequencies, but well at high frequencies – for example, orange roughy (which does not have a gas-filled swim bladder) reflects poorly at 38kHz, but much better at 120kHz.
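The TS formula and the impedance contrasts quoted above can be checked numerically. A minimal sketch (the 1/1000 pressure ratio is a hypothetical example, not a value from the article):

```python
import math

def target_strength(p_reflected, p_incident):
    """TS = 20*log10(pr/pi), both pressures referred to a range of 1m
    from the target. pr < pi always, so TS (in dB) is always negative."""
    return 20 * math.log10(p_reflected / p_incident)

def impedance(sound_speed_ms, density_kgm3):
    """Acoustic impedance = sound speed x density."""
    return sound_speed_ms * density_kgm3

# Impedances from the article's numbers:
water = impedance(1500, 1000)  # 1.5e6
air = impedance(340, 1.2)      # 408: a huge contrast with water, so
                               # bubbles and swim bladders echo strongly
fat = impedance(1000, 750)     # 7.5e5: much closer to water, weak echo

# An echo 1/1000th of the incident pressure amplitude:
print(target_strength(1, 1000))  # -60.0 dB
```

The -30dB (strong fish) and -80dB (krill) values in the text correspond to pressure ratios of roughly 1/32 and 1/10000 respectively.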
Figure 1: A mid-water school of spawning hoki at 180–300m depth in Nicholson Canyon, Cook Strait (stratum 3 in Figure 2).
What fish is that? While acoustics can indicate the presence of an organism, it doesn’t directly measure the species. However, knowing the species that caused an echo makes the acoustic data considerably more useful and a lot of effort goes into deducing the species. This is known as target identification. Techniques to achieve this include the use of nets to catch schools of fish and lowering of cameras into fish schools. With this additional information and other aspects such as the shape of the school, its behaviour, the school and water depth, location, time of day, time of year, etc, one can build up experience in deducing the species of schools without actually having to catch them. Any differences in echoes at different frequencies also help in identifying the species.
Turning echoes into useful information A single ping from an echo sounder provides information on the scattering objects in view of the transducer at that time. Typically, the transducer is moved over an area, pinging repeatedly to build up a picture of the distribution and density of scatterers. This is useful information – we can survey large areas of the ocean quite quickly and at ranges far beyond what can be achieved with other techniques. From these data we can map out the distribution of organisms, how it changes over time, and match it to other measurements. However, to obtain estimates of biomass requires more effort – the TS of the organisms is required, and a statistical survey design and analysis procedure needs to be followed.
Estimating hoki abundance
Acoustic surveys of spawning hoki have been conducted regularly since 1984, and are an important input into the stock assessment used to set catch limits. Hoki is New Zealand’s largest fishery with a total allowable catch (TAC) in 2008 of 90,000 tonnes. This is much lower than catches in the past (which peaked at 269,000 tonnes in 1998), and reflects the need to allow hoki stocks to rebuild (Ministry of Fisheries 2008). In winter, hoki undergo long-distance migrations from their feeding areas east and south of New Zealand, to spawning grounds off the west coast of the South Island and in Cook Strait. On the spawning grounds, hoki typically form large midwater schools (e.g. Figure 1). Their occurrence in single-species aggregations clear of the seabed allows for accurate estimation of hoki abundance using acoustics. To find fish using acoustics you can simply steam around the ocean until you see schools on the echo sounder. However, to estimate fish abundance it is necessary to follow a survey design that allows fish densities to be estimated and scaled up over an area. The first step in designing a survey is defining the area where fish are likely to occur. Spawning hoki in Cook
Strait congregate in submarine canyons and depressions where the seabed is deeper than 200m (Figure 2). The area is subdivided into a number of smaller areas called strata based on bathymetry and fish density in previous surveys. In Cook Strait there are six strata (Figure 2). Acoustic data are collected along a number of parallel lines called transects in each stratum. The transect positions are randomly assigned to each stratum prior to the survey. More transects are assigned to strata where densities of fish are expected to be higher. This design is known as a stratified random survey and is relatively simple to analyse (see Coombs & Cordue 1995). We first obtain estimates of average hoki density (number of fish per m2) along each transect by summing up (integrating) the amount of sound from hoki schools and dividing by the hoki target strength. Statistically, these transect density values provide the random samples used to estimate abundance. Transect values (samples) from each stratum are then used to estimate the mean and variance of hoki density in that stratum. The stratum density is scaled up by the stratum area to obtain the number of fish in that stratum. Estimates from the six strata are summed to obtain an estimate of the total number of hoki in Cook Strait. Similarly, variance estimates are combined to provide a measure of the uncertainty of the abundance estimate. A good survey will have relatively low uncertainty and so provide a precise estimate of abundance. Hoki have a long spawning season, from July to September. To ensure that we count most of the fish that enter Cook Strait, we carry out 6–10 separate surveys or snapshots spread over the spawning season.

Figure 2: Map showing stratum boundaries and example transect locations (black lines) for an acoustic survey of Cook Strait. Colours show the underlying bathymetry. Transects run across submarine canyons at seabed depths greater than 200m.
Abundance estimates from each snapshot are averaged to obtain an estimate of mean hoki abundance in Cook Strait for a given year. Over a number of years we obtain a time-series of abundance estimates for hoki that tells us whether the stock is increasing or declining. Recent surveys have shown encouraging signs that hoki stocks have begun to increase from a low point in about 2005.
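The stratified estimation procedure described above (transect densities → stratum mean and variance → scale by stratum area → sum over strata) can be sketched as follows; the transect densities and stratum areas are hypothetical numbers for illustration only:

```python
import statistics

def stratum_estimate(transect_densities, area_m2):
    """Abundance in one stratum: mean fish density over the transects
    (the random samples) scaled by the stratum area, plus the variance
    of that estimate for the uncertainty calculation."""
    n = len(transect_densities)
    mean = statistics.mean(transect_densities)
    var_of_mean = statistics.variance(transect_densities) / n
    return mean * area_m2, var_of_mean * area_m2 ** 2

# Hypothetical data: two strata, densities in fish per m^2
strata = [
    ([0.02, 0.05, 0.03], 4e8),  # stratum 1: 3 transects, 400 km^2
    ([0.10, 0.08], 2e8),        # stratum 2: 2 transects, 200 km^2
]

total = sum(stratum_estimate(d, a)[0] for d, a in strata)
total_var = sum(stratum_estimate(d, a)[1] for d, a in strata)
print(f"abundance = {total:.3g} fish, CV = {total_var ** 0.5 / total:.0%}")
```

A real survey sums six strata rather than two, and averages several snapshots over the season, but the estimator has the same shape.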
Mesopelagic resources on the Chatham Rise
The Chatham Rise is a highly productive area of New Zealand’s Exclusive Economic Zone and supports several important commercial fisheries, such as orange roughy, hoki, and oreo. Much of the food that these fish consume is in the form of smaller fish – for example, pearlside and lanternfish, which range in length from 20 to 250mm. At dusk many of these small fish rise from depths of about 200–500m to the surface to feed, and then descend back down at dawn. These are classified as mesopelagic fish. Acoustic surveys of mesopelagic fish can provide information on the abundance of food resources for the larger fish on the Chatham Rise, and help predict future levels of the commercial fish stocks. In May 2008, these fish were surveyed using a new five-frequency echo sounder system (18, 38, 70, 120, and 200kHz). The five frequencies proved to be very useful in identifying the species in the various mesopelagic layers. In one example, the echo sounder saw a very dense layer at 240m that was visible on all five frequencies (Figure 3). A trawl deployed over that layer produced a clean catch of pearlside (Maurolicus australis, Figure 5). The signal was particularly strong at 18kHz due to the swim bladder of the pearlside resonating. We saw layers like this over much of the western end of the north Chatham Rise, but very little further to the east. Trawls on the lighter layers underneath the dense layer mostly caught lanternfish (myctophids) and salps (gelatinous colonial organisms). In contrast, we also identified layers that were only visible on the higher acoustic frequencies (Figure 4); these were mostly krill (euphausiids – small shrimp-like invertebrates, Figure 6). It was interesting to note that any one frequency can be completely blind to organisms of a particular size or physical structure.
Most of the other marks and layers that we saw during the survey contained various species of lanternfish, krill, the occasional larger fish, and many salps and jellyfish. In the future, our aim is to develop metrics that will enable us to objectively discriminate and identify species or groups of organisms solely on the basis of their acoustic properties. Results from this latest mesopelagic survey are promising, but much more needs to be learned. There are numerous other useful applications for underwater acoustics in fisheries science, particularly when multiple frequencies are involved. These include studies of the behaviour of aquatic organisms at multiple scales (individuals to populations), as well as habitat and seabed classification. There is a wealth of information to discover in acoustics; we just need to listen!
Figure 5: A catch of pearlside (Maurolicus australis).
Figure 6: A clean catch of krill.
Figure 3 (left): The five-frequency echogram of a layer of pearlside. Vertical lines are every 1 n. mile, and horizontal lines every 100m. Figure 4 (right): The five-frequency echogram of krill. Red colours indicate high amplitude echoes, through to blue colours which indicate low amplitude echoes. The green line indicates the path of the trawl. The colour banding in the 200kHz echogram is due to increasing noise levels at the longer ranges.
Fisheries acoustics is a technique that can provide a wide spatial coverage of organisms in water. It suffers from a poor ability to identify the actual organisms that cause the acoustic echoes, but with careful use of other information it can yield useful estimates of the distribution, abundance, and behaviour of fish over large scales. For further information contact: firstname.lastname@example.org
References Coombs, R.F.; Cordue, P.L. (1995). Evolution of a stock assessment tool: acoustic surveys of spawning hoki (Macruronus novaezelandiae) off the west coast of South Island, New Zealand, 1985-91. New Zealand Journal of Marine and Freshwater Research, 29, 175-194. Ministry of Fisheries (2008). Report from the Fisheries Assessment Plenary, May 2008: stock assessments and yield estimates. Ministry of Fisheries, Wellington, New Zealand. 990 p.
sound in marine geological studies The ‘echolocation’ of underwater objects using sound has led to many applications in marine geology as Scott Nodder, Philip Barnes and Geoffroy Lamarche, National Institute of Water & Atmospheric Research (NIWA) Ltd, Wellington explain: The ‘echolocation’ of underwater objects using sound has led to many applications in marine geology from simple echo soundings to determine water depth (or bathymetry) to more sophisticated techniques where the geological structure of the seafloor can be determined several kilometres down into the ocean crust. These applications stem from the recognition that sound waves directed through seawater towards the seafloor (or any other target) are reflected back towards the source, and that recorded differences in the time for the signals to return to the source enable parameters, such as the water depth and depth of sedimentary layers beneath the seafloor, to be estimated.
Echo sounding systems In echo sounding systems, an acoustic pulse, or ping, is generated electronically and the transducer, which is usually mounted in the hull of the vessel, then listens for and records the returning reflection, or echo, of the pulse (Figure 1). Knowing the speed of sound in water (approximately 1500m/s), the depth of water overlying the seafloor can be determined from the simple equation: Distance = speed x time/2, where the time is divided by two because this is the time taken for the sound pulse to travel to the seafloor and back to the transducer (known as two-way travel time, measured in milliseconds or seconds).
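The two-way travel time equation can be written directly in code. A minimal sketch (the function name and example times are mine, not from the article):

```python
def depth_from_echo(two_way_time_s, sound_speed=1500.0):
    """Depth from an echo: the pulse travels down and back, so halve
    the two-way travel time before multiplying by the speed of sound
    (m/s). Works for water depth or, with a different sound speed,
    for sediment thickness in sub-bottom profiles."""
    return sound_speed * two_way_time_s / 2

print(depth_from_echo(0.4))          # 0.4 s two-way travel -> 300.0 m of water
print(depth_from_echo(0.050, 1600))  # 50 ms at 1600 m/s -> 40.0 m of sediment
```

The second call reproduces the 50ms-to-40m conversion quoted for sub-bottom seismic profiles later in the article.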
Single beam echo sounders typically operate at frequencies of 12 to 30kHz, and produce a continuous measurement of water depth directly beneath the vessel as it transits on the sea surface. Such echo sounder technology superseded previous simple measurement techniques where marked, weighted rope lines were deployed manually over the side of the ship to provide an estimate of water depth. The discussion of sound propagation in water is simplified here: the speed of sound in water depends upon the temperature (T), salinity (S) and ambient pressure (P, or depth). Therefore, corrections must be made in shallow seas where freshwater influences may be prevalent, and in the deep ocean where vertical changes in T, S and P can markedly affect the speed of sound through the water column. The development of early echo sounders arose from attempts after the 1912 Titanic disaster to acoustically measure the distance to floating targets, such as icebergs. During World War I, the need to detect submarines prompted further research into the use of underwater sound, leading to the development of more sophisticated underwater transducers (or hydrophones) and arrays.
Side-scan sonar Many of the subsequent developments in marine sonar systems also stemmed from military research and applications during and after World War II. These included side-scan sonar, which produces images of objects on the seafloor, not unlike an aerial photograph, using the emission of fan-shaped pulses that are directed sideways and down towards the seafloor from a transducer that is typically towed behind the vessel at a set height above the seafloor (left panel, Figure 2). Sound frequencies used in such systems are generally 100–500kHz, with higher frequencies producing better resolution of seafloor features, such as boulders and shipwrecks (right panel, Figure 2), but less coverage (or swath) of the seafloor.

Figure 1: Diagram of how an echo sounder works (the original diagram labels the sea surface, a volcano on the ocean floor, and its acoustic shadow). Ref: Discovery of Sound: http://www.dosits.org. All Rights Reserved © 2002–2008, University of Rhode Island, Office of Marine Programs, U.S.A.
Figure 2: Cartoon of the side-scan sonar surveying technique (left panel) with an example of a side-scan sonar image of a shipwreck (right). Note that white areas are illuminated strongly by the side-scan sound waves, whereas the black area is shadowed behind the relief of the shipwreck. Ref: Side-scan sonar schematic from Puna Ridge research cruise website: http://www.punaridge.org, Portions Copyright © 1998 Woods Hole Oceanographic Institution, U.S.A.; Shipwreck image from Environscan, Inc, U.S.A, http://www.enviroscan.com.
Side-scan sonar techniques are used to identify and locate ‘targets’ on the seafloor that might prove problematic for engineering applications in the marine environment, such as mobile substrates (e.g. sand waves, ripples), scour marks and channels caused by seabed erosional processes, gas- and fluid-expulsion features (e.g. pockmarks), large boulders and exposures of bedrock. Side-scan sonar has also been applied to seabed searches for shipwrecks and downed aircraft, and for investigating the engineering integrity of marine structures, such as pipelines, telecommunication and electrical cables, oil platform anchoring points and wharf piles.

Multi-beam echo sounders took side-scan sonar developments and transferred them to echo sounders. Such multi-beam systems were developed extensively in the 1970s, using arrays of multiple, hull-mounted transducers (up to 100) to produce a fan-shaped ‘swath’ of sound that was used to generate unprecedented, detailed bathymetric maps of the seafloor (Figure 3). These transducer arrays harnessed the rapid technological advances in computer power that enabled the numerous, and almost simultaneously returning, echoes from the seafloor to be recorded. Unlike conventional single beam echo sounders, 100% coverage of the seafloor is possible with multi-beam systems, with swath-widths of about three times the water depth on each side of the vessel (i.e. in 1km of water, a total swath-width on both sides of the vessel of 6km is possible). Typical deep water multi-beam systems operate at frequencies of 12 or 30kHz, with shallow water systems (<100m) using higher frequencies in the order of 300kHz. These systems are calibrated by undertaking routine deployments of sound velocity probes that measure changes in the speed of sound in water down through the water column. Further information on the type of sediment or rock present at the seafloor can be ascertained after extensive processing of the back-scatter intensity signal also contained in the multi-beam data (top panel, Figure 3).

The continual development of multi-beam technology has been paralleled by an ever-increasing capacity in computer processing. Multi-beam swath mapping techniques are used in many applications including seafloor habitat characterisation for biodiversity studies and the delineation of active faults, volcanoes and submarine landslides and debris flows in hazard analyses.

Figure 3: Example of seafloor multi-beam mapping imagery. The upper panel shows grey-scale backscatter intensity image of a gravel-sand wave field on the seafloor in Cook Strait, southern North Island. The lower panel shows colour- and relief-shaded bathymetry of the same area (blue is relatively deep, red is shallowest). The sediment waves shown here are 3-4m high. Images courtesy of Geoffroy Lamarche and Anne-Laure Verdier, NIWA.

A further extension of the single beam echo sounder technology was the development of sub-bottom profilers that enabled researchers to image the sedimentary layers and geological structures beneath the seafloor. Based on the principle of seismic reflection, where the path of a sound wave is deflected by an object or by the boundary between two media, sub-bottom profilers generate a stronger sound signal at lower frequencies than echo sounders (e.g. 3-7kHz). Therefore, some of the sound reflects directly off the seafloor while the remainder penetrates beneath the seafloor and is reflected off buried layers and features (Figure 4). In a similar way to echo sounders, the time it takes for the reflected sound pulse to return to the surface is a measure of a specific layer’s depth and geometry. These ‘time profiles’ of the seafloor allow marine geologists to determine the location of active faults, submarine landslides and sedimentary features, such as sand waves, and provide an indication as to the composition of the seafloor sediment (i.e. acoustically transparent units are typically muddy deposits, whereas acoustically opaque units with little seismic penetration may be sands or gravels). Cycles of sea level change between glacial (cold, low sea level) and inter-glacial (warm, high sea level) times can be determined from sub-bottom profiles by the identification of vertically stacked sediment packages, separated by erosional surfaces. The relative timing of near-surface fault activity, slip rates and hence potential earthquake magnitudes can also be established from such records, in combination with dated sediment cores.
Figure 4: Example of a 3.5kHz sub-bottom seismic reflection profile. The scale on the left is in milliseconds (ms) two-way travel time (TWTT) where 50ms time is equivalent to 40m of sediment at a sound velocity of 1600m/s. VE 9.5 is the vertical exaggeration of the profile. This profile of the continental shelf of Hawke Bay, eastern North Island, shows a layer of acoustically transparent mud that has accumulated during the last 18,000 years. HST is what geologists call the ‘Highstand Systems Tract’, deposited when sea levels reached their highest extent after glacial low-stands in sea level. During the last glacial, sea level dropped 100–120m below present-day sea level due to the formation of large polar ice sheets. The uppermost mud layer lies on a strong erosion surface defined by truncation of the reflections beneath it. These lower reflections are from older sediments, including glacial-aged material, that are known as the ‘Lowstand Systems Tract’ (LST), which was formed when the sea level was at its lowest point. Image courtesy of Philip Barnes, NIWA.
Explosions and seismic reflection A more ‘explosive’ development in seismic reflection techniques has been the rise of compressed air guns and
chemical explosives as sound sources since the 1960s. Such systems produce bubbles of air that are associated with low frequency sound (10-500Hz), which enables greater penetration and imaging of the seafloor subsurface than is possible with very high frequency sub-bottom profilers. During seismic surveys, air guns are towed behind the vessel and the reflected sound is recorded by a towed array of hydrophones (referred to as the seismic ‘cable’ or ‘streamer’) (top panel, Figure 5). The digital recordings of the reflected sound are processed on computers to generate a multichannel seismic reflection profile (lower panel, Figure 5). In modern oil exploration and academic studies of sedimentary basins and geological structure, arrays of multiple air guns may be deployed as the sound source, with a total capacity of up to 4000 cubic inches of air released under very high pressure (c. 2000psi). The interval between each air gun shot, referred to as the shot spacing, is typically in the order of 25–50m (i.e. a shot about every 10–20 seconds). Very large seismic sources enable sub-bottom imaging of up to 5–15km beneath the seafloor. Seismic streamers containing multiple groups of hydrophones enable seismic data processors to remove much of the erroneous acoustic noise from the digital seismic data, thus improving the quality of the seismic signal, as well as providing other information about the physical properties of the sediments and rocks beneath the seafloor. Industry-standard seismic streamers are now 6-12km in length, and have up to 480–960 channels (known as hydrophone groups). A standard practice is to deploy a single seismic streamer, enabling a single 2-dimensional seismic reflection profile to be produced (lower panel, Figure 5). Many oil companies, and even a few research institutes, also collect 3-dimensional seismic images in targeted areas. In this technique, the same air gun shot is recorded on multiple streamers towed in parallel behind the vessel. Extensive processing of such geophysical data is required to account for such parameters as the offsets between the source and streamer, different delay times of reflected sound arriving along the streamer, and to compensate for artefacts, including multiple reflections of the same features.

Figure 5: Example of the multichannel seismic reflection technique. The cartoon (upper panel) illustrates the method whereby air gun shots (red star) are released in the water behind the survey vessel, and the reflected sound arrivals are recorded on multiple hydrophones towed in a cable (or ‘streamer’). The lower panel shows a seismic reflection profile from 3000m water depth off the east coast of the North Island, highlighting seismic reflections (some coloured and numbered) offset by active tectonic faults at the edge of the Australian-Pacific plate boundary zone. The scale on the left is in seconds (s) two-way travel time (TWTT) where 1 second time is equivalent to 1km of sediment at a sound velocity of 2000m/s. VE 3.0 is the vertical exaggeration. M is a multiple reflection of the seabed, resulting from sound passing through the water column twice, hence its later, and ‘deeper’, recording on the seismic reflection profile. Multichannel seismic reflection image in lower panel courtesy of Philip Barnes, NIWA.
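The quoted shot spacing and timing imply a survey speed of around 2.5m/s (roughly 5 knots). A small sketch of that arithmetic (the function is illustrative, not from the article):

```python
KNOT = 0.5144  # metres per second in one knot

def shot_interval_s(spacing_m, speed_knots):
    """Time between air gun shots for a given shot spacing and
    vessel speed: interval = spacing / speed."""
    return spacing_m / (speed_knots * KNOT)

# A 25m shot spacing at ~5 knots gives a shot roughly every 10s,
# consistent with the 25-50m / 10-20s figures quoted in the text.
print(round(shot_interval_s(25, 5), 1))
```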
Seismic reflection profiles (or sections) provide a wealth of geological data, including the position and thickness of sedimentary layers, the location of tectonic faults, and the characteristics of potential oil-bearing reservoir rocks. Multichannel seismic reflection data are used to interpret the sedimentary infill and the underlying structure and tectonic history of offshore basins, with widespread applications in the oil and gas exploration industry, as well as research interests in plate tectonics, climate variability and sedimentology. Acoustic pulses that penetrate into the seafloor are not only reflected, but may also be refracted, or bent, along the surfaces of geological layers of differing density. By using a source that is remote from the receiving (hydrophone) system, such as with ocean seismometers or sonobuoys, the complex pathways taken by refracted sound waves within the sediment layers can be ascertained, providing detailed information on the different speeds of sound and hence densities of buried layers. These data are used to obtain actual measurements of the thickness of individual sedimentary units in order to better constrain the sedimentary and tectonic history of offshore basins. For further information contact: email@example.com
NZASE needs you!! Are you enthusiastic about science education? Join the NZASE today! The NZASE is currently looking for enthusiastic people to assist its standing committees. For only $50 pa you can become an individual member and receive all the member benefits including three issues of the NZST per year. Don’t delay, contact us today: firstname.lastname@example.org or www.nzase.org.nz
To explore Thai learners' views about the nature of science, thirty-six secondary school science teachers and nineteen university science faculty staff were invited to draw a scientist and then discuss their drawings. Can 'drawing a scientist' be similarly fruitful in schools? Yes, as Miles Barker, University of Waikato, and Sirinapa Kijkuakul, Naresuan University, Thailand, explain:

Introduction
Over the last twenty years, learning about the Nature of Science (NoS) has gradually become a prominent part of national science curricula in many countries, including Thailand and New Zealand. Broadly speaking, the NoS is concerned with knowledge about science (or knowledge of science), in contrast with traditional knowledge in science, i.e. facts and concepts in ecology, biochemistry, electromagnetism, etc. Learning about the NoS encompasses understanding the particular features of scientific knowledge and how they differ from other types of knowledge; how professional scientists 'do' science; and how the enterprise of professional science and the lives of citizens who are not scientists interact with each other. But how can NoS be taught in school and university classrooms? This paper describes one pedagogical approach: inviting learners to draw a scientist. We detail what the drawings were like, the classroom discussions that took place, the learning that occurred, and the research conclusions that we arrived at. Although the data were gathered in Thailand, we would contend that what the article reveals about perceptions of science in Thailand is, in fact, very much paralleled by New Zealand perceptions. In other words, while these perceptions of scientists may be coloured by cultural differences to some degree, they can be meaningfully transferred to other countries.
Background to Nature of Science
Although a focus in school science on the nature of science and its relationship to technology and society had long been advocated – for example, by Pella, O'Hearn and Gale in the United States in the 1960s (refer Barker, 2004) – developments were slow, mainly because of a lack of philosophical consensus about the nature of science. However, under the auspices of the Science for All Americans programme, the work of Rutherford and Ahlgren (1990) provided a way forward. They articulated thirteen propositions about the nature of science grouped into three categories (refer Figure 1). During the 1990s it was argued by many, including Claxton, Black, Millar and Osborne in Britain; Bybee in the USA; Matthews in Australia; and Hodson in Canada, that the NoS was an essential and practicable element for future school curricula. In Asia, the need for an appreciation of the nature of science was picked up by Holbrook and Rannikmäe, working in conjunction with the UNESCO Principal Regional Office in Bangkok (Ref 4). The emergence of the NoS has generally resonated positively with a number of related movements: Science, Technology and Society (STS) in Canada; Public Understanding of Science (PUS) in Britain; and Scientific Literacy, for example Twenty-First Century Science, in Britain. However, its development has not been as rapid as many would have liked (Ref 7). Some consider such catalogues of propositions as that of Rutherford and Ahlgren to be unacceptably universalist (Ref 3), and there remains a tendency to conflate the NoS with students' classroom science process skills (Ref 1). Again, and significantly for this article, there are ongoing suggestions that it is not only students who hold unacceptably naïve and empiricist views of science; teachers and teacher educators may hold similar views. For example, Gough (1998) points out that although some teachers may profess a sophisticated interest in multimedia presentations with NoS themes beyond the classroom, when they come to present 'the scientific method' to their students, they stereotype and mythologize the way science knowledge is produced. This is compounded by evidence that teachers lack adequate NoS pedagogical content knowledge (Ref 12) – in other words, they lack the appropriate classroom 'tricks of the trade' to initiate and sustain teaching and learning episodes specifically focused on the NoS (Ref 2).

Features of the Nature of Science
The Scientific World View
• The world is understandable
• Science ideas are subject to change
• Science knowledge is durable
• Science cannot provide complete answers to all questions
Scientific Enquiry
• Science demands evidence
• Science is a blend of logic and imagination
• Science explains and predicts
• Scientists try to identify and avoid bias
• Science is not authoritarian
The Scientific Enterprise
• Science is a complex social activity
• Science is organised into content disciplines and is conducted in various institutions
• There are generally accepted ethical principles in the conduct of science
• Scientists participate in public affairs both as specialists and as private citizens
Figure 1: Features of the Nature of Science (NoS) expressed as thirteen propositions, grouped into three categories, according to Rutherford and Ahlgren (1990).
‘Drawing a scientist’ is a further attempt to address this pedagogical vacuum. Both of us have previously developed teaching materials for exploring the NoS in classrooms: Miles has developed numerous stories from the history of science which illustrate features of science (Ref 5); similarly, Sirinapa has developed an innovative approach to the teaching of photosynthesis in which traditional content learning occurs in parallel with
learning about the NoS from historical anecdotes about the discovery of photosynthesis (Ref 8).

NoS and the science curriculum

Table 1: Comparing the treatment of NoS in Thailand's National Science Curriculum Standards (Ministry of Education, Thailand, 2001) and The New Zealand Curriculum (Ministry of Education, New Zealand, 2007).

Treatment of NoS | Thailand's National Science Curriculum Standards | The New Zealand Curriculum
Stranded structure of the curriculum | 'Nature of Science and Technology' is one of eight sub-strands. | In the Science Learning Area there are five strands: 'The Nature of Science', and four contextual strands.
Relationships between the NoS strand and the other strands | The 'Nature of Science and Technology' strand feeds into all of the other seven sub-strands and hence generates the overarching goal of 'Science for Life' (p.31). | 'The Nature of Science' is the "overarching, unifying strand" (p.28).
Learning levels and NoS | Standards for 'Nature of Science and Technology' are defined for each of years 1 through 12. | Achievement objectives for 'The Nature of Science' are presented as eight levels of schooling, grouped in four pairs across years 1 through 13.
Statements about the importance of NoS | The second of the seven teaching/learning aims for science is "to understand the scope, limitations and nature of science" (p.4). | "The core strand, Nature of Science, is required learning for all students up to year 10" (p.29).
The treatment of NoS in Thailand’s National Science Curriculum Standards (Ref 10) and in The New Zealand Curriculum (Ref 9) shows some structural differences, but both documents accord very high importance to NoS (see Table 1). Again, New Zealand science educators would feel that they are on familiar ground when they examine the actual content. The Thai National Science Curriculum Standards highlights a need to understand the science world view. A fundamental requirement is being able to recognize that, in a different way from other types of knowledge, scientific knowledge is generated from questions which are actually ‘subjectable to investigation or experimentation in a comprehensive way’ (Ref 10, page 27). However, this knowledge is ‘subjected to change when new data and additional evidence crop up’ (Ref 10, page 29). In terms of the knowledge so generated, we need to comprehend that ‘science is a global culture for our knowledge-based societies’ (Ref 10, page 1). The nature of scientific enquiry, described as ‘reasoning, creating, analyzing criticizing, inquiring, solving problems systematically and making decision … based on diverse data and verifiable evidence’ (Ref 10, page 1) and its relevance for our daily lives is similarly emphasized. Scientists do not necessarily pursue these processes as individuals – ‘shared responsibility’ is a key feature of scientific inquiry (Ref 10, page 29). Finally, knowing about the scientific enterprise and its role in society at large is required. Because ‘science permeates into everyone’s daily routine and profession’, we need to be able to ‘use our scientific knowledge reasonably, creatively, responsibly and ethically’ (Ref 10, page 1). This requires learners, as future citizens, ‘to have attitude, moral and ethical attributes appropriate to the practice of science’ (Ref 10, page 3) and hence to be able to refrain from ‘exploit(ing) science solely for one’s own better quality of life’ (Ref 10, page 1). 
Rather, we ‘should use it to guide us towards better utilization, preservation and even development of the environment and natural
resources with equilibrium and long-term sustainability in mind' (Ref 10, page 1). In summary, the national science curricula in Thailand and New Zealand explicitly require that consideration of the NoS should occur centrally at all levels of school science. What are the pedagogical implications, and how might this be done?
Methodology
There were two cohorts of Thailand science teachers: 36 secondary school science teachers (11 men, 25 women) in and around a medium-sized city; and 19 staff (5 men, 14 women) from a Faculty of Science at a medium-sized university. In groups of between two and five, they were presented with the following task: As part of a display to encourage Middle School children to think about their own future careers, you have been asked to DRAW A SCIENTIST. Your group has ten minutes to come up with a drawing. Your drawing can be as imaginative, colourful and enticing as you like. Each group was provided with a sheet of paper (50cm x 40cm) and coloured pens. At the end of ten minutes, the drawings were placed together on a large table, and were discussed. In turn, each group commented on their drawing, and answered questions. Later, the authors coded the 17 drawings as follows: T1 to T9 for the nine groups of secondary school science teachers; and S1 to S8 for the eight groups of university science teachers (who could also be considered professional scientists). The authors analyzed the drawings according to what they suggested about who scientists are, and what they implied about the three categories of the NoS presented by Rutherford and Ahlgren (1990), namely: the science world view; scientific enquiry; and the scientific enterprise. The analysis was checked by a Thai university faculty member who specializes in educational psychology. The purpose of the analysis was not, of course, to seek statistically significant differences between the views of the two groups of teachers (although some interesting comparisons will be suggested in passing). Rather, the purpose was, first, to provide an idea of the range of views current among teachers at secondary and tertiary level; and, second, to suggest that 'Drawing a Scientist', when carried out by school students, might generate a worthwhile contribution to the learning which is prescribed in the national Thai science curriculum 'Substrand Eight: Nature of Science and Technology' and in The New Zealand Curriculum.

The Drawings and the Discussions
The drawings could be positioned on a spectrum ranging from a stereotyped, constrained and even slightly sinister figure to a view of scientists as everyday people (in appearance) engaged in directly human-centred activities in the wider world (Figure 2). Within the spectrum there were drawings which emphasized particular aspects – for example, the relationship between science, technology and economics; and the centrality of values and emotions in science (Figure 3). A more detailed analysis follows. The drawings revealed ideas about who scientists are in terms of gender and age. The teachers depicted four males and two females, and (deliberately or unintentionally) the gender of three 'scientists' was unclear. The science faculty drew, respectively, three males, four females, and two 'scientists' of unclear gender. A spread of ages was evident: the teachers drew one teenager, four adults, one older adult, and three 'scientists' of indeterminate age. The science faculty drew one girl, six adults, and two older adults. The exercise was not very successful in revealing how much these participants understood about the abstract domain of the science world view, such as whether science ideas are subject to change over time. Nearly all the 'scientists' seemed to be contemporary people, with one exception – a teacher drawing which showed 'a future scientist' with an enlarged left brain packed with formulae and a question mark, and semiquavers emanating from a normal-sized right brain. In the drawings there were rich data about scientific enquiry. Where this takes place could sometimes be inferred. One teacher and one science faculty diagram included a laboratory coat and chemicals (Figure 2 - T3). By contrast, five of the science faculty specified contexts in the wider world but only two of the teachers did (Figure 2 - S4). The remaining six teachers and two science faculty staff did not indicate any specific context.

Sometimes, even quite literally, the drawings suggested a wide variety of views about whether, and in what combination, scientific inquiry is a 'heads' (intellectual), 'hearts' (emotive), or 'hands' (action-based) process. Three teacher drawings portrayed the scientist's head as being disproportionately large but no science faculty drawings showed this (Figure 2 - T3). Interestingly, the lower body was included in six of the teachers' drawings and excluded from six of the science faculty drawings (Figure 2 - T3, S4 and Figure 3 - S6). An intellectual dimension was conveyed using words such as 'smart' and 'logic thinking'. An emotive dimension was suggested by words like 'kind', 'moral' and 'be gentle' (Figure 3 - S6). Two drawings – one teacher and one university – actually included a 'love' heart, and two science faculty drawings included scientists' personal family relationships. The 'hands' aspect was conveyed by enlarged hands or muscles, or by showing scientists manipulating laboratory apparatus including computers or telescopes (Figure 2 - T3). 'Hands' were also shown engaging in pursuits where direct action in the wider world was a clear consequence (Figure 2 - S4). In summary, although three teacher drawings (but no science faculty drawings) depicted a 'heads'-only dimension, all the other drawings showed rich combinations of the 'heads', 'hearts' and 'hands' dimensions of scientific inquiry. Inferences could also be made about the participants' thinking concerning the scientific enterprise. That 'science is organized into content disciplines' was suggested in only four of the teacher drawings, but in six of the science faculty drawings (Ref 11). Physics, chemistry and biology were the major disciplines cited, with one mention each of astronomy and environmental science; science teaching or education was alluded to in two science faculty drawings. 'Particip(ation) in public affairs' was highlighted in one teacher diagram and three science faculty diagrams (Ref 11) – for example, 'sharing and understanding society', 'service mind', 'save ocean/energy', 'sociality' (Figure 2 - S4).

But it was not easy to ascertain whether the scientists' participation was 'as specialists' or 'as citizens' (Ref 11). Three science faculty drawings (but no teacher drawings) mentioned 'ethical principles' such as 'honest', 'good moral' (Figure 3 - S6). The two discussion sessions, immediately after the drawing process, were times of alert interest, thoughtfulness and some hilarity. As the above analysis suggests, the ideas in aggregate on each occasion ranged widely over the Rutherford and Ahlgren framework, and hence there were rich learning opportunities about the NoS for everyone in this sharing process. Questions which were raised and discussed included: "In what sense is a young girl a scientist?" (and hence: "Can a baby be a scientist?"); "Is 'nano tech man' actually still a scientist or is he now a technologist?" (Figure 3 - T2); "Is 'carrying the whole world' too much responsibility for scientists?"; and "Is scientists' decision-making different from everyday life?" As suggested above, on both occasions the major point on which NoS learning leverage did not occur was probably the question of how science knowledge is sometimes 'durable' and how, sometimes, 'science ideas are subject to change' (Ref 11).

Figure 2: Drawing T3 (left), which suggests a traditional, intellectualist, laboratory-bound view of a socially isolated scientist, pursuing discipline-structured knowledge; and S4 (right), a more contemporary, socially interactive and environmentally aware view of scientists engaged in an eclectic pursuit of knowledge.

Figure 3: Two more-radical views, less focused on scientific knowledge. Drawing T2 (left) suggests that the entrepreneurial pursuit of science-related technology can, desirably, yield quick financial success. (Those are Thai baht already in the young man's pocket.) Drawing S6 (right) emphasizes that fulfillment for a scientist may result when a 'smart' engagement with knowledge is embedded in appropriate values and emotions.
Conclusion It was not the intention of this paper to seek out significant differences in the way secondary school science teachers and university science faculty staff might perform the ‘drawing a scientist’ task, nor even how these two particular groups approached it. However, what we would suggest is that with these two groups, ‘drawing a scientist’ proved to be a motivating and appropriately challenging task with which they engaged seriously, and from which, communally, they generated composite, more wide-ranging and more astute notions not only of scientists but of the nature of science at large. Our questions are these: Would the outcome be as fulfilling for Thai or New Zealand school students? Would this activity, in practice, help to advance the intentions of the Thai national science curriculum, especially sub-strand 8: “Nature of science and technology”; and the New Zealand science curriculum? Would students construe the task in a frivolous way, draw demonic Frankenstein figures and Einstein parodies, and produce little that has leverage for learning in the subsequent discussion? Or would they produce ideas about scientists that might disabuse each other’s misconceptions and produce novel and more sophisticated illuminations about the nature of science? But do science teachers themselves have a sufficiently clear knowledge of the NoS curriculum requirements in order to facilitate their students’ learning towards these more fruitful understandings?
What additional learning would teachers feed into this situation? How could the 'drawing a scientist' activity be integrated into a structured, systematic teaching/learning programme about the nature of science? And could 'drawing a scientist' thus be a stepping stone toward other school student learning goals: engaging in socio-scientific issues of health and the environment, and achieving a greater degree of scientific literacy? We invite you to explore 'drawing a scientist' further and to let us know what you discover. For further information contact: email@example.com
Acknowledgements We are grateful to the Faculty of Education, Naresuan University, for making contracted funding available to enable us to work on campus in a co-researcher relationship in February-March 2009. We also thank all the participants, especially Ms Chanadda Poohongthong.
References
1. Abd-El-Khalick, F., Bell, R., & Lederman, N. (1998). The nature of science and instructional practice: Making the unnatural natural. Science Education, 82(4), 417-436.
2. Abd-El-Khalick, F., & Lederman, N. (2000). Improving science teachers' conceptions of the nature of science: A critical review of the literature. International Journal of Science Education, 22(7), 665-701.
3. Alters, B. (1997). Whose nature of science? Journal of Research in Science Teaching, 34(1), 39-55.
4. Barker, M. A. (2004). Key aims for science education in New Zealand schools in the 21st century – messages from the international literature. Wellington: Ministry of Education.
5. Barker, M. A. (2006). Ripping yarns: A pedagogy for learning about the nature of science. New Zealand Science Teacher, 113, 27-37.
6. Gough, N. (1998). "If this were played upon a stage": School laboratory work as a theatre of representation. In J. Wellington (Ed.), Practical work in school science: Which way now? London: Routledge.
7. Hipkins, R., Barker, M., & Bolstad, R. (2005). Teaching the 'nature of science': Modest adaptations or radical reconceptions? International Journal of Science Education, 27(2), 243-254.
8. Kijkuakul, S. (2006). Case studies of teaching and learning about photosynthesis in Thailand: An innovative approach. Unpublished PhD thesis, Kasetsart University, Bangkok, Thailand.
9. Ministry of Education, New Zealand (2007). The New Zealand Curriculum. Wellington: Learning Media.
10. Ministry of Education, Thailand (2001). National science curriculum standards. Bangkok: Institute for the Promotion of Teaching Science and Technology.
11. Rutherford, J., & Ahlgren, A. (1990). Science for all Americans. New York: Oxford University Press.
12. Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57, 1-22.
13. Twenty-first Century Science (2005). http://www.21stcenturyscience.org (retrieved April 14)
What differences in the goals of inquiry separate pursuits like science that are literal-minded to the nth degree from myth-making, art-of-memory-based thought structures like those in Matauranga Maori? Philip Catton, co-ordinator for History and Philosophy of Science, University of Canterbury, explains the singular ideal that science works to, contrasting this to goals worked to by people whose culture is oral. When New Zealander Ernest, Lord Rutherford, died, a bunch of really top class physicists at a conference fell into a discussion of an unusual kind. The discussion was historical and it concerned the question 'by how many years would the advancement of atomic physics have been held back had Rutherford never existed'. The group was well qualified to consider this question, and after talking through its ins and outs, the group concluded that had Rutherford never existed then this would have held back the advancement of atomic physics by as much as seven years. We can think about that figure of seven years as a long time, given how many genius minds were working on physics all that while. We can also think of it as a short time. (This vaunted effort by New Zealand's favourite son did nothing that, without him, a mere seven years of work by others could instead have produced.) Thus our pride in New Zealand can be knocked upwards or downwards depending on how we think of the figure of seven years. What really we should be thinking about, however, is the weirdness and the uniqueness of the question, by how many years a certain way of thinking would have been held back had some particular person not existed. This implies an expectation that the intellectual deliverances of science will be supremely independent from the contingencies of personality, background, historical circumstance, etc. of the people who do the science. The idea (and I think it is an exaggeration, but not that much of an exaggeration) is that science just makes our ideas fit the world.
It’s an impersonal thing it’s doing. It’s achieving an objective point of view. Somehow people fall into alignment with one another in ways that make them mutually subject to the ideal of objectivity. Somehow they even manage at various junctures to get so close to achieving that ideal that what they think becomes far independent of human factors, and far dependent instead just upon the way the world is. How is all this possible? Well, it’s important that people share in an unusual ideal for their thinking and their talking. They need all to suppose together that what distinguishes a well-functioning thing to say or think from an ill-functioning thing to say or think has nothing to do with personalities, and has nothing to do with personal authority. The ideal for their thinking and their talking is instead completely impersonal. The question whether something is a well-functioning thing to say or think is just a question about whether you would still be thinking that way if you had
somehow considered everything, and brought it all into best perfected rational order. Thus the right way to think coincides with God's-eye truth, an all-things-considered, rationally-best-systematised understanding. What agrees with this is true, what doesn't is false, all very impersonal, nothing to do with personalities at all.
Memorability
If this discussion is correct, however, can we expect people whose culture is oral to distinguish well-functioning from ill-functioning things to say or think in the same ultra-impersonal way? Plainly we cannot. Or so, I believe, anyone will conclude who read and concurred with my article in Issue 120 of the NZST entitled ‘Is Matauranga Maori Science?’ In the context of an oral culture, one thing that really matters is memorability. A chosen way to talk or to think must fit with the demand for memorability, or it is no good. So it is, in the relevant way, well-functioning for you to think or talk of someone saying “roger, roger” into a cellphone while driving to the beach in a Ford. This was my example in my previous article of a mnemonic for remembering the name ‘Roger Sandford’. Memorability is everything if you have to use your memory in the utterly extensive way that is required in the oral cultural circumstance. Thinking or saying only what is literally true would constrain the memory arts so severely as to destroy them. So, in the oral cultural circumstance, no one could afford to think that what separates a well-functioning from an ill-functioning thing to say or think is just this connection to an impersonal ideal. Another reason why the ideal of all-things-considered, rationally-best-systematised ways of talking or thinking is ill-fitted to an oral culture is that it has this amazing lookout for considering all things. In oral cultures, you can, it is true, consider a lot of things. (Probably not, however, something as arcane as, say, the biochemistry of the muscular contractions of the mandibles of some specific species of beetle.) It’s amazing how full of information people’s minds are in oral cultures. They are vastly more interesting than your or my mind, for they contain vast mnemonic structures. But, there is a limit; there is a limit to the capacity even of a mnemonically-aided mind for oral memory.
So, in the oral cultural context, consideration of everything is an unreasonable aspiration. Moreover, anyone who is a member of a society whose culture is oral knows that. Consequently, orality of culture means that the ideal of reason that underwrites science cannot be useful. Such an orientation of the mind or of discussion would be harmful to people’s ability to get on. Matauranga Maori, in particular, is bound thus to be marked in its form by the exigencies of the oral cultural form. It does not reflect the impersonal standard for a kind of God’s-eye truth that conditions theoretical science. In its every aspect it is richly taken up in the forms of mnemonic arts. These facts about it are as they must be, because Matauranga Maori is the accomplishment of a people that used oral means to propagate down the generations the knowledge that that people needed to live by. Matauranga Maori is, in a word, more mytho-poetic in quality, and less weirdly and strangely (and yet beautifully) literal-minded, than is contemporary theoretical science.
Literalness and literacy
There is some connection between the literal qualities of mind that are taken to the nth degree in natural science, and the thoroughgoing assimilation to culture, within the ambient society, of the technology of writing. Literature, and thus literacy, on the one hand, actually condition, on the other hand, the possibility of literalness of mind. ‘Literacy’ in any society in which the concept has application comes, it is true, to have a positive connotation; and the connotation of the concept ‘illiterate’ is extremely negative in just about any society in which the concept of literacy is even available. For, in any society that has deeply invested itself in the technology of writing, anyone who is illiterate is in a very invidious position. In a society that has writing all around, it is a terrible thing to be an illiterate. We need to keep in mind, however, that non-literacy, orality, is an utterly different condition. It is a condition in which people’s minds are used − and used richly and fully − so non-literacy is a condition in which people’s minds are made far more interesting than any of ours are. They are vehicles for the culture; they are, to that extent, like libraries, and they are rich and interesting. Non-literate minds are, however, organised far differently from libraries. They do not reflect the literal-mindedness of science. The literal-mindedness of science is taken to the nth degree in such facts as that a person might spend an entire career studying the biochemistry of the muscular contractions of the mandibles of some specific kind of beetle. They might really want to know exactly why chemical reactions are capable of making those muscles contract, and the mandibles operate scritch-scritch, so that the beetle can carry things with its mouthparts or crush prey or seeds and survive as it does. Who would want to know such things?
Well, an utterly literal-minded people might, that has no thought whatsoever for the conservation of intellectual effort, since it ramifies role-diversification to such an extent that someone can spend a career researching the biochemistry of the muscular contractions of the mandibles of the specific kind of beetle in question. In that kind of society, people can take on board just such utterly specialised intellectual projects, and really be worried about whether what they are saying biochemically about the muscles of the mandibles of some particular beetle is what they still would be saying if they were to consider everything and manage somehow to systematise it rationally as well as it possibly could be. Such grand literal-mindedness is weird, and requires very special cultural circumstances in order to start. I rather think that science is wonderful; I am personally glad to live in a society that is deeply invested in just such strangely arcane and expansive theoretical endeavour. But I concede, nevertheless, that this orientation comes with an enormous cost for society. All peoples of the world are now caught up in this scientific exercise, and so for all peoples of the world, the totality of culture is beyond anyone’s comprehension. In the case of every one of us, the tiny part of total culture that we inherit is special, and in high degree non-overlapping with the tiny part of total culture that any specific fellow of ours inherits. So we are separated from one another, we are without rich common understandings, and we suffer ill effects from this psychologically and socially, there is no doubt.
There are questions, serious questions, about the long-term sustainability of this cultural form. It has consolidated itself in the world only ever so recently. It seems to have headed us along dangerous paths. There is no saying for sure that it will work out. Wonderful in certain respects, it is costly in others, and although I am glad to be alive at the time of this social experiment, I do not say that the social form in question is superior, or best.
Scientific revolution
In my day job I study the phenomenon of the rise of science historically. These historical preoccupations of mine sit behind my remarks in this present series of three articles about Matauranga Maori. The long-term history of present-day science involves two utterly major changes to the social forms surrounding inquiry. One of the discontinuities was about 350 years ago, at a time now sometimes called ‘the Scientific Revolution’, when, among people in Europe as it happens − though what happened among them is so strange and so weird that the question to ask is hardly “why Europe not elsewhere?” but rather “why anywhere at all?” − there got to be a new orientation in theoretical inquiry that on the one hand maintained its connection to the goal of God’s-eye truth that had been around for a while, and on the other hand connected it to practical experimentation. The earlier discontinuity took place more diffusely in various parts of the ancient world that had long been agricultural and had metallurgy and cities. It created the goal of God’s-eye truth. It created new aspirations for knowledge of the world, aspirations that would not be much fulfilled throughout the following two millennia, though they would be maintained in the social forms all that while. The seventeenth century ignition of science complements the ancient world development of literal-minded intellectual bearings. And that ignition would cause, in short order, a veritable explosion, as we know. You can spend a lifetime studying the relevant seventeenth century intellectual scene and develop a fairly complete overall appreciation for the intellectual ins and outs of those amazing times. Because of the intellectual explosion that then ensued, however, a lifetime’s work on eighteenth century developments can barely scratch the surface of what was going on then. And the history of nineteenth century science is rich beyond countless lifetimes of historical exploration.
By the twentieth century there is simply no way to begin to picture the wealth of what is going on in science. Culture has proliferated far, far beyond being able to fit into a mind. And there are costs and benefits from that. It’s a remarkable, perhaps dangerous, certainly exhilarating, way for humanity to go. I again emphasise, however, that separating the present day from conditions of inquiry in an oral culture there is not only the discontinuity of 350 years ago, and the explosion that ensued from that, but also the discontinuity of roughly two or two-and-a-half thousand years before that, which seeded aspirations for theoretical, literal knowledge into culture for the first time. I do emphasise that inquiry is a trait of all peoples at all times. Everyone inquires about their world, and needs to do so in order to secure the capability of surviving and flourishing, and because it is interesting. All peoples inquire; but the kind of inquiry that we call science is weird and new. To understand science you do need to study the changes of 350 years ago, but you also need to study in the ancient world the factors that drew some societies away from the oral cultural form.
The ancient world and science
These changes began about 12,000 years ago when, in region after region, human population rose above the carrying capacity of the land, at least given the prior material arrangements. People began to plough the land; it was arduous work, but it was necessary in order that people in their new numbers would not starve to death. In Mesopotamia there were lands particularly able to be exploited in this way, so that is where agriculture first commenced. This cultural shift, which very much depended upon metallurgy, not only accommodated the population increase up to that time, but also helped make possible an accelerated further expansion of the total human population in that region. Note that this momentous shift in the material conditions of life not only requires but also immediately further promotes the diversification of social roles. An economy among diverse craftspeople and workers with different skills and forms of fortitude must obtain in order for the large technological commitment and prodigious human effort associated with agriculture to be viable. Agriculture concentrates the production of food for humans into very much higher yields per acre, and so makes feasible the agglomeration of large numbers of people into cities. Consequently, once there was agriculture, the world’s first cities arose. Cities in ancient times would survive for a while, fall over ecologically, arise again in the same or some different location, survive for a while, fall over ecologically, and so it would go. But all the while people were better equipping themselves culturally for urbanised life. They were growing better accustomed to vast ramification of social roles, to life in a condition where shared understandings are by no means an adequate guide for functioning within this or that specific walk of life.
Remarkably enough, through seven thousand years of this kind of experimentation and change − with societies growing to a size and degree of social complexity that must have been an absolute strain on the oral arts of memory − people did not far diverge from the use of oral means for holding onto knowledge. It was only in an extreme condition of need that writing began to be used (about five thousand years ago). Moreover, when people began to use writing, they did so with an enormous sense of regret. They had a sense of cheating against the former cultural form, which they knew, and which they loved as they loved themselves. None of this surprises me. The cultural form that was under threat was one which aided people to know what their fellows are about. To an extent that was significant, because memory arts can contain so much, the oral cultural form made each person a vehicle for culture itself. People felt honour from this and horror against losing this, yet lose it they would when writing displaced the functioning for memory of myth-saturated minds. You can tell how intense the horror was, not only from the fact that it took agricultural peoples roughly seven thousand years to come to the point of turning to writing at all, but also from the strong expressions, in the records of peoples that had developed writing, of resistance to the displacement of oral traditions, and of passionate regret about the harm that writing seemed to be causing to people’s memory powers. For roughly the first two millennia after the invention of writing, i.e. up to about 3,000 years ago, peoples who possessed writing reserved it for really very special purposes. They resisted any wholesale cultural investment in this technology. It is not until 2,500 years ago that you find societies so far invested in this technology as to be fundamentally reoriented in their very manner of thought and discussion.
Yet there are potentials for the mind that are unleashed only when the vast investment in writing has been made, an investment that altogether liberates people from relentless practice of the memory arts. People who no longer practise memory arts will have less interesting minds, it is true, but they will also have opportunity to do with their minds things which myth-making, art-of-memory-oriented peoples never do with their minds. People who were no longer practising the memory arts could for the first time afford to cleave to a strange new ideal of literal truth. It had become not an outrage against the cultural conditions of life, but a new possibility and an ever-increasingly natural thing, to be oriented to the unlikely, weird, singular ideal for thought of all-things-considered, rationally-best-systematised thinking. In fact, between 3,000 and 2,500 years ago, this singular ideal for thought began to express itself in the religious forms of various peoples. The idea of God is very much just the personification, or hypostatisation, of the ideal of all-things-considered, rationally-best-systematised thinking. And remarkably enough, between 3,000 and 2,500 years ago, you begin to see profoundly monotheistic traditions of religious thought. People are saying: yes, we are oriented this way, towards this unitary ideal.
Towards considering all things
You see this orientation developing into their patterns of discussion with their fellows. How am I to discuss things with my fellows if I am literal-minded? Well, a first condition is that if I find myself in disagreement with a fellow on some point, then I will consider that, at most, one of us can be right. To be literal-minded is to think that where there is disagreement, it isn’t the case that there is truth for you and truth for me, but the disagreement instead spells that one of us, maybe both, must be wrong. Another thing that I do, however, in literal-minded discussion with my fellows, is to take it as my task, if I do disagree with someone, to identify why I am thinking as I do and find my way to understanding why they think as they do. I need them to pull into view for me the considerations that they are making in coming to think as they do, just as I need to detail to them the considerations that I made in reaching a different conclusion. And in this effort I am also attentive to whether they are being truly rational about their various considerations made, just as they are attentive to whether I am being rational about my considerations made. There is, moreover, an expectation by us both, drawn in from the literal-minded environment within which we are at work, that if we only consider enough together, all the while being reasonable enough in our considerations made, then we will eventually agree. So what it is to achieve agreement is just to let the considerations speak, under the unforced force of reason, to the conclusions that ought to be drawn. The idea is that there is only the unforced force of the better argument that should settle disagreements between us and impel consent in the end. In these discussions there is no special authority given who I am, no special authority given who you are, and only the authority of the reasons that can be marshalled for or against the positions we’re each defending.
Instead, we hold to an ideal of all-things-considered, rationally-best-systematised thinking. We suppose that any literal-minded question admits one unique kind of answer that is right. For we think that if at present you are drawn to answer some literal-minded question one way and I some different way, then as long as you and I consider enough things together, and are sufficiently rational in our considerations made, then eventually you and I will agree. The expected convergence of our minds requires a kind of unity of the ideal. It requires to be
unitary, or wholly one, the ideal of all-things-considered, rationally-best-systematised thinking (God, you might say if you’re in the mood to personify, or hypostatise that unitary ideal). To be oriented in this sort of way isn’t either affordable or sensible if you are in the condition of an oral culture. The oral cultural form is naturally polytheistic, though it keeps the deities closer to the human level and everyday existence, and never builds any clear separation between sacred and profane. Many protagonists, timescapes, and grades between ancestor and deity, are required to fill out the memory-art mythologies. Moreover, it would be harmful to the prospects for maintaining worthy memory arts and thereby flourishing, to have so buttoned-down a conception of what it is functional to say or to think, or so expansive a conception of what might be talked or thought about, as obtains within literal-minded inquiry. All-things-considered reasoning is a travesty of an ideal when considerations come at cost in strain to memory. But there is also this question of rational organisation. Once you have writing, it makes sense to think of ordering ideas rationally, according to what is related to what. (A library is a good example of this: we find books there next to books of a similar subject-matter. There is an overall rational ordering.) If you put outside the body what there is to think about, that presents a reason to organise it into a rational system. But if you depend on arts of memory, matters are quite otherwise, for arts of memory work by playfulness of thought. They work by weaving in emotional connection, human connections to thought. A vaulting investment in reason would be death to the appropriate play of the mind. Literal-minded regard for the biochemistry of the muscular contractions in the mandibles of some particular beetle is pretty alien to human experience. But the ways for shaping thought
to be memorable must weave in human connection to all that is thought about. Narrative myths with largely human protagonists are the mnemonic structures by which information is retained. The mytho-poetic mind needs play, not rational structure. If you pose a question to a possessor of vast memory arts, the answer will come in a way that might surprise you. You, who will find information by going to that place in the library where such information is concentrated − or to a book, making use of its index, or by Google searching − may not be prepared for the form that information retrieval takes among myth-making, oral-arts-of-memory-oriented people. Ask a specific question about the landscape: upriver from a certain confluence, all the way to the saddle at the top of that valley, how many waterfalls in all are there to climb? To answer it, an adept at memory arts will often fall to reciting myths in poetic form. They must begin at the beginning. Ten or fifteen minutes later they have called forth much other information, but also at last answered the question that you asked. It is possible that for them, there was no quicker way. But then, your search in a library for the same information might take as much time as this, or more. Matauranga Maori is through and through mytho-poetic like this. Yet, let it be emphasised that Matauranga Maori comprises, nonetheless, the knowledge structures of a technologically far advanced people. I will discuss this fact in my third and final article on Matauranga Maori and science (NZST Issue 122). There I will show that the advancements in the knowledge forms of Maori people give the lie to some claims of Karl Popper’s, in passionate defence of Enlightenment ideals, and of what he called ‘The Open Society’. For further information contact: firstname.lastname@example.org
ask-a-scientist created by Dr. John Campbell
Why do insects make noises? Daniel Wright, Green Island School
Scientist Anthony Harris, an entomologist at the Otago Museum, responded:
Some noises result from an insect’s normal functions, such as the whirr of its wings in flight, but very often insects’ noises are produced for communication by specialised structures. Some insects make sounds to attract a mate (e.g. wetas, cicadas, crickets, booklice); to avoid harming a sibling (e.g. sounds keep wood-boring beetle larvae from running into each other); or for defence (e.g. a ground weta in its defensive threat posture raises its barbed hind legs high into the air, then stridulates loudly as the legs are kicked downwards – which might scare a predator). A longhorn beetle squeaks when held, so the attacker may become alarmed and drop it. Many insects communicate with vibrational signals that can sometimes be heard by humans. The insects produce sound in different ways.
WING BEAT. Many small flies attract mates by beating their wings at a certain frequency.
PERCUSSION. This refers to vibration produced by the impact of some part of the body against the substrate, or by striking one part of the body against another. Booklice, although only 2mm long, tap their abdomen on the substrate, causing a tiny sound like the ticking of an old-fashioned watch, audible in many New Zealand homes. If you sit during the evening in a quiet room near a bookcase in an older house, you will often hear it.
STRIDULATION. This refers to vibration made by moving a scraper on one part of the body against a file on another. Male crickets have a file on the second cubital vein of the forewing, and a scraper near the wing margin. The wings are partially raised, then opened and closed so that the scraper of one forewing rasps on the file of the other forewing. The four species of small, black or dark brown native crickets sing during the day with a very high sound, audible in grassy places throughout the country.
TIMBAL MECHANISM. A timbal consists of thin cuticle surrounded by rigid frames. Vibrations result when the timbal buckles, caused by muscles attached to its surface. The singing of male cicadas in summer is produced by this mechanism.
VIBRATIONS CAUSED BY FLIGHT MUSCLES. Some insects produce sounds with their flight muscles when not flying. Honeybees use their flight muscles to make sounds important in social communication – during the bee dance in the hive, a forager gives some of the information regarding a nectar source with flight muscle sounds.
AIR EXPULSION. Some moths whistle or squeak by sucking air in through the mouth. Other moths and some beetles hiss by expelling air through their spiracles.
For further information: email@example.com
By Margot Skinner, Science Leader Functional Foods and Health, the New Zealand Institute for Plant and Food Research Ltd.
As society evolves and the understanding of human health advances, there is an increasing demand for food and food products that enhance health, wellness and lifestyle. A new, global awareness of the role of healthy foods is emerging, and consumers are responding by adapting their approach to health decisions, increasingly seeking a holistic approach to healthcare, and placing greater emphasis on prevention rather than cure. Widespread interest in the possibility that selected foods might promote health has given rise to the term ‘functional foods’. These are defined as foods that encompass potentially healthy products that may provide a health benefit beyond basic nutrition. A food can be said to be functional if it contains a component that benefits one of a limited number of functions of the body in a way that is relevant to health and well-being or the reduction in disease risk, or if it has a physiological effect. In the broadest sense all foods should be considered functional, as they provide nutrition. Nevertheless, some foods may be particularly beneficial in selectively influencing specific physiological processes that improve the quality of life or reduce the risk of disease. Current food trends are summarised yearly by an industry magazine called New Nutrition Business. The ten key Trends in Food Health and Nutrition for 2009 were summarised as follows:
1. Digestive health: the biggest trend
2. Feel the benefit: what consumers want in recessionary times
3. Weight management: a bright future for foods that make you want to eat less
4. Energy: new markets waiting to be discovered
5. Naturally healthy and free from: what everyone wants
6. Fruit: the future of functional foods
7. Kids’ nutrition: makes parents’ lives easier
8. Healthy snacks for the ‘me’ generation
9. Ultra loyal customers: niches to help brands ride the recession
10. Packaging innovation delivers premium prices.
Fruit is the perfect health food and the perfect health ingredient for functional foods. Fruit’s natural ‘healthy halo’ enables it to be sold for its intrinsic healthfulness, but the new ‘super fruits’ such as blueberry, cranberry and pomegranate are positioning themselves with specific and validated health benefits. Likewise, new types of processed foods are being developed. For example, ‘PlumSmart’, a plum juice with added ginger, chamomile and chicory root that is on sale in the USA for digestive health, provides benefits that may act faster than those of the traditional yogurts promoted for healthy digestion. Other examples are ‘Anlene’ milk, for bone health, and pomegranate juice, for heart health. At Plant and Food Research we are investigating the health benefits of fruit and developing new prototype fruit-based functional foods. This involves input from several health areas, and we are starting to gain evidence that fruits that are vital to New Zealand’s economy, and compounds
present in these fruits (phytochemicals) are important in a number of health and wellness benefit areas. These include: gut health; inflammation and immune support; brain health; physical health; and performance and recovery from overtraining and exercise. This new area of functional foods requires different types of trained staff who work together in interlocking teams. At Plant and Food Research they include scientists and technicians who have training in biology and the health sciences (including cell biology, molecular biology, biochemistry, physiology, nutrition, immunology and neuroscience), chemistry, food science and technology, and food engineering. The food engineers and chemists work on designing ways of developing new types of functional ingredients; the health scientists screen these ingredients in rapid, semi-automated tests (high-throughput assays) and then pick the best ingredients to test in humans. For example, they may use human blood cells to screen for food components that protect cells from dying, or that prevent overproduction of chemicals which could lead to uncontrolled inflammation. They also consider interactions between the different compounds in food, because we do not eat a single food in isolation, although it may be tested in this way. The screened ingredients are then put together into prototype functional foods by food scientists and technologists. The prototype foods are then ready to be fed to people in controlled clinical trials so that we can determine whether they have a health benefit. Small pilot trials are carried out at Plant and Food Research, and larger trials are carried out in collaboration with universities and medical schools in New Zealand and across the world. We also have collaborations with other Crown Research Institutes (for example, AgResearch). For readers who are not familiar with Crown Research Institutes, they were established in 1992 as Government-owned businesses with a scientific purpose.
Each institute is based around a productive sector of the economy or a grouping of natural resources. The focus of Plant and Food Research, the second largest Crown Research Institute in New Zealand, is fruit, vegetables, cereals and seafood. The projects that we work on are sometimes funded by the Government and sometimes funded by industry. The overall aim is generally to add value to New Zealand’s primary produce. Some of the projects that we are currently undertaking include: ‘Healthy Berries’, ‘Fruit Products for Asthma’, ‘Wellness Foods’, ‘Mood Foods’, ‘Foods for Fullness’ and ‘Nutrigenomics’. Nutrigenomics adds another dimension to functional foods by taking into account that people’s genes may affect how they respond to functional foods. Research in this area will lead to foods and diets that are personalised for an individual’s genetic make-up. The area of functional foods will attract students who are interested in nutrition, health and food. They can train through the health science route taking human biology, nutrition and chemistry courses, or go through a chemistry, food science/technology/engineering route, but it is beneficial if they have some understanding of biology and nutrition. For further information contact: firstname.lastname@example.org
foods for health and wellness
sound
Science, Music, History, Technology, and Language – the topic of Sound, in all its forms, can be integrated into many areas of the curriculum. The following is only a sample of resources that focus on the science and technology of sound held by the National Library, and we also have resources on sound for other areas of the national curriculum.
Feel the Noise: Sound Energy by Anna Claybourne. What better way to introduce the concept of sound to a young teen than in the context of a rock concert? Each chapter introduces facets of sound using easy-to-read text, photographs, diagrams, and insets of interesting facts. Chapters include: Sound check; How loud is that?; and The speed of sound. This is another title from the Raintree Fusion series, which uses themes of interest to young people to introduce them to science concepts. Visit the Raintree website for more information at: http://www.raintreelibrary.com
Stupendous Sound by Nadia Higgins (ABDO, 2009). With vibrant colours and simple text, this book provides a great introduction to the science of sound for junior primary students. Because it just provides explanations, the teacher would need to find practical ideas to support these. A fun presentation, which could be read aloud by teachers to start off a unit of work.
Sound, Light and Radiation by Andrew Solway, published by Wayland (2007). Recommended for Year 7 plus. This title links concepts of sound to everyday examples. Details of historical discoveries and cutting edge applications are scattered through the pages.
Sound: Listen Up! from the Raintree perspectives series ‘Science in your life’, published by Raintree (2006). The chapters of this book, which is suitable for junior primary students, are based on simple questions: What is sound? How do we make it? How do we hear? How do we speak? The book is illustrated with photographs and sprinkled with easy activities the reader can try.
by Melva Jones
Making Waves: Sound: Everyday Science by Steve Parker, published by Heinemann (2005). Using text accompanied by photos, diagrams, charts and graphs the author gives a detailed overview of sound, how it is formed and its properties. Recommended for intermediate to secondary level students.
Sound and Vibrations by Peter Riley, published by Franklin Watts (2005). Another title for Year 7 plus. Observations and experiments on each page encourage the student to explore how sound is made, how it travels, and the concepts of pitch and volume. On each page an inset gives information on scientists of the past who have contributed to knowledge and understanding.
Other sources of information on ‘Sound’. Interested in the recording of sound and its history? There are great images of early recording technologies on Discover, a database of multimedia items selected for use by NZ schools. Go to National Library’s website: www.natlib.govt.nz/schools and select Discover from the centre column. Explain that Stuff is a useful website to help students understand how things work. Synthesizers, microphones, hearing aids, voice recognition software, are among the huge number of things that are explained clearly, many illustrated with simple diagrams. Find these at: http://www.explainthatstuff.com/index.html Primary school young scientists will enjoy exploring this site: http://www.primaryschool.com.au/science.php; go to Sounds Great to find mini-sites on a huge range of sound-related topics.
By Jacquie Bay (President BEANZ)
BioEd2009 Evolution in Action was a meeting of over 160 secondary and tertiary Biology Educators coordinated by the Allan Wilson Centre on behalf of the International Union of Biological Sciences (IUBS) and UNESCO as part of the Darwin 200 celebrations. The aim of Darwin 200 has been to celebrate the impact of Darwin’s ideas on current scientific knowledge. The programme, coordinated by Giorgio Bernardi, Vice-President of IUBS, has taken the form of a series of scientific symposia in five continents, covering different themes. Hosted in New Zealand by the Allan Wilson Centre (AWC), BioEd2009 was the meeting in this series that focused on Biology Education, and was convened by John Jungck (USA) and Peter Lockhart (NZ). Susan Adams of the AWC worked tirelessly to ensure that the programme included sessions that were of interest to the secondary sector. The timing of the symposium to coincide with Darwin’s birthday on February 12th, and the cost of the meeting, posed major problems for teachers. However, BEANZ worked through these issues with AWC and was very pleased when a reduced registration fee was put in place for secondary teachers. This, combined with 10 scholarships that were jointly organised by AWC, BEANZ and the Conference organising committee, saw over 60 secondary teachers attend the Conference. Six full scholarships including travel, accommodation, teacher relief and Conference registration were provided by the Allan Wilson Centre, Maurice Wilkins Centre and National Research Centre for Growth and Development. All three centres are members of the elite group of seven NZ Centres of Research Excellence. In addition, the Conference organising committee sponsored a further four teachers with registration costs. Scholarship recipients were selected by a BEANZ/AWC panel, and all met with their host organisations during the Conference.
Sessions from the Conference are available to watch at: www.allanwilsoncentre.ac.nz/teachingResources.htm. The BEANZ BioEd2009 Teachers’ Breakfast was an extremely successful event attended by over 75 Conference delegates including secondary school teachers from throughout New Zealand, international science educators working in the secondary sector, and representatives from tertiary science organisations. The breakfast provided a significant opportunity for
BEANZ at BioEd2009
secondary science educators attending the Conference to meet together. The breakfast also provided an important opportunity to inform representatives of the tertiary biology sector about the work of BEANZ, and we hope to engage more closely with them in the future. The gathering was addressed by the Hon. Margaret Austin, Vice President of the Royal Society of New Zealand with responsibility for Education, and Chairperson of the New Zealand National Commission for UNESCO. Margaret, who taught science and biology in New Zealand secondary schools for 30 years prior to her election to Parliament in 1984, shared her rich knowledge of the history of development of biology education in New Zealand, challenging delegates to ensure that the education we provide is relevant to the world of twenty-first century children. Finally, the ten teachers who had received awards supporting their Conference attendance (from the Allan Wilson Centre, National Research Centre for Growth and Development, Maurice Wilkins Centre), plus the Conference organisers were recognised. And honorary membership of BEANZ was awarded to the three Centres of Research Excellence. The BEANZ BioEd2009 Teachers’ Breakfast was made possible by the generous support of BEANZ, Canterbury Science Teachers’ Association, Victoria University of Wellington, DairyNZ, Bio-Rad, ESA Publications Ltd, and The Liggins Education Network for Science.
BEANZ Honorary Memberships Honorary Memberships to BEANZ have been presented to the following organisations in recognition of their continued support for Biology Education in New Zealand Schools: • The Allan Wilson Centre for Molecular Ecology and Evolution • The Maurice Wilkins Centre for Molecular Biodiversity • The National Research Centre for Growth and Development. BEANZ looks forward to an ongoing relationship with these three Centres of Research Excellence.
new BEANZ resources Contributors are now being sought for two exciting new BEANZ resources: L4 (Scholarship) Biology and AS 90718 (Biotechnology, Internal), writes Bill van den Ende. For many years, BEANZ has produced a Level 3 Biology Mock examination paper. This is available for schools to purchase at a modest cost. In 2008, we also produced a CD containing mock examination questions from the 2004 to 2007 papers to be used as a source of test, homework and worksheet questions. Biology teachers have suggested that BEANZ should also produce other resources, two of these are: test questions and/or an exam for Level 4 (Scholarship) Biology; and resources and assessments for AS 90718.
BEANZ is now seeking contributions from biology teachers for these proposed resources, which will be edited, collated and distributed (on CD) with the BEANZ examination. Income from the sale of this CD will be allocated as follows: 50% retained by BEANZ to fund projects and activities such as IBO and curriculum development; and 50% will be paid to contributors on a pro-rata basis. So if you are a Biology teacher who has developed relevant materials in your school then we would like to hear from you. For further information and criteria for submission please contact Bill van den Ende (Editor): email@example.com or firstname.lastname@example.org
how up to date is your periodic table? By Suzanne Boniface It all depends on when it was printed – or the publication date of the textbook in which it is found. Periodic Tables printed before 2004 will not include the official names for elements 110 and 111. Some older tables stop naming elements after the end of the Actinides, so that elements from Element 104 onwards are just given numbers or systematic names based on their Latin numerals. This is probably because, although elements 104 to 106 were discovered between 1969 and 1974, their naming was not resolved until 1997. Different isotopes of element 104, rutherfordium, were discovered by research groups in Russia in 1964, and in the US in 1969. Since the team that discovers an element has naming rights, both teams considered that they should be allowed to propose the name for this element. Eventually, in 1997 the International Union of Pure and Applied Chemistry (IUPAC) accepted the American proposal and named the element after Lord Rutherford. There was also controversy about the naming of element 106. The discoverers suggested the name seaborgium after their colleague, Glenn Seaborg, the American chemist who first prepared many of the elements beyond Uranium. However, in 1994 a committee of IUPAC adopted a rule that no element can be named after a living person. This was fiercely objected to by
the American Chemical Society, as Einstein (Einsteinium, element 99) and Fermi (Fermium, element 100) were alive when these elements were named after them. It was not until 1997 that the name seaborgium was recognised internationally. The discovery of elements 107 to 112 can be credited to the research group at the Society for Heavy Ion Research (GSI) in Darmstadt, Germany. By firing a beam of ions at a target metal it is possible to get the two elements to fuse to create a new element. The new elements created in this way are not stable and decay very rapidly into lighter elements. Elements 107 to 109 were created in this way in the early 1980s and their names confirmed in 1997. Element 110 was first produced in 1994, and in 2003 was officially named darmstadtium, symbol Ds. Element 111, also created in 1994, was named roentgenium, symbol Rg, in November 2004. Element 112 was discovered in 1996 and has yet to be named. These last elements to be discovered have very short half-lives, ranging from 56 ms for Ds to 0.6 ms for Uub (element 112). For further information and the Periodic Table below visit: http://www.gsi.de/portrait/heavyelemets_e.html See page 17 for the periodic table.
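Those half-lives can be made concrete with a quick classroom calculation using the standard radioactive-decay relation N/N₀ = (½)^(t/t½). The sketch below is illustrative only; the times chosen are made up, and the half-lives are simply those quoted above:

```python
# Fraction of a radioactive sample remaining after time t,
# from N/N0 = (1/2) ** (t / half_life).

def fraction_remaining(t_ms: float, half_life_ms: float) -> float:
    """Return the fraction of atoms left after t_ms milliseconds."""
    return 0.5 ** (t_ms / half_life_ms)

# Element 112, with the ~0.6 ms half-life quoted in the article:
for t in (0.6, 3.0, 6.0):
    print(f"after {t} ms: {fraction_remaining(t, 0.6):.4f} of the sample remains")
```

After ten half-lives (just 6 ms here) less than a thousandth of the sample is left, which is why these elements must be detected one atom at a time as they decay.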
Chemistry Olympiad training camp
The team to represent NZ at the 41st International Chemistry Olympiad Competition is: Kevin Jan, Burnside High School (Christchurch); Joel Lawson, Macleans College (Auckland); Jared Lewis, Dunstan High School (Alexandra); and Hyun Sun Roh, St Cuthbert’s College (Auckland). They will be accompanied by Dr Robert Maclagan from Canterbury University, and Dr Jan Giffney from St Cuthbert’s College in Auckland. The Olympiad is being held in Cambridge, England. While only four students are picked for the team, the experience of the training programme is an excellent opportunity to extend our top students and inspire them to pursue science as a career. One student commented that: “This is the first time I have been in the same place as so many highly talented students.” The annual chemistry Olympiad training camp was held during the first week of the April holidays. Twenty of the country’s top chemistry students had been chosen to attend the training camp based on their results in a series of assessments including: tests; assignments; and completion of nine modules of work that took their chemistry understanding and knowledge beyond Year 13 level. During the camp students attended lectures and laboratory sessions, before finally sitting both a three-hour written examination and a three-hour practical examination.
Students at the Chemistry Olympiad training camp that was held at St Cuthbert’s College, Auckland during the first week of the April school holidays.
By Paul King
This is a great demo for junior classes, an impressive toy for parents’ evenings, and for error calculations it has never been bettered. It’s warm, noisy, accurate, memorable and quite easy to make; or find a pupil with a practical bent who wants to win big at the local science fair!
How to make the flame tube 1. At one end of a piece of galvanized downpipe (don’t try plastic - believe me) epoxy-glue a tin lid. Note: Narrower pipes than downpipe do not work anything like as well. 2. Drill a line of about 100 holes, of 1mm diameter, 1cm apart. Make sure at least 10cm is clear at each end, to prevent the speaker burning out. 3. Put the gas intake in the middle – it needs a spreader to distribute the gas evenly and avoid having a permanent antinode directly opposite the intake. Note: Halliday & Resnick, 1977, have the inlet closer to the sealed end, which may be better. 4. Stretch a balloon, or a flat bit of plastic glove, over the open end, sealing it gas-tight with masking tape. It’s now ready to go.
Instructions for use 1. Connect up the flame tube to a suitable natural gas supply (either lab or bottle). 2. Tape a speaker over the flexible diaphragm at the end of the tube. 3. Connect the speaker to a suitable frequency generator (e.g. a Unilab set to square-wave output). 4. Turn on gas, light the flames, check that all holes are alight. 5. Turn down gas until the flames are short, with small yellow tips. 6. Turn on the frequency generator and shift the frequency until the flames dance. Adjust the frequency until the flame tips map out a sine wave with the maximum amplitude you can find. 7. Measure the distance from one antinode to the next. This is λ/2. Record the corresponding frequency. v = fλ. (The book value for the speed of sound in methane is 430 m s⁻¹.) 8. Find a new resonance and repeat the measurement.
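Step 7 is where the error calculations come in, and it is easy to automate if pupils record several resonances. The sketch below shows the arithmetic; the 1000 Hz drive frequency and 0.21 m antinode spacing are invented sample readings, not measurements from an actual tube:

```python
# Speed of sound from flame-tube readings: v = f * wavelength,
# where wavelength = 2 * (distance between adjacent antinodes).

BOOK_VALUE = 430.0  # speed of sound in methane, m/s (value quoted above)

def speed_of_sound(freq_hz: float, antinode_spacing_m: float) -> float:
    wavelength = 2.0 * antinode_spacing_m  # antinode-to-antinode is lambda/2
    return freq_hz * wavelength

# Hypothetical reading: 1000 Hz drive, antinodes 0.21 m apart.
v = speed_of_sound(1000.0, 0.21)
percent_error = abs(v - BOOK_VALUE) / BOOK_VALUE * 100
print(f"v = {v:.0f} m/s ({percent_error:.1f}% from the book value)")
```

Averaging the values from several resonances (step 8) gives a pleasing demonstration of how repeated measurements shrink the error.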
ask-a-scientist created by Dr. John Campbell When sucking helium into your mouth, why does your voice go squeaky when talking? Jonathan Gill, Kings High School. Scientist Marlyn Jakub, a physicist at Otago University, responded: To answer this question we must look at the way humans generate sounds. Human voice production can be simply described as a source of slightly pressurised air (from the lungs) that causes air to flow over vibrating membranes (the vocal cords), and these vibrations excite the air in a 17-centimetre-long resonance chamber (the trachea plus the mouth and nasal chambers). By inhaling some helium gas, some of the air in this resonance chamber is replaced by helium, which increases the propagation speed for sounds travelling inside the chamber. Ultimately, this process raises the frequency of resonance sounds emanating from the mouth, as described below. Because helium is such a light gas, it has a sound speed three times faster than sound in air, so if half the air in the chamber is helium, the sound speed will be increased to roughly 1.5 times the speed in air. The resonance frequencies of the chamber will correspondingly be increased by the same 1.5 factor, since the size of the resonance chamber is unchanged and a resonance frequency is proportional to the sound speed. So it should not be surprising that the higher resonance frequencies make the human voice sound ‘squeaky’, or sound like the cartoon character Donald Duck. A listener usually finds that such higher-pitched ‘helium speech’ is much harder to understand than ‘air speech’. There is an important application involving helium-affected speech sounds, as deep-sea divers regularly use 80% (or even higher) helium gas in the gas mixture that they breathe during long periods in the water. It is important for the divers to be able to converse with other divers and with people on the surface ships. Some efforts have focused on prior speech training for the divers. In some applications, electro-mechanical devices have been constructed so that the diver’s communication signal is altered to one that can be more easily understood by listeners. For further information: email@example.com
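The "three times faster" figure can be checked from first principles with the ideal-gas speed of sound, c = √(γRT/M). The sketch below is a rough model only: it treats air as a single gas of molar mass 29 g/mol and ignores humidity, but it reproduces the roughly threefold speed-up in pure helium:

```python
from math import sqrt

R = 8.314  # J/(mol K), gas constant
T = 293.0  # K, about room temperature

def sound_speed(gamma: float, molar_mass_kg: float) -> float:
    """Ideal-gas speed of sound, c = sqrt(gamma * R * T / M)."""
    return sqrt(gamma * R * T / molar_mass_kg)

c_air = sound_speed(1.40, 0.0290)    # diatomic mixture, M about 29 g/mol
c_he = sound_speed(5 / 3, 0.0040)    # monatomic helium, M about 4 g/mol

print(f"air: {c_air:.0f} m/s, helium: {c_he:.0f} m/s, "
      f"ratio {c_he / c_air:.1f}")
```

Since a resonance frequency of the vocal tract scales directly with c, the same ratio sets how far the voice's resonances shift upward.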
flame tube – the standing wave demonstration
Primary Science Conference 2009 By Helen Trevethan, University of Otago College of Education, and Warren Bruce, University of Canterbury Education Plus. “Is science losing favour among school children?” Ian Milne asked this question in his article entitled Time to bring science alive (NZST Issue 120, pp 32–33). The answer has to be no if their teacher attended the travelling Primary Science Conference: Active Learning: Science Talk from Classroom to the Dinner Table. The Conference is the brainchild of Ian Milne, and was held in four venues throughout the country. It is sponsored by TRCC, the Royal Society of NZ, Ministry of Education, local universities, the NZASE and regional Science Teacher Associations. This year, over 300 primary teachers registered for the Conference. With such a great turnout, Ian has truly developed a highly successful formula that includes international speakers travelling the country, presenting workshops and keynotes at each of the four venues. Unfortunately, at the time of writing, Ian is unwell, and we all wish him a speedy recovery. This year there were keynote addresses and workshops from UK science educators Brenda Keogh and Stuart Naylor; Martin Braund and Tanya Shields (York University); Leigh Hoath (Bradford University); and Dan Davies (Bristol University). Keynote speaker Terry Crooks also generously shared his thoughts about science. One of the strengths of these Conferences is the enthusiasm and excitement that the teachers experience. Most workshops include opportunities to explore, play, share ideas and learn. By the end of the Conference teachers had not only received exciting new resources and ideas, they were also determined to use their newfound knowledge and skills. A good example of the wow factor in science was provided by the Steve Spangler workshop – presented
by Warren Bruce and Chris Astall, both from the University of Canterbury – where teachers explored how children could be taken beyond the wow factor into meaningful science. Delegates were presented with a range of activities that could arouse children’s curiosity, encourage children’s questions, and allow for investigations and the sharing of findings. The importance of the Nature of Science was emphasized in all the activities, as were the Ministry of Education support materials. Resources highlighted at the Conference included: Science postcards (www.sciencepostcards.com); LEARNZ; the NZCER publication Key Competencies: The water cycle: A science journey; and the Science Learning Hub. One theme which emerged throughout the country was the value of strengthening links with science beyond the classroom; a scheme being developed in Dunedin is a good example of this. ‘Adopt a Scientist’, promoted by the Division of Sciences, University of Otago, is intended to support teachers of Science and students in Dunedin schools by providing participating schools with the opportunity to develop a partnership or relationship with a friendly scientist. It is clear that there are many primary teachers in New Zealand who recognise the importance of science in the NZ curriculum and who are looking for further professional development opportunities. Those who attended the latest Primary Science Conferences indicated this by using some of their holidays to engage in science. Thank you for your commitment to primary science, Ian. Get well soon.
Astronomy in the new curriculum: Levels 1–5
By Jenny Pollock
Because parts of the Astronomy Achievement Objectives (AO) are new, some teachers are unsure what they should be teaching. In Levels 1–5 the AOs are deliberately broad to allow for flexibility and to take advantage of teacher strengths and interests. However, what is more important than anything else is to instil a sense of wonder and curiosity about the Solar System and the Universe. How you get there is a lot less important than helping to develop a fascination with what is out there. Also important is that children develop the awareness that the Earth interacts with, and is affected by, other astronomical bodies. Below are some good websites that I have found on various and, in some cases, unexpected aspects of Astronomy. 1. BBC websites are often good places to start when wanting resources. Science & Nature: Space: http://www.bbc.co.uk/science/space/ gives links to latest news and lots of ideas for interesting topics to investigate. For games that can be played visit: http://www.bbc.co.uk/science/space/playspace/. 2. The conspiracy idea that mankind didn’t get to the Moon always leads to robust classroom discussions. For the landing sites of the Apollo missions visit: http://www.google.com/moon/. There is also a link to the Mythbusters series that showed that the landings weren’t a hoax: http://www.space.com/entertainment/cs-080827-mythbusters-apollomoon-hoax.html. These can also be found on YouTube. 3. Mars is endlessly fascinating, especially seeing that it is the one planet that humans may land on in the future. For interesting facts and a history of Mars visit: Mars Madness: http://www.space.com/php/multimedia/marsmadness/. 4. When the International Space Station is in the sky it is very bright and worth watching out for. To locate the International Space Station in real time visit: http://www.heavens-above.com/.
5. On a similar note, the following website gives you the best time to watch solar flares, sunspots, meteor showers and auroras: http://spaceweather.com/. 6. Don’t forget to investigate the moons of the Solar System as well as the planets. Space probes are discovering very unusual phenomena such as: Ice Volcanoes of Titan: http://www.dailygalaxy.com/my_weblog/2008/08/ice-volcanoes-o.html; cliffs between 3–20km high on Miranda: http://www.seasky.org/solarsystem/sky3h2.html; and the weird moons of Saturn, such as Iapetus, which has one side ten times darker than the other, and Mimas, which has a crater ¼ its size: http://csep10.phys.utk.edu/astr161/lect/saturn/moons.html.
Picture from: http://en.wikipedia.org/wiki/File:Mimas_moon.jpg
7. And for those of us who want to dream, the following website tells how you can travel to see the next total solar eclipse: http://www.travelnotes.org/Travel/Eclipse99/links.htm.
ask-a-scientist created by Dr. John Campbell Why does a loud noise such as from an explosion cause a ringing sensation in the ears for some time afterwards? Laurent Manderson, Palmerston North Boys’ High School Medical specialist Tim Loads, an audiologist at Wellington Hospital, responded: Explosive noises can cause damage to cells and structures in our inner ear. Our eardrums vibrate when a sound reaches them, and three small bones transmit the energy of the sound wave to the inner ear, which contains thousands of minute ‘hair cells’. These move in response to the sound wave and convert the mechanical energy of the sound wave into nerve impulses that are sent to the brain. After severe noise exposure, such as an explosion, the damage to the hair cells can range from
temporary swelling to complete destruction, depending on the intensity of the noise. As the hair cells are very sensitive to mechanical change, it is thought that the ringing sensation in the ears, known as tinnitus, is caused by the swelling or damage of these cells. Thus the auditory nerve becomes stimulated via the hair cells in the inner ear even though there is no physical sound present. If the damage is restricted to swelling, this will subside after a few days and the tinnitus usually diminishes. A temporary hearing loss can also occur. However, if the hair cells are permanently damaged structurally then permanent hearing loss results and the tinnitus may persist. It is important to protect our delicate hearing mechanism from all loud noises and explosions. For further information: firstname.lastname@example.org
qualifying to manage chemical hazards Secondary school science technicians have long been aware that school laboratories store and use chemicals that are classified as hazardous, yet the Hazardous Substances and New Organisms Act (HSNO) was only enacted in 1996, with its regulations coming into force on 2 July 2001, writes Helen Roper, Tawa College. In July 2001, the inaugural National Science Technician’s Conference was held at Victoria University, Wellington, where many of the speakers spoke about how this legislation would affect the management of school laboratories. In January 2007, a Code of Practice for School Exempt Laboratories (COP, the Code) was gazetted. In the introduction to the COP it is acknowledged that ‘… school personnel are unlikely to have the resources to independently comply with the provisions of the Act and Regulations’ (COP 2007, p.2). The introduction states that ‘the intention of this Code of Practice is to provide practical guidance on the steps schools should take in order to comply with the relevant sections of the HSNO Act and Regulations.’ If schools decide not to follow the COP, then the school board is legally required to ensure that the management of the hazardous chemicals fully complies with all sections of the Act and Regulations. The practical management of the Code impacts on the day-to-day work of the science departments’ technician(s). Many technicians expressed an interest in professional development training in the management of chemical hazards that led to a recognised qualification. Otago University offered a postgraduate programme in hazard assessment and management which can be studied extramurally. In 2008, the University offered to tailor their introductory paper HAZX401 Management of Chemical Hazards to focus on the management, use and storage of chemicals in the school context. The course is divided into a series of modules.
An historical overview of hazardous incidents involving chemicals in both New Zealand and overseas introduces students to some of the issues involved. The use of Safety Data Sheets (SDS), chemical hazards, safe handling and disposal of specific classes of chemicals and emergency management are other topics covered in the course. The HSNO legislation is introduced and an in-depth study of and use of the Code of Practice for schools is required. The paper is a 400-level one and the University assumes that either the participant will have a minimum background in chemical knowledge to Year 13 (or University Level 1) or else have the ability to access the necessary chemical facts from textbooks or the Web. Those who completed the course in 2008, while finding the chemical calculations an interesting challenge, found themselves competent to complete the work required. The University provided tutorial support and extra practice exercises to help with competency. The course takes 22 weeks of study and the University suggests an expected time commitment each week of 5–7 hours.
There is no end-of-course examination. Two written assignments provide opportunities to use the module information in practical-based questions. The final assessment is a case study, based on a school scenario, that makes use of all the material taught in the course. Twelve people from around the country enrolled for the course in 2008, and at the end of the course were asked to give feedback. All participants expressed a positive appreciation of the worth of the paper and its usefulness and relevance to their work. One technician was relatively new to the job and appreciated the guidance on what the role of technician can entail. She commented: “The study provided guidance on what my role was beyond getting gear ready and cleaning up, specifically the implications of any errors I make, and the reasons why accurate dilution, labelling and limiting quantities is important in the school environment.” All participants regarded having a working understanding of the Code of Practice and learning how to locate information (for example where to find Safety Data Sheets, navigating the ERMANZ site) as beneficial gains. Many could see how the knowledge gained could be used to support teachers in the safe use of hazardous chemicals in the secondary school setting. A designated Laboratory Manager commented: “Since finishing the paper I have introduced some strategies in our department to make us compliant with the COP and to make the place as safe as we can and easy to manage. As the Laboratory Manager I found the information I got from the HAZX401 paper helpful in doing this.” Although the group taking this course in 2008 were all technicians, the HAZX401 paper is also valuable professional development for teaching staff. The paper is again being offered this year (commencing 1 May, 2009) and it is hoped that technicians and teachers will be able to enrol. However, the cost of the paper, and who pays, is an issue. In 2009 the cost is $967.88.
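The "accurate dilution" that technician mentions comes down to the familiar relation C₁V₁ = C₂V₂. The sketch below shows the arithmetic; the stock concentration and target volume are invented examples for illustration, not figures from the HAZX401 course:

```python
# Dilution calculation: C1 * V1 = C2 * V2,
# so the volume of stock needed is V1 = C2 * V2 / C1.

def stock_volume_needed(c_stock: float, c_target: float,
                        v_target: float) -> float:
    """Volume of stock solution (same units as v_target) to dilute."""
    if c_target > c_stock:
        raise ValueError("cannot dilute to a higher concentration")
    return c_target * v_target / c_stock

# e.g. preparing 250 mL of 0.1 mol/L acid from a 2.0 mol/L stock:
v1 = stock_volume_needed(2.0, 0.1, 250.0)
print(f"measure {v1:.1f} mL of stock and dilute to 250 mL")
```

The check against diluting "upward" is the sort of sanity test the course encourages: a mistake here means a solution far stronger than the label says.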
At present there is no professional recognition or increased remuneration for technicians who complete this or other chemical management papers. For a technician to pay the entire cost is a big ask given the present pay scales. Because schools are the beneficiaries of having someone on-site with an understanding of the issues involved in chemical management, they should ideally pay the cost. Most of the 2008 participants were self-funded, although in some cases the school reimbursed part or all of the cost. Yet there are ways for schools to fund this: two technicians had their course fees met by their school’s staff professional development budget, and another had fees paid from their school’s Health and Safety budget. Further information about the course: http://www.otago.ac.nz/subjects/hazx.html For expressions of interest for the 2010 course email: email@example.com
Reference Code of Practice for School Exempt Laboratories http://www.ermanz.govt.nz/resources/publications/pdfs/COP15-1.pdf
Transformation and Change 5 to 8 July 2009 University of Otago, St David Lecture Theatre Complex This conference will be held in conjunction with the annual meeting of BEANZ (Biology Educators of New Zealand) and will be hosted by the University of Otago. Nationally acclaimed biological scientists will present keynote speeches on the theme ‘Transformation and Change.’ Conference delegates will be able to participate in a wide range of workshops and fieldtrips in anthropology, biochemistry, botany, marine science, microbiology, physiology and zoology. There will also be a focus on updating current thinking on teaching and learning processes for the 21st century learner. For further information contact the conference convenors: firstname.lastname@example.org or email@example.com
ChemEd 09
‘Chemistry on the Edge’ 5 to 8 July, 2009
University of Canterbury, Christchurch For further information contact: Richard Rendle Tel: 03 3597275 Fax: 03 3597248, email: firstname.lastname@example.org
National NZIP Conference incorporating
The Science Technicians’ Association of NZ Conference 2009, Auckland ‘Earth, Wind and Fire’ 7 to 9 October 2009
This Conference will appeal to all school science technicians, and also to some technicians from tertiary institutions (such as Polytechnics). For further information contact the Convenor, Beryl McKinnell email@example.com
SITUATIONS VACANT NZASE members
Are you looking for a new challenge? Do you want to receive three issues per year of the NZST? Do you want to become more involved in science education? Do you want to help develop resources and member benefits?
The 14th National NZ Institute of Physics Conference, incorporating Physikos, the NZ Physics Teachers’ Conference
6-8 July 2009
FIRST ANNOUNCEMENT
University of Canterbury, Christchurch
Energise your physics teaching with three days of ideas, stimulation and interactions! For further details visit: www.nzip.org.nz
FIRST ANNOUNCEMENT: SciCon, 4–7 July, Nelson
The NZASE flagship biennial conference will be held in beautiful Nelson.
SciCon 3 days of professional development and INSPIRATION to take back to your classroom! Diary these dates now and put it in your budget… PLAN TO BE THERE
Then join the NZASE today because we need you!!
For only $50 pa you can become an individual member and receive all the many member benefits including three issues a year of the NZST.
Join today by contacting us at: firstname.lastname@example.org or www.nzase.org.nz
www.nzase.org.nz
Call for Abstracts online late 2009
Phone: 03 546 6022 email@example.com