NZASE #119


science teacher
Featuring: Water
Tracking fish
Stream biodiversity
Fish feel pain too!
Reverse osmosis keeps water flowing
ACC links world's oceans
Coastal Explorer now online
And more...
Plus: Rutherford's Nobel Prize centennial
Bringing social issues into the science classroom

Number 119

ISSN 0110-7801




Mailing Address: NZASE, PO Box 1254, Nelson 7040
Tel: 03 546 6022  Fax: 03 546 6020
Email: nzase@confer.co.nz

contents

Editorial 2
From the president's desk 3
Celebrating Rutherford Nobel Prize centennial 4

Editorial Address: lyn.nikoloff@xtra.co.nz
Editorial Board: Barbara Benson, Suzanne Boniface, Beverley Cooper, Mavis Haigh, Rosemary Hipkins, Chris Joyce.
Journal Staff:
Editor: Lyn Nikoloff
Sub editor: Teresa Connor
Cover Design and Typesetting: Pip's Pre-Press Services, Palmerston North
Printing: K&M Print, Palmerston North
Distribution: NZ Association of Science Educators

NZASE Subscriptions (2008)
School description                          Roll numbers   Subscription
Secondary school                            > 500          $160.00
Secondary school                            < 500          $105.00
Intermediate, middle and composite schools  > 600          $160.00
Intermediate, middle and composite schools  150-599        $70.00
Intermediate, middle and composite schools  < 150          $50.00
Primary/contributing schools                > 150          $70.00
Primary/contributing schools                < 150          $50.00
Tertiary Education Organisations                           $160.00
Libraries                                                  $110.00
Individuals                                                $50.00
Student teachers                                           $25.00

Subscription includes membership and one copy of NZST; extra copies may be purchased for $9.00 per issue or $25 per year (3 issues). All prices are inclusive of GST. Please address all subscription enquiries to the NZASE, PO Box 1254, Nelson 7040. Subscriptions: nzase@confer.co.nz
Advertising: Advertising rates are available on request from nzst@nzase.org.nz
Deadlines for articles and advertising:
Issue 120 - Carbon: 20 December (publication date 1 March)
Issue 121 - Sound: 20 April (publication date 1 June)
Issue 122 - Light: 20 August (publication date 1 October)
NZST welcomes contributions for each journal, but the Editor reserves the right to decide which of the articles received are published. Please contact the Editor before submitting unsolicited articles: nzst@nzase.org.nz
Disclaimer: The New Zealand Science Teacher is the journal of the NZASE and aims to promote the teaching of science and to foster communication between teachers, scientists, consultants and other science educators. Opinions expressed in this publication are those of the various authors and do not necessarily represent those of the Editor, Editorial Board or the NZASE. Websites referred to in this publication are not necessarily endorsed.

Water
Marine biotoxin testing 7
ACC links world's oceans 9
Biodiversity in stream invertebrate communities 11
Reverse osmosis 14
Recreational fishing impacts on fish welfare 17
Fish tagging 20
Measuring water quantity and quality 23
Stormwater management on the move in NZ 26
International education comment
Using socio-scientific issues in the classroom 30
Regular features
Education research: Nature of scientific inquiry 33
History Philosophy of Science: Hume on induction 37
Resources: Coastal Explorer 40
National Library 41
Just for starters 42
Science News 22, 29
Subject Associations:
Biology 43
Chemistry 44
Physics 45
Primary Science 46
Science/PEB 47
Technicians 48

Photograph of a Snapper in the Leigh Marine Reserve, Auckland. Photograph courtesy of John Montgomery.

editorial


water is the new oil!!!

We are moving into an era that will be constrained by access to good quality water, and those who have control over water resources will wield huge economic and political clout. Already there are investors out there buying up water rights. The first impact for the average person will be an increase in food prices as irrigation becomes more expensive; this will be coupled with a huge increase in household and commercial water rates to better reflect the cost of this resource. In the next ten years water resources will be traded in the same way that oil and gold currently are. And water will be the new commodity that everyone will want to invest in.

All too often in our science classrooms we use contexts that are far removed from the world of commerce and government. Yet water availability, quality and supply will become global issues during your students' lifetimes. And those individuals who have access to an unlimited supply of water, or who control it, will want to protect that right. So forget the oil wars of the nineties and early this century; in the coming years it will be water that we will be fighting for.

You might think this is far-fetched, but I can assure you it is not. Try talking to farmers on the Heretaunga or Canterbury plains about irrigation rights, or mention to them the Resource Management Act. You will soon realise why water is becoming a number one farming issue. Dairy farmers are already paying the cost of the impact of their farming practices on waterways by being forced to treat run-off and to fence streams. And not surprisingly, the dairy farmers are not happy about this additional cost, which they say affects the economic viability of dairy farming. But it's the consumer who is now paying dearly for this.

So, water is a fantastic context to bring into our science classrooms, because the issues impacting on its supply and access encompass social, economic, political and ethical issues as well as science. And in this issue, Mary Ratcliffe's (University of Southampton) article entitled 'using socio-scientific issues in the classroom: opportunities and challenges' guides teachers in bringing such discussions into their classrooms.

Also in this issue, we have put together a variety of articles about water: reverse osmosis; tagging fish; aquaculture; stormwater; water quality monitoring; biodiversity in stream communities; the Antarctic circumpolar current; and a fascinating article about fish welfare and recreational fishing. Every one of these articles provides a context for bringing water and its issues into the science classroom.

It's time to celebrate the centennial of Ernest, Lord Rutherford's Nobel Prize, and I encourage you to share his inspirational story with your students.

Talking of inspirational people - I would like to thank all the contributors for their wonderful support of the NZST, without their generosity we would not be able to bring you the quality of articles that are featured in this issue. I know you appreciate, as I do, the commitment required to write such an article. If you can find a few moments, please send them an email of thanks – it is much appreciated by the authors. Over the past year I have indeed been privileged to work with so many passionate scientists and educationists and to all of you - thank you for your ongoing support of the NZST. And I would like to thank the advertisers for their continuing commitment to the NZST. I would like to acknowledge the wonderful support the NZST receives from Jenny Pollock, NZASE President and Rosemary Hipkins, NZCER. It was time to say goodbye to our printers, Stylex Print, in July when the new owners closed the printery and made most of the staff redundant. It was a loss I felt keenly, as I had valued their craftsmanship and expertise. However, there is a silver lining – Philippa Proctor is still our typesetter, as NZASE have contracted her in that capacity, and our new printers, K and M Print have employed the former manager of Stylex Print, Raymond Jones to manage our account. We extend a warm welcome to Philippa Proctor and the team at K and M Print, including Grant Funnell and Raymond Jones…we look forward to a long association with you all. And to Teresa Connor, our subeditor (and non-scientist whom I use as a litmus test for articles – if Teresa enjoys the article and learns something, then I know it’s a good article), thank you for your ongoing support and enthusiastic encouragement for the NZST. Finally, to you all thank you for your readership and support of the NZST. I know that at times teaching feels more akin to the chaos theory than a fulfilling career, but you do make a difference – and remember that every author in the NZST once had a science teacher who inspired them. And I hope this year’s issues of the NZST have been both inspirational and a rollicking good read. Wishing you all a relaxing and enjoyable summer vacation. Kind regards

Lyn Nikoloff Editor



Welcome to you all, and I hope that the winter hasn't been too harsh in your area. Firstly, congratulations to the organisers of this year's successful SciCon. Te Papa was a great venue and Wellington is always a fun city to visit. The keynote speakers, as usual, gave us much to think about, and the workshops were very interesting.

In this report I want to talk about the alignment project that has been directed by Cabinet and is being coordinated by the Ministry of Education and NZQA. As many of you will have realised, all subject associations have been contracted by the Ministry of Education to align subject Achievement (AS) and curriculum-based Unit (US) standards with the relevant curriculum learning areas. Levels 1, 2 and 3 NCEA standards are to be aligned with Levels 6, 7 and 8 of the new curriculum. The curriculum and the aligned standards are to be implemented in 2010.

It is important that teachers realise what has been decided by the Ministry of Education and NZQA. They have determined that:
• there are still to be twenty-four Achievement Standard credits per subject per level
• there are to be no more than three external standards assessed in a three-hour examination session. NZQA has determined that a minimum of one hour must be available for the assessment of any standard, however many credits, so that candidates have time to demonstrate their ability without undue time pressure. (Apparently analysis of examination data has shown that candidates complete more of their papers effectively when there are only three; with more papers, less is completed and pass rates fall.)
• one credit should take about ten hours of learning, practice and assessment for an average candidate – which is about five hours of class time
• there should be very little overlap of any standards with other achievement or unit standards
• achievement standards still make up the twenty-four subject credits, but relevant unit standards will be retained if they align with the curriculum and don't overlap
• the retained unit standards can have Achievement with Merit and Excellence if the standard warrants it
• Achievement Standards can be changed from internal to external assessment, or vice versa, if there is a more valid way to assess a particular standard. At the moment the Ministry of Education and NZQA aren't allowing there just to be 'standards,' but this may change in time.

It is intended that this alignment will:
• address any duplication and credit parity issues between standards, plus consistency, fairness and coherence
• include subjects that don't have AS directly linked to the curriculum, e.g. Agriculture and Horticulture
• develop recommendations for changes to the

standards and consult widely on these.

All AS and US standards:
• must be needed and must not duplicate another standard
• should have clear and achievable learning outcomes – what candidates who have achieved the standard will know or be able to do – which can be expressed as concepts, knowledge, skills, or competencies
• must balance flexibility and coherence. A flexible standard is one that is not too context-bound but can be used across a variety of contexts. Coherence means that a standard should be well integrated with other standards to form integrated assessment for a course
• incorporate key competencies
• will have assessment conditions incorporated into second-tier material that will accompany the standard. All standards will have this, regardless of whether they are new or not. This means that it will be clearly stated, for example, how many reassessment attempts a student is able to have for an internal assessment and whether they can go not only from Not Achieved to Achieved but also from Achieved or Merit to Merit or Excellence. (This is a positive move that should address key areas of concern among many teachers.)

The Ministry and NZQA have also indicated that they would like to see content reduced in Science standards and the Nature of Science skills emphasised more.

Science faces several challenges in this alignment. Firstly, we have to work within the constraints set by the Ministry and NZQA. Secondly, the Nature of Science strands are the overarching strands and consequently must become more important in assessment. Thirdly, what is the place of the Planet Earth and Beyond strand? At the moment we are working through these with the Ministry and NZQA. I am sorry that I don't have anything definite to report at this time.

We are very aware that teachers are concerned about more changes and the impact on their workload. We have been assured that resource material will be developed next year and we are hoping that this arrives in schools in time for teachers to prepare themselves. We also wish to assure teachers that they will still be able to teach the separate sciences of Biology, Chemistry and Physics and also develop novel courses to suit their students. Finally, standards must support good teaching and learning and foster community and sector confidence. There will be widespread consultation on whatever proposals are developed. Please make sure that you and your departments answer any questionnaires and attend any consultation days – look for these on our website: www.nzase.org.nz later in the year.

Good luck for the rest of the year.

Jenny Pollock
President

from the president's desk

impacts of the alignment project

celebrating Rutherford


Nobel Prize centennial This year marks the centennial of Rutherford’s Nobel Prize, the first for a person educated in New Zealand and the first for a failed schoolteacher. While he is well known outside of this country, knowledge in New Zealand of his early research is limited and so the author of Rutherford Scientist Supreme, Dr John Campbell writes this timely account in celebration of Rutherford’s Nobel Prize. The young Ernest Rutherford As a boy growing up on a farm in rural New Zealand, Ernest Rutherford, or Ern (as he was known in his family), learned many practical skills. His first chemistry experiment involved blasting powder (readily available on a Foxhill farm with tree stumps to be removed) whereby he made a small cannon out of the tube from a brass coat rack. Well charged with powder, and with a marble as a projectile, the flimsy device blew up on first use. He was unhurt, but had fate gone the other way − and on several other occasions − the world would never have heard of Ernest Rutherford. By luck, his father’s flax milling endeavours took the family to Havelock, where Ern came under the influence of the village schoolteacher, Jacob Reynolds, the first of his four teachers of influence. Reynolds, a lawyer by training, taught Ern (and other paying students) Latin after school, which would aid his entry into secondary school and university. In the 1880s education was compulsory to the age of twelve and free to the age of fourteen. Secondary schools were private and expensive, and the Rutherford family could not afford to send him to Nelson College. His only hope was by winning a scholarship, which he did on his second attempt and only because Edward Pasley, eight months his junior, crashed in English. Pasley had beaten Ern in geography and history and they had tied in maths. (Pasley became a travelling salesman in Palmerston North). Had Pasley not ‘crashed’ in his English exam, Ern might have accepted the offer made to him of a cadetship in the civil service (he had been placed fifteenth of the two hundred and two candidates for the 1886 Junior Civil Service Examination). In 1887, fifteen-year-old Ern entered Nelson College at the fifth-form level (as befitted his age) where he came under the influence of William Littlejohn, a good mathematician. In science Littlejohn was just a page or two ahead of the students. Ern regularly won prizes (and more money for fees and boarding) in modern languages and literature. In 1888, he passed the matriculation exam for the University of New Zealand, but because he had not been awarded a Junior Scholarship he could not afford to attend university. So he stayed on at Nelson College for another year (1889) during which time he rose to Sergeant in the Cadet Corp, was the lock in the rugby team, and also head boy (the Dux, hence his youngest brother’s taunting him with ‘quacks,’ which ceased after a quick hiding).

A student at the University of New Zealand On his second attempt in 1889, Ern was awarded one of the ten Junior National Scholarships to the University of New Zealand. At Canterbury College, he came under the influence of the professor of mathematics, Cook, who drilled his classes, plus the professor of chemistry and practical physics, Alexander Bickerton, who taught Ern to think and inspired him to enter research. All BA students at that time studied equally in six subjects, four being examined after the second year and the other two in the final (third) year. Mathematics and Latin were compulsory and so he chose applied mathematics, French, English and physics as his other four subjects. It is interesting to note that at this time the BSc degree, which didn’t have compulsory Latin, was still relatively new, and that BA students could only study two science subjects. Ern was a good student but only on a par with others such as Willie Marris, who beat Rutherford in mathematics. Willie was a classics scholar who, after graduating with a BA, entered the Indian Civil Service exams and rose to be Sir William Marris, Governor of Assam. Another fellow student, Apirana Ngata, was the first Maori to attend Canterbury College where he studied law. He became a politician, was knighted, and his portrait is on our $50 banknote. While at university, Ern also won the Senior Scholarship in mathematics that allowed him to stay on for another year (1893) during which time he took honours (Masters) in both mathematics and in experimental science. By this time he was boarding with a widow, Mary Newton, whose husband had drunk himself to death. Mary was none other than the right hand woman to Kate Sheppard, the leader of the Woman’s Christian Temperance Union, an organisation which realised that the only way women would have a say in the control of alcohol was if women had the vote. In 1893, the women of New Zealand were first granted the vote, the first country in the world to do so (Ernest Rutherford was old enough to be on the electoral roll). Thanks to his lodgings, he had an insider’s view of this momentous occasion. Candidates entering for honours in physical science had to enter the exam room with a note from their professor that they had carried out original research. So Ern had to find a research project. Professor Bickerton had developed a theory of astrophysics (the partial impact theory) which he thought could explain all astronomical observations such as Nova, and indeed life itself. So he suggested that Ern study the electrical synthesis of the nitro-compounds of hydrogen, carbon and oxygen. (Was this to do with the origins of life? In the 1950s Miller and Urey attained world fame in carrying out such experiments). Ern declined because he didn’t have a chemical background. Instead, he chose to extend an undergraduate experiment measuring the magnetism of iron to study whether the results also held for rapidly-cycling


In 1894, his second year of research, Ern extended his magnetic research to even higher frequencies using heavily damped oscillations − firstly from a discharging capacitor and later from a Hertzian oscillator − to reach even higher oscillating current rates (Refer Figure 2). During this work he invented a simple device for detecting the passage of a current pulse of very short duration, down to about one two-hundredthousandths of a second. This involved placing a steel needle in a small coil in the circuit and using a sensitive magnetometer to detect that the magnetism of the needle had changed. He slowly dissolved the surface of his iron needle to show that at high frequencies only a thin surface skin was magnetised and the magnetism direction reversed lower in this layer. (Refer Figure 3)


magnetizing fields. He was inspired by Nikola Tesla who had come to world notice in August of 1893, through publicly demonstrating the transmission of electrical power without wires. (A discharge tube glowed when held near his high-frequency, high-voltage, transformer). Alternating currents were the ‘high technology’ of the time. Ern made a mechanical device which could switch an electric current off then on within one hundredthousandth of a second. With this he eventually showed that iron exhibited quite appreciable magnetic viscosity in rapidly changing fields. The brilliance of Ernest Rutherford as a researcher was evident from this first year of research, during which he was mostly self-taught, as demonstrated by the skill and thought that went into the construction of his timing device. (Refer Figure 1) Employment now loomed. But there were few jobs for physical scientists in New Zealand, except as a government analyst in one of the main cities keeping miners and the food industry honest (and laying the earliest groundwork for CSI TV programmes). Despite applying on several occasions to be a schoolteacher, Ern missed out on permanent employment at both New Plymouth Boys’ High School and Christchurch Boys’ High School. He did relieve for a term at the latter. The only surviving account is from a boy in a junior class, who wrote that Ern couldn’t control the class and was a bit advanced for them. Understandable when you consider that his only previous teaching experience was tutoring a few Canterbury College students in maths and physics. In 1894, an Exhibition of 1851 Scholarship offered biennially for one graduate enrolled at the University of New Zealand, was available. So Ern returned to Canterbury College and enrolled for a BSc. This newish degree allowed students to avoid Latin. He needed two more science subjects to add to those of his BA, so he studied chemistry and geology. (This forms the basis for a good Trivial Pursuit question. Which two subjects did Ernest Rutherford take for his BSc degree?)

Figure 2: Decay of oscillation.

Figure 3: By dissolving away the needle surface Rutherford showed that the magnetism was in a thin surface layer. With regard to the 1851 Scholarship, an optimist would say Ern came second; a pessimist, last. For there were only two candidates and the nomination was awarded to the other, who was doing much more useful research for an industry of national importance. James Maclaurin had developed, and published, the cyanide method of extracting gold from rocks, a method still used today. Maclaurin had to decline the nomination because his job as government analyst in Auckland couldn’t be held for him during the two years he would have to be away. Maclaurin went on to lead the old Chemistry division of the Department of Scientific and Industrial Research. (His brother became President of the Massachusetts Institute of Technology). So Ern was awarded the scholarship; not exactly by default, as his work was regarded as excellent. The scholarship allowed the holder to travel anywhere in the world to research in a field important to the Nation’s industrial interests. (And this was another first, which continues to this day, with scientists still having to lie about the national importance of their research in order to get funding).

The Cambridge years

Figure 1: Rutherford’s brilliance as a researcher shows in the detail of his timing device.

In 1895, Ern took up his scholarship with JJ Thomson at Cambridge University’s Cavendish Laboratory. He had chosen there because JJ had written one of the electrical books Ern had used in his research. And within five months, Ern held the world record for the distance over which a wireless electric-wave was detected − half a mile. This record came about because of two events. In order to determine how sensitive his detector was, Ern had shifted his magnet detector of short current pulses


from the transmitting side of the Hertzian oscillator to the aerial side. And an Irish friend had told Sir Robert Ball, the director of the Cambridge Observatory, of Ern’s experiments. Ball, the scientific advisor to the Irish Light Association, which looked after the lighthouses around the Irish coast, hurried to advise Ern that if he could get the distance to a reasonable one he would solve the problem of enabling ships to detect a lighthouse in fog. Ern wrote to his girlfriend back in New Zealand that fame and fortune awaited. Meanwhile, JJ sounded out financiers, who concluded that an impossibly large investment would be needed to commercialise wireless telegraphy, because telegraph lines on land and undersea were already extensive. But it was two other events which saw Ern abandon wireless telegraphy work. (Had he not done so, he would have become moderately well known in technical circles but not as famous as he is today). The first event was that JJ Thomson, realising how good Ern was, invited him to join Thomson’s own research into gaseous conduction of electricity. So from 1896, Ern helped JJ with experiments on why putting an electrical discharge through a gas turned a good electrical insulator into a good electrical conductor. (In 1897, JJ announced the discovery of the electron, the first object smaller than an atom. Ern was an immediate convert to sub-atomic particles and this became his life’s work for which he has enduring fame). For these experiments, Ern initially used ultraviolet light to ionise the gases he was studying. But whilst he was still carrying out his long distance experiments, two accidental discoveries were announced that changed physics forever, which was also the second event that caused Ern to drop wireless telegraphy as his main field of research. Roentgen in Germany accidentally discovered X-rays and Becquerel in France accidentally discovered radioactivity. While X-rays went into immediate service worldwide in medical physics, radioactivity was a lesser curiosity. Ern used both to ionise his gases, but quickly changed to trying to understand the peculiar nature of radioactivity. Very quickly he showed that ionising rays from radioactive materials seemed to be of two sorts. One, which he called alpha rays, was highly ionizing and easily stopped; whereas the other, which he called beta rays, wasn’t as ionizing and had more penetration. Although his scholarship had been extended for a third year, his time at Cambridge came to an end. Ern had hoped to obtain a Fellowship to allow him to stay on. But Cambridge had a rule aimed at appeasing its own graduates, who saw non-Cambridge graduates such as Rutherford as a threat to their own chances of a Fellowship. The rule meant that non-Cambridge

Cartoonists invented metal clothes for those who didn’t wish to appear nude in X-ray photographs.


graduates couldn’t apply for a Fellowship until five years had elapsed, compared with the three years for Cambridge graduates. So Ern left Cambridge. The following year Cambridge changed that rule because they knew what they had lost.

Nobel Prize in Chemistry

In 1898, Ern was appointed to lead physics research at McGill University in Canada in order, as he was told, "To knock the shine off the Yankees." And lead he did. He quickly found that radioactive thorium gave off a radioactive emanation. He had discovered radon. He gave his first research student, Harriet Brooks, the MSc topic of using diffusion to determine the atomic mass of the emanation. This put him on the track of realising that radioactivity was the spontaneous disintegration of some heavy atoms into slightly lighter ones, with the emission of rays/particles of enormous energy. He was the first to produce the growth and decay curves for radioactivity, which now feature on the New Zealand hundred dollar banknote. (Refer Figure 4)

Figure 4: Graph of the exponential growth and decay curves.

Ern was required to carry out his own chemical separations until he was joined by a specialist chemist, Frederick Soddy, in April 1901. They worked out several of the radioactive decay chains. Ern used these decay chains and curves, together with the amount of helium gas in a mineral that contained radioactive elements, to date minerals and the Earth. Later, when it was realised that the final decay product in the chain that had started with uranium was stable lead, he used uranium/lead ratios to date minerals.

He left McGill for Manchester in 1907, but not before being nominated for a Nobel Prize, which was awarded in 1908 − in chemistry. As Ern told his mates, it was the quickest transformation (physicist to chemist) that he had ever met. The citation was "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances." It should be noted that his was the first Nobel Prize awarded for research carried out in Canada.

Ern was to have many more great discoveries, but that is another story. Suffice it to say that the New York Times eulogy in 1937 sums up his legacy: 'It is given to but few men to achieve immortality, still less to achieve Olympian rank, during their own lifetime. Lord Rutherford achieved both.'

For further information contact: john.campbell@canterbury.ac.nz and visit: www.rutherford.org.nz



When the government banned the export of all New Zealand shellfish in 1993, they couldn’t have known that what seemed like a tragedy at the time, would turn out to be a blessing in disguise, as Donna Harris, Executive Administrator at Cawthron explains: With the discovery of the harmful algal bloom dinoflagellate Gymnodinium cf mikimotoi the shellfish industry and the government pulled out all the stops. They imposed a national ban on exports and harvest, sent out helicopters to take water samples around the country and spent over $3 million in testing at the same time as millions of dollars in revenue was being lost. It was a valid reaction considering that contaminated shellfish had the potential to destroy our safe, clean and green reputation overseas, and the industry. Although it would later turn out that only two regions were affected by the toxin, at the time the blanket ban was the only option because of the lack of information available. Today, the mass ‘close down’ would not happen for two reasons. First: regular toxin testing has been introduced; and second: Cawthron has developed new testing methods to replace the ‘mouse methods’ that use Liquid Chromatography - Mass Spectrometry (LCMS) and has proven to be more accurate and faster.

Wake-up call Prior to 1993, routine marine biotoxin testing was non-existent in New Zealand, even though there was an awareness of the potential threat of toxic blooms. Cawthron Senior Scientist, Lincoln Mackenzie points out that industry began to fund research into biotoxins in the late ‘80s. “We knew there were toxin-producing species in New Zealand waters well before ‘93. The algal bloom that practically wiped out the salmon farms in Big Glory Bay, Stewart Island in January 1989 was the wakeup call to the industry that these types of problems could occur. The industry-funded Cawthron phytoplankton monitoring programme dates from this incident.” Dr Lesley Rhodes, a Senior Scientist and Leader of the Foundation for Research Science and Technology (FRST) Seafood Safety Programme, insists that it was the impact of the big ’93 scare that created a sense of urgency for the industry and regulators to protect the burgeoning shellfish sector. “We were given the opportunity and resources to find ways to pre-empt problems. Our task was to find out what toxins we have in New Zealand waters, and to carry out toxological studies to determine which compounds needed to be regulated for and which are of no risk to humans. This has led to the current situation where we have a prioritised list of compounds we’re monitoring to ensure our seafood is absolutely safe to eat.” With the discovery of new toxins, and the potential for more, in our waters, the industry and the government needed to ensure more regular, accurate monitoring. The first marine biotoxin testing programme in this country was started in 1993, using the existing world standard method of mouse bioassays where mice reactions to shellfish extracts containing suspected toxins are measured.

water

marine biotoxin testing

However, as Cawthron Scientist Paul McNabb points out, the mouse bioassays were no longer working. "Mouse tests were producing a large number of positives which didn't seem to be associated with any toxic algae – the so-called false positives. Yet, if the test showed as a positive, areas were closed down causing big disruption, often without toxins present. Also, the mouse bioassays are either negative or positive – there are no shades of grey or degrees of toxins present, and that was a huge problem for industry who wanted to know when there were low levels of toxicity so they could take actions in advance." The mouse tests were also slow, taking up to five days for results. The list of problems with the tests, and the increasing discomfort about using mice, made it imperative that something change.

Search for better tests Because the world standard wasn’t good enough, Cawthron and industry decided the only way to move forward was to start developing a better alternative. They explored adapting LCMS technology. The highly sensitive LCMS equipment allows the detection and characterisation of organic molecules. It combines High Performance Liquid Chromatography (HPLC) systems of separation with the detection power of a mass spectrometer, which leads to speedy identification of a wide range of toxins. The technology is a common tool for research, and while one or two researchers overseas had tried LCMS for testing, Cawthron became the first in the world to develop LCMS systems for high volume routine testing of biotoxins. Cawthron’s LCMS testing is setting the pace globally for industry standards. The Marlborough Shellfish Quality Programme (MSQP), which is the largest shellfish grower management centre in the country, collects and tests approximately 6,000 water and shellfish samples each year. The programme operates 365 days a year − such is the importance of ensuring that marine biotoxins are being tested for. MSQP Manager Helen Smale says it’s raised the credibility of the whole industry. “We’re light years ahead from ’93. New Zealand is now a world leader in biotoxin management. Cawthron’s LCMS testing has allowed us to ensure market credibility. The secret is that we have a partnership. Here, we work differently from the rest of the world. Our industry pays for the biotoxin monitoring, not the government, which means we have a higher awareness and involvement in managing the solutions than when governments pay. In this case the industry, the researchers and the regulatory bodies worked together. This allowed us to think outside the square, and come up with solutions that were appropriate and robust.”

Investment pays off It was a key decision by Cawthron to invest in LCMS testing. CEO from 1989–2006, Graeme Robertson says they needed a guaranteed market for the testing. “When we did the sums we worked out it didn’t need to cost more than existing testing − as long as there was a high enough volume of samples − so we signed up contracts with grower management centres who charged a


levy to growers and used that money to carry out the surveillance.” The LCMS machine was installed at Cawthron in December 2000, but it took over a year before Cawthron was testing its first ‘fee paying’ sample, and four years before it was finally validated and accepted internationally. Graeme Robertson says a transparent partnership with industry made the investment worthwhile. “It took us four years. But we set it up the right way. We had the right instrument, the right people and the right relationship with the customers – the industry. To make the relationship committed, we promised and delivered complete financial and scientific transparency, sharing all of the costs structures, all the way through.” MSQP’s Helen Smale says it was a big step for everyone involved. “We had the eyes of the world on us. It was a big step for Cawthron because they were investing in technology where there wasn’t a 100% guarantee that it would be accepted. The regulators had to develop new validation processes for the testing, because there wasn’t anything existing available in the biotoxin sphere anywhere else. Industry took a risk as well, because the new methodology had to be accepted by our trading partners, or market access was at risk. So, all the parties took a risk, but it was a calculated risk, and we made sure we did everything absolutely properly − i’s dotted and t’s crossed so it would stand up to scrutiny internationally.”

The validation challenge In 2001, Senior Scientist Dr Patrick Holland joined the team to develop testing methodology that would meet international regulatory standards. This became the biggest challenge. “We were frustrated because we knew we had a better test and we wanted to be able to use it and apply it to the whole industry and solve all their problems, but MAF and New Zealand Food Safety Authority were very cautious about allowing us to do that and probably quite rightly so.” While that process of validation was continuing, with the support of individuals such as Teresa Borrell from Sanfords and Helen Smale (MSQP), the regulators allowed shellfish screening tests from 1994, as a research programme to run parallel to the mouse tests. It was the first step. In conjunction with algal research, the testing began detecting toxic varieties of phytoplankton which had never been discovered here, and Graeme Robertson says the new programme became more and more accepted. “This was pretty advanced stuff because most other countries weren’t doing this as they only tested for domestic use. We were testing for export which was much more difficult and effectively a technical barrier to trade.” Finally, after four years of refining the testing, proving and re-proving the validity of the methodology, routine biotoxin testing using LCMS was accepted by our trading partners in 2003. The shellfish industry was delighted. “It changed how we managed biotoxins,” says Helen Smale. “It’s now a dream because the sensitivity of the LCMS testing is so much more refined than the mouse bioassay tests. We can see toxicity coming up from low levels, and we can put measures in place to stop harvesting well before they reach regulatory levels, which means we don’t have recall. That translates into far more confidence in our markets and no lost income.”
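The management logic Helen Smale describes can be illustrated with a small, entirely hypothetical example in Python. The weekly toxin concentrations, the regulatory limit and the early-action trigger below are invented for illustration only; they are not Cawthron's methods or any real regulatory values. The point is simply that a quantitative result lets growers see trouble coming, whereas a pass/fail result cannot.

```python
# Why a quantitative result beats a pass/fail one for managing harvests.
# All numbers below are hypothetical, for illustration only.

weekly_toxin = [0.02, 0.03, 0.05, 0.09, 0.14, 0.19, 0.22]  # mg/kg flesh (made up)
REGULATORY_LIMIT = 0.20     # hypothetical closure level
EARLY_ACTION = 0.10         # hypothetical "start managing now" trigger

for week, conc in enumerate(weekly_toxin, start=1):
    mouse_style = "POSITIVE" if conc >= REGULATORY_LIMIT else "negative"
    if conc >= REGULATORY_LIMIT:
        quantitative = "over limit - close harvest"
    elif conc >= EARLY_ACTION:
        quantitative = "rising - plan ahead, increase sampling"
    else:
        quantitative = "ok"
    print(f"week {week}: {conc:.2f} mg/kg  pass/fail: {mouse_style:8s}  quantitative: {quantitative}")

# The pass/fail column only flips in week 7; the quantitative column flags the
# upward trend two weeks earlier, which is the early warning the growers wanted.
```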

Research benefits

There has also been a huge scientific benefit. Dr Holland says the LCMS testing has been a boon for research.

“It’s revealed around 20 new species of phytoplankton toxins so far. LCMS gave us a lot of information very quickly—information on blooms that hadn’t been available before. Lesley and Lincoln already had a lot of knowledge of the micro-algae, but now they could start to test them. It was very revealing about what was going on because LCMS allows you to identify the toxins when often there’s a mixture. And we were able to measure the amounts, so the two go together really well. Being sure of what you’re seeing is important and knowing the amount relates to the regulatory requirements.” Graeme Robertson says if you made a list of all the algal toxins that have been discovered, New Zealand has almost all of them, and our list is twice as long as any other country in the world. “That’s because we’ve been looking. I mean it’s partly because we also have a range of habitats from sheltered to exposed waters and a temperate climate, but lots of other countries do too. The first time we found them we thought they had been imported into New Zealand from somewhere else, but we find now that lots are native and actually seem to be endemic to New Zealand.” Today, Cawthron Scientists Paul McNabb and Dr Holland sit on international advisory expert panels for both the US and European regulators which recognise that Cawthron scientists are world experts in biotoxin testing application for LCMS, and are training the regulators.

Increasing opportunities It’s now eight years since the installation of the first LCMS machine, and Cawthron has just invested in a second machine to increase capability. It will provide them with back-up and an increased capacity for both testing and research. Paul McNabb says when they started there was a lot of pressure because of the huge investment. “As a business, it’s really matured to a point where it’s financially sustainable, and it’s a standalone unit within Cawthron.” Cawthron’s current CEO, Gillian Wratt, says Cawthron is proud of its achievement with biotoxin testing, and the recent investment in the second LCMS machine is affirmation that this science is here to stay. “Cawthron has gained international respect for this work. We made the world standard by not simply doing what the rest of the world was doing but by challenging it and looking for better solutions alongside our industry and regulatory partners.” She says the LCMS testing has grown into an international commercial operation. “We’re doing work for the Australians and we’ve acted as consultants to other overseas aquaculture industries.” And the research is expanding as well. The Cawthronled FRST Seafood Safety Programme which includes AgResearch, Crop & Food and ESR, was set up in 2007, and is working with industry and regulators to develop a comprehensive approach to seafood safety. Cawthron is also exploring real-time remote monitoring as a prospect, and is collaborating in research with AgResearch to define more closely the true potential for human harm from the toxins. The joint toxicology work is also underpinning the setting of regulations worldwide for toxic compounds. This investment in research shows how marine biotoxin testing has cemented its key role within New Zealand aquaculture in only fifteen years. Brought about through a successful professional partnership between the industry, regulators and the science teams, it has enabled innovative research solutions to become vital, everyday seafood safety tools. For further information contact Donna.Harris@ cawthron.org.nz



ACC links world's oceans

The Antarctic circumpolar current is a vital link between the world's oceans, as Jenny Pollock, NZ Sciences, Maths and Technology Fellow 2008 at NIWA, and Mike Williams, National Institute of Water and Atmospheric Research, explain:

As New Zealanders, we tend to think that we are at the edge of the world, with nothing but windswept sea between us and Antarctica. We are very aware of our dynamic landscape, but often don't realise that we are also in the middle of vast, restless oceans, through which flow major currents that control the world's climate. An ocean current is like a huge river within the ocean, responsible for the large-scale transport of ocean water and with it heat, salts, dissolved gases, nutrients and marine life. The primary driver of ocean circulation is solar radiation, which sets up the other drivers of the ocean: wind and density gradients. Surface currents, which are generally no deeper than 10% of the ocean's depth, are driven by wind. Deep currents are driven by gradients in density, density being a function of salinity and temperature. The Earth's spin (the Coriolis effect) and the topography of the ocean floor strongly affect the direction in which currents flow.

Just south of New Zealand, in the most inhospitable part of the world, flows an ocean current that completely circles the globe – the cold Antarctic Circumpolar Current (ACC). This is a huge current formed by persistently strong westerly winds, nicknamed the roaring forties, furious fifties and screaming sixties by sailors. These winds transfer large amounts of momentum and energy to the current. The ACC flows eastward around Antarctica and connects the Atlantic, Pacific and Indian Oceans. It transports 110–150 × 10⁶ m³ s⁻¹ of water, where 1 × 10⁶ m³ s⁻¹ is roughly equal to the combined flow of all the world's rivers. Unlike other major currents, the ACC reaches from the surface to the bottom of the ocean. It is as deep as 4000 metres and as wide as 2000 kilometres, and consists of a series of linked flows affected by underwater topography.
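To give students a feel for these numbers, here is a quick back-of-the-envelope calculation in Python. The only input beyond the figures quoted above is the Sverdrup, the standard oceanographic unit, defined as 1 × 10⁶ m³ s⁻¹.

```python
# Scale comparison for the ACC transport figures quoted above. Inputs are the
# article's own numbers: the ACC carries 110-150 million cubic metres per
# second, and one million cubic metres per second (one Sverdrup) is roughly
# the combined outflow of all the world's rivers.

SVERDRUP = 1.0e6                       # m^3 per second
acc_low, acc_high = 110 * SVERDRUP, 150 * SVERDRUP
rivers = 1 * SVERDRUP                  # approximate global river outflow (from the article)

print(f"ACC transport: {acc_low/SVERDRUP:.0f}-{acc_high/SVERDRUP:.0f} Sv")
print(f"That is {acc_low/rivers:.0f}-{acc_high/rivers:.0f} times the flow of all rivers combined")

# Volume carried past a fixed line in one day, in cubic kilometres:
seconds_per_day = 86_400
km3_per_day = acc_low * seconds_per_day / 1e9   # 1 km^3 = 1e9 m^3
print(f"At the lower estimate, about {km3_per_day:,.0f} km^3 of water passes each day")
```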

Figure 2: The Macquarie Ridge and the Campbell Plateau showing how the ACC and DWBC are diverted. Courtesy of Lionel Carter (2008).

Figure 1: Antarctic Circumpolar Current and the Deep Western Boundary Currents. Courtesy of Lionel Carter (2008).
Key: ACC: Antarctic Circumpolar Current; DWBC: Deep Western Boundary Current; SAF: Subantarctic Front; SB: Southern Boundary of the ACC.

The ocean floor is not flat and featureless, but contains similar landforms to those found above the surface. Mostly the ACC flows unimpeded, but underwater formations such as ridges and plateaus act as barriers that deflect and alter the flow. Key areas where the flow of the ACC is affected are the Drake Passage between South America and Antarctica, the Kerguelen Plateau in the Southern Indian Ocean, and the Macquarie Ridge south of New Zealand. When the current has to get through small gaps − as found in the Macquarie Ridge − it flows faster, and downstream of the Ridge it collapses into a series of large eddies. These eddies are the oceanic equivalent of atmospheric weather systems, with horizontal scales of hundreds of kilometres and vertical scales of hundreds of metres. These features can be seen by satellites because the warm eddies increase sea surface height and cold eddies decrease it.

Figure 3: The moorings used to gather data at the Macquarie Ridge.

Figure 4: The moorings being brought on board the Tangaroa.

Photograph courtesy of Dr Mike Williams and NIWA.

Photograph courtesy of Dr Mireille Consalvey and NIWA.


By taking vertical profiles of the temperature and salinity of the ocean, oceanographers can map where water has flowed from. They do this by comparing the properties of the water of interest with the properties of different surface waters. The properties of each water mass are set in the formation region through a combination of surface heating or cooling, and evaporation or dilution from rain or snow. This process is fairly consistent from year to year. Once each water mass leaves the surface, its properties remain constant apart from some slow mixing with neighbouring water masses. Because different water masses have different densities the denser ones flow under the lighter water masses. For example, the densest water masses are formed by surface conditions found in Antarctica that cause the water to become very cold and salty. Horizontal boundaries between water masses are called fronts. Across each front there are dramatic changes in the temperature and salinity over a relatively short distance. For example, in the Southern Ocean, the Subantarctic Front is the boundary between salty, warm water to the north and a region of low salinity water which stretches to the Polar Front. South of the Polar Front the water masses are set by interaction with the cold atmosphere and sea ice. The circumpolar Subantarctic Front and the Polar Front are also important for the ACC, as they are associated with most of the ACC’s transport. (Refer Figure 1) The importance of these fronts for the currents is because of the changes in temperature and salinity across the fronts set up the density gradients that drive ocean currents, particularly in the deep ocean. The ACC has layers according to the density of the water masses. The upper part has oxygen-poor water from all the oceans. The middle part is composed of a mixture of deep water from all oceans. The lower and deeper part contains water with high salinity from the Atlantic mixed with salty water from the Mediterranean Sea. Below that is the very cold dense water from the North Antarctic. As the different water masses circulate around Antarctica they mix with other water masses with similar density. The current is effectively mixing and then redistributing deep water from all the oceans. The ACC has a profound influence on the world’s climate because it is part of the global thermohaline circulation, which is driven by the sinking of cold, dense water around Antarctica and the North Atlantic. This cold, dense water is mainly formed as a result of sea ice formation at the edges of Antarctica, because as sea ice forms by sea water freezing most of the salt is expelled as brine, increasing the density of the water below. Density increases further by mixing with deep saline waters that have risen to the surface south of the ACC. These waters lose heat to the atmosphere, cooling even further. This very dense water sinks to the bottom of the ocean and flows northwards, joining with the ACC. Branching off the ACC are Deep Western Boundary Currents (DWBC) that carry this deep water into the Indian, Atlantic and Pacific oceans, travelling 2 –5km below the surface. (Refer Figure 2) The largest of the DWBC flows eastwards past New Zealand, around the Campbell Plateau, past the Chatham Rise along the Kermadec Trench and into the North Pacific. Eventually this water rises to near the surface and moves as a warm equatorial current to be joined by water from the Indian Ocean. It therefore increases in flow as it moves westward into the Atlantic Ocean. 
Then it becomes the Gulf Stream, losing heat to the atmosphere as it moves northwards. This causes the density of the water to increase, sink and flow south in the lower part of the ocean to the Antarctic.

If we could tag a small amount of water and follow its journey around the globe, we would find that most of the time it is isolated in the dark and cold deep ocean. It would only appear on the surface about once every six hundred years, and then only in the Southern Ocean south of the ACC. In the tropics and sub-tropics, a thin surface layer of warm, lighter water prevents deep water from coming to the surface, but south of the ACC this warm layer disappears and no longer stops the upward movement of deeper water. This process ventilates the ocean. When deep water reaches the surface, it gives up heat to the much colder atmosphere and picks up dissolved atmospheric gases, including carbon dioxide and oxygen.
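For teachers who want to show students why the cold, salty water sinks, the short Python sketch below uses a linearised equation of state for seawater with representative textbook coefficients. It is an illustrative approximation only, not the full equation of state oceanographers actually use, and the water-parcel temperatures and salinities are invented for the example.

```python
# Minimal sketch of why cold, salty water sinks, using a linearised equation
# of state: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
# Coefficients are representative textbook values (illustrative only).

RHO0 = 1027.0          # kg per m^3, reference density
T0, S0 = 10.0, 35.0    # reference temperature (deg C) and salinity (psu)
ALPHA = 1.7e-4         # thermal expansion coefficient, per K (varies with T and pressure)
BETA = 7.6e-4          # haline contraction coefficient, per psu

def density(temp_c: float, salinity_psu: float) -> float:
    """Approximate surface density of seawater under the linearised model."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Hypothetical water parcels, chosen only to illustrate the contrast:
parcels = {
    "warm tropical surface water":         (25.0, 35.5),
    "cool Subantarctic surface water":     (8.0, 34.3),
    "near-freezing Antarctic shelf water": (-1.8, 34.7),  # brine rejection keeps salinity up
}

for name, (t, s) in sorted(parcels.items(), key=lambda kv: density(*kv[1])):
    print(f"{name:38s} T={t:5.1f} degC  S={s:4.1f}  rho ~ {density(t, s):7.2f} kg/m^3")
# The densest parcel prints last: the cold, salty Antarctic water, which is why
# it sinks and feeds the deep limb of the thermohaline circulation.
```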

The challenges for researchers

Research on ocean circulation in the Southern Ocean is always going to be very difficult. Not only is the ocean stormy, and good weather hard to come by, but the area to cover is vast. Gathering meaningful data can be compared to trying to find out about a large river by analysing a couple of drops of water every few kilometres. Oceanographers therefore choose their methods and sites for gathering data very carefully. Because the ACC is linked to the three major oceans and is important in global ocean circulation and ocean climate, it is essential that its flow is understood and monitored so that any changes can be detected.

The Macquarie Ridge and the Campbell Plateau create a strategic marine junction which is one of the few places where the ACC deviates from its relentless circling of the globe. This ridge, effectively an underwater mountain range about 2000–3000 metres high, stretches for 1,400 kilometres south towards Antarctica and has been formed where the Pacific Plate meets the Indo-Australian Plate. In 2007, NIWA scientists on board the research vessel Tangaroa dropped nine moorings containing metering instruments into two gaps or 'choke points' in the Macquarie Ridge through which the ACC squeezes. The moorings were over 3500 m long and were anchored to the bottom of the ocean by old railway engine wheels. (Refer Figures 3 and 4) Current-recording meters measured and recorded the speed and direction of the current at fixed positions under the surface, with the intention of building up a picture of how the ACC flows through this Ridge. (A rough worked example of how such point measurements are turned into a volume transport is sketched at the end of this article.) The data on the speed and volume of the ACC were collected continuously for a year and the moorings were picked up in April 2008.

Scientists have been astonished at the speed of the current, which was found to be about 4 km hr⁻¹. This is about the speed an adult would walk quickly, and is very fast for an ocean current. Scientists also took temperature and salinity readings for the first time in this area since the 1960s, looking for climate-related changes. The data collected, although not completely analysed yet, will be used as a benchmark to compare with data from other places, such as from moorings in the Drake Passage. This will give an idea of how much water is flowing out into the Pacific and how much is staying to circulate around the Southern Ocean. The data will also be used to determine potential changes in circulation by measuring changes in salinity, temperature and density of the ocean. Such changes could have an as yet unknown effect on global climate, as the thermohaline circulation is finely balanced. The energy and extent of the deep and shallow flows depend upon a balance between evaporation and fresh water supply, temperature distribution through the ocean, and wind patterns. Any or all of these factors may change as global warming continues.

For further information contact m.williams@niwa.co.nz
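As promised above, here is a rough worked example of how point measurements from moored current meters become a volume transport. Every number below is hypothetical, chosen only to show the bookkeeping; these are not the Macquarie Ridge results, and the real NIWA analysis is far more involved.

```python
# Transport through a cross-section = sum over layers of (layer thickness x
# gap width x mean speed normal to the section). Hypothetical numbers only.

gap_width_m = 50_000.0                     # hypothetical choke-point width (50 km)
layer_thickness_m = [500, 1000, 1500, 1000]   # depth bins spanning the water column
mean_speed_m_s = [1.1, 0.7, 0.4, 0.2]         # hypothetical mean eastward speed per layer
                                              # (1.1 m/s is roughly the 4 km/hr quoted above)

transport_m3_s = sum(h * gap_width_m * u
                     for h, u in zip(layer_thickness_m, mean_speed_m_s))
print(f"Transport through this one gap: {transport_m3_s/1e6:.0f} Sv (1 Sv = 10^6 m^3/s)")
```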



What affects the diversity of invertebrates in our streams? It’s not how much water, but how it arrives, as Dr Russell Death, an ecologist at the Institute of Natural Resources – Ecology, Massey University, explains: The archetypal view of ecological research is that of a ‘Jane Goodall’ type dedicated scientist who spends months out in the wilderness studying and recording all the details of a population of endangered animal or plant. However, ecological science is not always about exploring the intimate details of the lifestyles of a single species. Often ecological research involves understanding how all the species living in a community interact together, and how changes to one component (e.g. removal of a top predator) can often lead to unexpected consequences in some seemingly unrelated component of the community because of a ripple effect through the food web. The unexpected consequences of many of the exotic species introduced into New Zealand are a classic example. Many of the hypotheses ecologists spend their time exploring are as complicated and quantitative as anything in physics, mathematics or chemistry. In fact, many modern science concepts such as fractals, chaos, complexity and emergent properties have all been developed in or by ecologists. I research invertebrate communities that inhabit streams and rivers and what characteristics of those streams determine how many species occur in the communities. One of the biggest influences on the diversity of stream invertebrate communities is the frequency and severity of flood events. Floods have two main effects on invertebrate communities. Firstly, they result in the stones on the bottom of the river being entrained into the water and washed downstream. Most of the insects and other invertebrates that live in amongst the stones have hooks, suckers and streamlined bodies to allow them to hold on even under very fast flows. (Figure 1) However, if the stones they are clinging onto are washed away, they are too. The second effect is the removal of the microscopic algae that grow on the surface of those stones and act as a food resource for the invertebrates.

Disturbance hypotheses
The most popular hypothesis in ecology relating disturbance (as in the case of floods) to the diversity of ecological communities is the intermediate disturbance hypothesis (Connell 1978, Begon et al. 1990). This proposes that when disturbance frequency is high, species will be removed and diversity will be low. However, when disturbances are rare, population sizes build up and the stronger competitors, which can better utilise resources, competitively exclude some species, so diversity again declines. At some intermediate frequency of disturbance neither competition nor mortality can dominate, and diversity peaks.

To test the applicability of the intermediate disturbance hypothesis, I have sampled invertebrates in streams that differ in their disturbance regime. Some streams drain springs and have a very constant flow of water, while other streams are fed predominantly by rainfall and are consequently more flood prone. The degree of disturbance has been assessed by measuring how far painted stones in the streams move over a year; more recently we have been securing radio-tags to the stones and tracking how far they move using radio waves. We have found no support for the intermediate disturbance hypothesis (Death and Winterbourn 1995, Death 2002). Spring-fed streams, which have stable flows, have the greatest diversity, with a linear decline in diversity as the frequency of flood events increases. This suggests that competitive exclusion is not occurring in these stream communities. Thus, in the very stable streams there are no species that grow so prolifically, or monopolise resources to such an extent, that other species are driven to extinction. This outcome contrasts with the often-held view of competition as a dominant structuring force in ecological communities.

Investigating the relationship between disturbance and diversity
The question, then, is what drives this relationship between disturbance and diversity in these stream communities: is it a result of animals being washed away by floods, or the removal of their food resource (algae)? To investigate this, one of my students, Erna Zimmermann, and I have examined streams draining Mt Taranaki. Many streams arise on the mountain amongst the native forest of Egmont National Park and flow down to the coast through farmland where the forest canopy has been removed. Streams inside the National Park under the forest canopy are light limited, and thus algae do not grow as prolifically as they do where the forest has been removed. Again there are streams that differ in their disturbance regime, based on whether they are spring-fed or runoff-fed. Consequently we had sites on a number of streams differing in stability; some of the sites were inside the forest, where light limited algal growth, and others were close by on the same stream but in farmland (though still close enough to the National Park that nutrients and sediment were not affected).

We found a similar linear declining relationship between disturbance and diversity to my previous studies at the sites outside the forest, but no relationship between disturbance and diversity at the sites inside the forest (Death and Zimmermann 2005). We concluded that this was because the invertebrate communities inside the forest did not rely on the algal food resource regrowing before they could recolonise (they feed on forest detritus that is not affected by floods). Outside the forest, where the food webs are based on algae, recovery of the invertebrates is mediated by the regrowing algae. Thus floods affect the invertebrates not so much by physically washing them away as by removing their food resources (Death 2008). It might be possible to investigate similar ideas in a local stream as a school research project.




Figure 1: New Zealand stream invertebrates. Clockwise from top Blephariceridae, Aoteapsyche sp., Zelandoperla sp. and Neozephlebia scita. Photos courtesy Stephen Moore, Landcare Research.

Figure 2: Clockwise from top, Spanish spring, New Zealand spring, New Zealand runoff-fed stream and Spanish runoff-fed stream.



You can remove the algae attached to stones by scrubbing them with a nylon brush or scour pad. Use similarly sized stones: remove both the invertebrates and the algae from some, and only the invertebrates from others (by gently brushing with your hand). Leave them for about a week to be recolonised, then collect each stone by putting it into a net held downstream and count how many different types of invertebrate are on the two types of stone (with and without algae). You could also add a further treatment level by simulating the removal of invertebrates by a flood on some stones (by gently brushing with a hand or brush) and not others, both with and without algae.

Interestingly, the pattern of a low number of invertebrate species in flood-prone streams and a higher number in stable spring-fed streams does not seem to occur everywhere in the world. With another of my students, Pepe Barquin, I have examined the invertebrate communities in springs and runoff-fed streams of Cantabria in northern Spain. This is a very mountainous region of Spain, with a similarly high rainfall to New Zealand spread throughout the year. In fact, if you ignore the deciduous forest surrounding the streams, they look very similar to streams in New Zealand (Figure 2). We found that the invertebrate communities in these springs, just as in New Zealand, had lots of mosses growing, a high biomass of periphyton, and larger numbers of invertebrates than similar runoff-fed streams in the same area. However, the springs had a much lower number of species than the runoff-fed streams (Barquin and Death 2004). There are a number of possible reasons for this. The invertebrate communities in Spanish springs are dominated by amphipods, and these may be egg predators that prevent a number of other taxa establishing in the springs. Alternatively, it may be because many insects in the Northern Hemisphere require thermal cues to mature to adults and emerge. As springs have very constant temperatures (±1°C throughout the year), the lack of these cues may prevent the life cycles of many animals being completed. In New Zealand, adult aquatic insects can emerge at any time of the year; they do not require the same thermal cues, and thus can survive perfectly well in the constant temperature of springs.

Artificial intelligence modelling


Figure 3: Map of predicted QMCI values for the Manawatu-Wanganui region.

Sampling individual streams is very time-consuming, so if we consider our knowledge about which environmental factors affect invertebrate communities, we can use that information to predict the types of communities that occur in streams throughout a region. We are using artificial intelligence (AI) modelling techniques developed in computer science to learn the patterns linking the environment and invertebrate communities. These AI techniques function very much like the brain: they learn by looking at a pattern and trying to repeat it, then seeing how good the fit is to the data. Then you change the model of the pattern a little and see if it is any better. If it is, change it a little more in the same direction; if not, try a change in the opposite direction. Repeat this until the fit is as good as it can be. Modern computers are ideal for this task: lots of small changes made over and over again.

We have been using an AI modelling technique called a Bayesian belief network (BBN) to model the relationship between environmental characteristics and invertebrate communities. There are GIS (Geographic Information Systems) layers for a number of environmental variables associated with all the rivers of New Zealand, developed by NIWA (Snelder and Biggs 2002), that can then be used to extend the model predictions from the sampled streams to all the streams in a region. In Figure 3 I have mapped the QMCI (Quantitative Macroinvertebrate Community Index), a measure of water quality, predicted by a BBN model for the Manawatu-Wanganui region. You can clearly see the higher water quality in the State Forest and National Parks of the region.

We have made considerable progress in understanding the relationship between stream invertebrate communities and the associated environmental characteristics, to the extent that we can make some highly precise models and maps of the invertebrate communities in many streams and rivers. However, we still have a lot more to learn. For example, despite all the concern over high nutrient levels in our rivers and streams, we still don't know how much nitrogen (N) or phosphorus (P) we can add to our streams before the invertebrate communities are adversely affected. I am currently trying to extend the models to predict how much N or P is too much for a river community. Nor, in the face of the increasing need to remove water from rivers for irrigation and hydropower, do we know what frequency of flooding must be maintained to prevent the invertebrate communities being adversely affected. There is clearly still a lot more to learn before we can confidently manage our rivers and streams in the face of increasing anthropogenic pressure.
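The 'nudge the model and keep the change if it helps' procedure described above can be illustrated with a toy example. The sketch below shows only that general idea, using invented data and a simple straight-line model rather than the Bayesian belief network used in the actual study.

```python
import random

# Hypothetical example data: a disturbance score for each stream versus the
# number of invertebrate taxa found there (these numbers are invented).
disturbance = [0.1, 0.3, 0.5, 0.7, 0.9]
taxa        = [28,  24,  19,  15,  11]

def error(a, b):
    """Sum of squared differences between the model a + b*x and the data."""
    return sum((a + b * x - y) ** 2 for x, y in zip(disturbance, taxa))

# Start with an arbitrary model and repeatedly try small random changes,
# keeping a change only if it improves the fit -- the adjust-and-check loop
# described in the article.
a, b = 0.0, 0.0
best = error(a, b)
for _ in range(20000):
    new_a = a + random.uniform(-0.1, 0.1)
    new_b = b + random.uniform(-0.1, 0.1)
    new_error = error(new_a, new_b)
    if new_error < best:
        a, b, best = new_a, new_b, new_error

print(f"Fitted model: taxa ~ {a:.1f} + {b:.1f} * disturbance")
```

Running the loop long enough converges on much the same straight line a statistics package would give, which is the point: the computer simply repeats a very simple adjustment an enormous number of times.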

References
Barquin, J., & Death, R.G. (2004). Patterns of invertebrate diversity in streams and freshwater springs in northern Spain. Archiv für Hydrobiologie, 161, 329-349.
Begon, M., Harper, J.L., & Townsend, C.R. (1990). Ecology: Individuals, Populations and Communities (2nd ed.). Blackwell Scientific Publications, Oxford.
Connell, J.H. (1978). Diversity in tropical rain forests and coral reefs. Science, 199, 1302-1310.
Death, R.G. (2002). Predicting invertebrate diversity from disturbance regimes in forest streams. Oikos, 97, 18-30.
Death, R.G. (2008). Effects of floods on aquatic invertebrate communities. In J. Lancaster & R.A. Briers (eds), Aquatic Insects: Challenges to Populations. CAB International, UK.
Death, R.G., & Winterbourn, M.J. (1995). Diversity patterns in stream benthic invertebrate communities: The influence of habitat stability. Ecology, 76, 1446-1460.
Death, R.G., & Zimmermann, E.M. (2005). Interaction between disturbance and primary productivity in determining stream invertebrate diversity. Oikos, 111, 392-402.
Snelder, T.H., & Biggs, B.J.F. (2002). Multiscale River Environment Classification for water resources management. Journal of the American Water Resources Association, 38, 1225-1239.




reverse osmosis
Reverse osmosis is used not only to extract fresh water from seawater, but also in the food industry, as Ken Morison, Department of Chemical and Process Engineering, University of Canterbury, explains:

Reverse osmosis is a process that is used widely throughout the world to produce fresh water from seawater. It is one of a range of very fine filtration processes that are used to separate molecules of different sizes; the other main types are ultrafiltration and microfiltration. Reverse osmosis is the finest filtration and allows only water to pass through the filtration membrane. (Refer Figure 1)

Osmosis
Before discussing reverse osmosis, it is useful to learn about normal osmosis. Osmosis is a process by which water (or any other solvent) moves by diffusion from areas of high water concentration to areas of low water concentration. Normally the concentration used is the mole fraction of water, i.e. the number of moles of water divided by the total number of moles of all species. Normally the two solutions are separated by a membrane. If the membrane is permeable to water only, we can sometimes observe a volume or pressure change. Such membranes are sometimes described as semipermeable.

Osmosis can be demonstrated easily using a cube of potato that is submerged in concentrated salt solution (brine). The potato contains many cells that have cell membranes that are more permeable to water than to other species. The mole fraction of water in the salt solution is lower than inside the potato, so water diffuses out of the potato into the salt solution and the potato gets smaller. A cube of potato swells when placed in pure water, as the water diffuses into the potato, which contains a slightly lower fraction of water. (Refer Figure 2)

We can calculate the pressure that can be produced once diffusion brings the system to equilibrium. This osmotic pressure is normally given the symbol π (pi):

π = –(RT/V) ln xw

Here R is the gas constant (8.314 J mol⁻¹ K⁻¹), T is the temperature in kelvin, V is the molar volume of water (1.8 × 10⁻⁵ m³ mol⁻¹) and xw is the mole fraction of water in the solution. This equation provides a good opportunity for some practice in converting mass quantities to molar quantities. There is a trick here: one mole of a salt dissociates into two or more moles of ions when in solution, and these ions are counted as individual species. The osmotic pressure of seawater (about 3.5% salt) is about 25 atmospheres (2.5 × 10⁶ Pa). This number indicates that it would take a pressure of 25 atmospheres to stop pure water from diffusing through a water-permeable membrane into seawater.
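For a classroom calculation, the equation above can be evaluated in a few lines of code. The sketch below treats seawater as an ideal 3.5% sodium chloride solution at an assumed 25°C; because real seawater contains a mixture of ions and does not behave ideally, this simple estimate comes out a little above the 25 atmospheres quoted above, but in the same range.

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
V = 1.8e-5     # molar volume of water, m^3 mol^-1
T = 298        # temperature in kelvin (about 25 degrees C) -- assumed

# Treat seawater as 3.5% NaCl by mass: 35 g of salt and 965 g of water per kg of solution.
mol_water = 965 / 18.0        # moles of water
mol_ions  = 2 * 35 / 58.4     # each mole of NaCl gives two moles of ions
x_w = mol_water / (mol_water + mol_ions)   # mole fraction of water

pi = -(R * T / V) * math.log(x_w)          # osmotic pressure in pascals
print(f"x_w = {x_w:.4f}, osmotic pressure ~ {pi/1e6:.1f} MPa (~{pi/1.013e5:.0f} atm)")
```

Students can repeat the calculation for a 1% brine, or for the sugar solution inside a potato cell, to see how strongly osmotic pressure depends on concentration.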

Figure 1: Molecules and cells are retained or can pass through different filtration membranes.


Figure 2: A cube of potato will shrink in salt water, and swell in water, because of osmosis through the cells’ membranes.

Figure 3: Osmosis and reverse osmosis through a membrane.


Reverse osmosis

Reverse osmosis is a process where water is forced to move against the expected direction of osmotic flow. In the case of seawater, if more than 25 atmospheres of pressure is applied to the seawater, water can be forced to move in the opposite direction through a membrane that is permeable to water only. In industrial processes, pressures of up to 80 atmospheres are applied across the membrane. (Refer Figure 3) This process is used to produce fresh water from seawater, or from slightly salty water.

The membranes used for reverse osmosis and ultrafiltration are normally made of special polymers such as polyamide, with a backing that makes them physically strong enough to withstand the pressure. Some of the early membranes were not unlike cellophane, which is still used for jam jar tops: when it is wet, water can diffuse through it, though salts and sugars cannot. All the membranes have a very fine structure that allows only water to pass through. The membranes are normally rolled up into cylinders with seals in the appropriate places. Such a membrane is shown in Figure 4. The largest of these are about 200mm in diameter and one metre long and have about 40m² of membrane area. When a pressure of 80 atmospheres is applied, they produce about 600 litres of water per hour.

Reverse osmosis applications
1. Fresh water production
The largest scale use of reverse osmosis is for the production of fresh water from seawater. Many countries in the Middle East use reverse osmosis to produce fresh water, though the other main method, distillation, is still the most popular.

Israel: Israel has chronic problems over water resources. This called for the construction of a series of plants along the Mediterranean coast to enable an annual total of 400 million m³ of desalinated water to be produced by 2005, mainly for urban consumption. According to the plan, production is intended to rise to 750 million m³ by 2020. The largest plant in the world is at Ashkelon, in Israel, and it produces 320,000 m³ per day. That's enough water each day to fill a tank the size of a rugby field and about 32m deep, and is around 13% of the country's domestic consumer demand. The membrane area required for this is probably about 1 km². The total project cost was approximately US$250 million. The flexibility and high efficiency of the plant has reduced the water cost to US$0.52 per m³. The plant has several 5.5 MW high-pressure pumps and requires so much electricity that a dedicated gas turbine power station, fuelled by natural gas, has been built adjacent to the desalination plant. The provision of a dedicated power plant is a major factor in both safeguarding operational reliability and reducing energy costs, as it offers protection from daily or seasonal demand fluctuations. The desalination system is expected to run at a continuous base load for most of its operation. (Refer Figure 5)

Australia: It was recently announced that Sydney would build a reverse osmosis plant to provide water. The plan is to power it using wind power and only run it when there is sufficient wind. Perth already has a plant, and Melbourne is considering building one also.

Reverse osmosis requires pumps that can produce pressures of perhaps 80 atmospheres, and these consume a significant amount of electricity. It can take about 10 kWh of electricity to produce 1 m³ (1000 litres) of water, but with energy-saving systems this can be reduced to about 4 kWh per cubic metre. The cost of producing fresh water is thus strongly related to the price of electricity. Small reverse osmosis plants are also used on ships, by the military, in laboratories, and in homes to provide pure water.
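As a rough check on these figures, the short sketch below uses only the numbers quoted in this article (a 40 m² module producing 600 litres per hour, and the energy-efficient figure of 4 kWh per cubic metre) to estimate how many membrane modules an Ashkelon-sized plant would need and how much electricity it uses each day.

```python
plant_output_m3_per_day = 320_000   # Ashkelon's quoted daily production
module_output_l_per_h   = 600       # one large spiral-wound module at ~80 atmospheres
module_area_m2          = 40        # membrane area of one module

# Number of modules and total membrane area
modules = plant_output_m3_per_day * 1000 / 24 / module_output_l_per_h
area_km2 = modules * module_area_m2 / 1e6
print(f"~{modules:,.0f} modules, ~{area_km2:.1f} km^2 of membrane")

# Daily electricity use at 4 kWh per cubic metre
energy_kwh = plant_output_m3_per_day * 4
print(f"~{energy_kwh/1e6:.2f} GWh per day, ~{energy_kwh/24/1000:.0f} MW average load")
```

The membrane-area estimate of roughly 0.9 km² is consistent with the "about 1 km²" mentioned above, and the average load of around 50 MW makes it clear why a dedicated power station was built next to the plant.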

2. Waste water treatment
Some processes generate large amounts of waste containing a dilute pollutant. Reverse osmosis can be used to concentrate up the waste for appropriate reuse or disposal while at the same time producing water that is clean enough to discharge onto land or into a river. This concept has been applied in New Zealand to the treatment of water that has been used to wash whey from casein protein obtained from milk. The recovered whey can be further processed while the water is reused or discharged. Domestic effluent can also be processed by reverse osmosis to produce clean water, but the concept is considered by the public to be distasteful in more ways than one.

3. Concentration of liquid foods

Figure 4: A 100mm diameter spiral-wound ultrafiltration membrane (cut open).

There are many foods that get concentrated by removing water as part of the process in producing useful products. One method of concentrating solutions is to boil them and evaporate off the water. This requires a lot of energy, so very efficient multi-stage vacuum evaporators are often used. However, some concentration can be carried out with even greater efficiency using reverse osmosis.




Figure 5: Part of a typical large-scale reverse osmosis plant for water production.

Reverse osmosis has the advantage of being carried out at low temperatures, so the product is not damaged by the temperatures that would be required for evaporation.

Milk: Dairy companies use reverse osmosis to concentrate milk before it is transported by truck to the factories. Where many dairy farms are a significant distance from the processing factory, it is economic to set up a small plant that removes only water and hence reduces the volume that needs to be transported. During cheese making both cheese (curds) and whey are produced. The protein is removed from whey using ultrafiltration, which is very similar to reverse osmosis except that the holes in the membranes are large enough to let water, lactose (milk sugar) and salts through. The product that passes through the membrane is called the permeate. Before crystallisation, the permeate is often concentrated by reverse osmosis to reduce energy consumption.

Other food liquids: Reverse osmosis is used to concentrate maple syrup efficiently; the final concentration is still done by boiling. Fruit and vegetable juice is also concentrated in this way, possibly giving a fresher flavour as there is no heating required.

Conclusion Reverse osmosis is used throughout the world for the production of huge volumes of fresh water. It is also widely used in New Zealand to improve the energy efficiency of a number of dairy processes. For further information contact: ken.morison@canterbury.ac.nz

Relevant websites
Dow Water Solutions: http://www.dow.com/liquidseps/news/20070710a.htm
Ashkelon Desalination Plant: http://www.water-technology.net/projects/israel
Wikipedia: Reverse osmosis: http://en.wikipedia.org/wiki/Reverse_osmosis



How does recreational fishing affect the physiology, behaviour, and welfare of fish? Peter Davie, Professor of Veterinary Anatomy, and Keller Kopf, PhD student, School of Agricultural and Veterinary Sciences, Charles Sturt University – Wagga Wagga, explain:

Introduction
Catching fish by hook, line, rod, and reel is a well-established recreational pursuit. As many as 850,000 New Zealanders, or 25% of the population, engage in recreational fishing, and it has been estimated that globally about 47 billion fish are caught (killed or released) annually by recreational fishers; a statistic similar to the number of poultry killed for human use each year. Recreational hunting and fishing are similar in many respects in that they are designed to capture or kill wild animals. Current hunting ethics regard the immediate death of the hunted animal (i.e. a 'clean kill') as best practice. In this sense, hunting is clearly different from fishing, because part of the sport of fishing is the capture process, often called the 'fight' or 'play' by anglers. Yet longer capture times may be considered a compromise of fish welfare, by prolonging the potential stress, pain, and/or suffering caused by fishing.

New information on fish welfare is arising from research driven by three major areas. First, the growth of fish farming, which provides around 25% of total world fish production: farming animals for human consumption requires slaughter processes that ensure food and operator safety, and there is a growing market for products from humanely farmed and slaughtered animals. Second, research programmes describing the anatomy, physiology, and behaviour of fish suggest that "fish are more likely to be sentient than not." Third, a significant body of literature now exists on catch-and-release fishing, where recreational fishers aim to conserve the fishery by releasing the fish alive.

Can fish suffer?
Or, more importantly, are they consciously aware of their suffering? To be able to suffer, an animal needs to consciously perceive a stimulus as unpleasant, harmful, or painful. This is sentience. Table 1 lists examples of mental states that can contribute to suffering in animals. In order to suffer, an animal must possess a sensory system able to detect noxious stimuli and, importantly, the brain must consciously perceive the stimuli as negative. Stimuli that do not reach consciousness do not cause suffering and, in one view, do not represent welfare compromises. Some of the states listed in Table 1 (e.g. fear) have been identified in fish, and so it seems fish are deserving of welfare consideration.

Impact of fishing gear on fish welfare
1. Hooks
All hooks inflict injury and, by definition, negatively influence fish welfare. Hooks that minimise tissue trauma and reduce rates of internal gut hooking and foul hooking improve fish welfare. The species of fish, the size, shape, arrangement and number of hooks, and the presence of barbs are the major factors influencing the severity of tissue trauma and the anatomical location of impalement. (Refer Figure 1)

Figure 1: Styles and arrangements of hooks commonly used in recreational fishing (barbed circle, J-style, double and treble hooks), with the parts of a hook labelled: eye, tanges, shank, gape and barb.

Size of the hook is an important factor influencing the anatomical location of hook impalement, such that smaller hooks (of all styles) cause higher rates of deep hooking, while larger hooks more frequently cause foul hooking. It is recommended that hook size be matched to the morphology and gape of the mouth of the species being targeted. Style of hook is also an important consideration for fish welfare. J-style hooks are the hooks most commonly used by recreational fishers, and more frequently cause injury to vital organs than circle hooks, which usually become lodged in the jaw. However, consideration of the species being targeted is important, and hook modifications, such as offset circle hooks, are not as effective at reducing rates of injury.

Table 1: Emotional and mental states that can contribute to suffering in animals. The left column lists basic mental states, and the right column lists more complex mental states that may only be experienced by mammals.

Negative emotional and mental states
Basic: fear, irritation, starvation, sickness, frustration, fatigue, thirst
More complex: anxiety, phobia, boredom, depression, pain, distress, nausea, loneliness, sadness, bitterness, anguish, mental illness, paranoia, despair, torment, longing

Modified with permission from Gregory (2004).


Treble hooks generally exhibit decreased rates of deep hooking, but when deeply lodged, the tissue damage can be more extensive than with single or double hooks. Other problems encountered when using treble and double hooks are the extended time and air exposure required to remove multiple hooks, the increased chance of entanglement in dip nets, and the presence of multiple puncture wounds in fish. Barbs and tanges generally increase the severity of hook injury by inflicting greater tissue damage and bleeding than barbless hooks. Additionally, barbs can increase handling time and exposure to air by making it difficult to remove hooks, which significantly influences rates of mortality. Barbed hooks can easily be modified into barbless hooks by crimping with pliers. Not removing hooks from fish that are released can facilitate infection and, depending on the metal and the anatomical location of impalement, hooks may remain inside the fish for months or years. If a hook cannot be removed quickly (< 30 sec) without causing significant tissue damage, the fish should be immediately euthanased, or released by cutting the line as close as possible to the hook.

2. Line, rod, and reel
A major consideration in selecting lines, rods, and reels, and their effects on fish welfare, is the match of the gear to the environment, species, and size of fish being targeted. Balanced gear ensures the angler has control over the fish, which facilitates best handling procedures by reducing the number of break-offs, minimising the duration of capture, and reducing rates of injury. Fish that break off may suffer from being semi-permanently impaled by a hook with a line trailing behind. Light gear promotes break-offs and may increase the duration of capture, which also negates any conservation merit if the fish subsequently dies.

3. Live bait, lures, and flies
Different rates of deep hooking, foul hooking, and mortality have been observed between baits, lures, and flies, but there does not appear to be a general pattern applicable to all species and methods of capture. Rather, differences in injury appear to be related more to the presentation of the tackle, or the type and size of hook being used. Where live bait is used, the welfare of the fish presented as live bait may be compromised if humane procedures are not followed (see Euthanasia). It is common practice in some fisheries, such as marlin (Istiophoridae) fishing, to bridle live fish. The process involves stitching the live bait, usually through the orbits, to a large hook and towing or drifting it behind a boat. Live baiting is not essential to capture most predatory fish, and is discouraged if fish welfare is to be considered.

4. Landing nets, keepnets, live wells, and gaffs
Landing nets are commonly used by recreational fishers to assist in handling fish during capture. Because landing nets can extend the time required to remove hooks by becoming entangled with the hook, line, and fish, they should only be used when necessary. All nets cause some degree of pectoral and caudal fin abrasion, and may cause skin abrasion, which increases the risk of fungal infection in released fish. It has been found that 4mm diameter rubber and 1mm knotless nylon nets were most effective at reducing skin damage and mortality rates, while coarse and fine-knotted nets caused the most frequent and traumatic injuries.

Keepnets, live wells (also called bait tanks), and similar devices such as 'tuna tubes' are occasionally used to store live fish. Keepnets and live wells prolong the negative influences of catching fish by hook-and-line, in that capture is followed by confinement. Other retention devices such as stringers and fish baskets can result in significant injury and can increase rates of post-release mortality. Tuna tubes were developed by marine recreational fishers to maintain live bait species that normally die in bait tanks. Live bait species such as skipjack tuna (Katsuwonus pelamis) ventilate their gills by swimming forward with their mouth open, a strategy called ram ventilation. In a bait tank, their ability to ventilate is limited. A tuna tube pumps water through a pipe towards the fish's mouth and ventilates the fish in a confined area. While tuna tubes reduce bait mortality, the confinement does not improve fish welfare.

A gaff is a pole with a sharp hook fixed at the end, which is used to penetrate the flesh and bone of a fish. The hook may detach from the pole when using a 'flying gaff.' Gaffing is used to control large (> 5 kg) fish when landing nets are too small and the accuracy of delivering a stunning blow is physically difficult, such as over the side of a boat in rough seas. Gaffed fish show short-term avoidance swimming of a particularly strenuous nature. Gaffing is not recommended, because injuries caused by gaffing often result in significant bleeding, so that exsanguination (bleeding out) may precede stunning or death.

Euthanasia The principles of humane slaughter are the same for all animal species. These are: rapid loss of consciousness without any avoidable stress so that the animal feels nothing, followed by death as assessed by loss of brain function without regaining consciousness. Loss of consciousness is often induced in a commercial setting by either stunning (percussive or electrical), or chemically using anaesthetics. Death of unconscious animals can be achieved by the destruction of the brain or by anoxia caused by exsanguination/bleeding-out. The time to loss of consciousness in fishing is the period during which fish may suffer.

Table 2: Common methods of euthanasia used by recreational fishers that are deemed acceptable or unacceptable on animal welfare grounds.

Recommended: Percussive stunning (blow to head) and pithing (iki jime), or exsanguination (bleeding), or decapitation (neck cut)

Acceptable if unconscious (a): Pithing (b); Decapitation; Exsanguination

Unacceptable: Asphyxiation (c) (removal from water); Hypothermia (ice slurry or freezing); Pithing, decapitation, or exsanguination without prior percussive stunning; Percussive stunning without pithing, decapitation, or exsanguination

(a) Close et al (1997)
(b) Difficulty in administering an accurate strike makes pithing unacceptable without unconsciousness
(c) Poly et al (2005)



Post-release welfare issues
1. Stress
Angling is a stressor, and if it is severe enough or protracted it culminates in a compound stress response causing metabolic exhaustion. Metabolic exhaustion primarily results from the utilisation and synthesis of metabolic fuels (i.e. glycogen and high-energy phosphates), and can result in mortality or in physiological disturbance persisting from a few hours to several days.

2. Barotrauma
Expansion of gas in the gas bladder (barotrauma) may occur in fish that are rapidly brought to the surface. The level of bladder expansion and the mortality rate of fish increase proportionally with increasing depth. In extreme cases, fish are unable to swim and may float on the surface for several hours after release. Weighted mechanisms that sink fish back to appropriate depths after barotrauma appear to be effective at reducing mortality, but are still being tested. For some species of fish, puncturing the gas bladder with a sharp object has proven to be a successful method of reducing post-release mortality. However, puncturing the gas bladder only works for certain species and can easily be performed incorrectly, resulting in further injury to organs. Thus, puncturing the gas bladder and releasing fish is only recommended where local fisheries regulations provide detailed instructions, or if the angler has suitable training. Euthanasia is recommended when fish experience barotrauma.

3. Feeding
Feeding behaviour is an indicator of welfare, and in fish it is influenced by the frequency and duration of stressful events. Best practice for the welfare of farmed fish recommends that trout should not be deprived of food for longer than 48 hours, and salmon for longer than 72 hours. However, it must be considered that fish are not endothermic (as mammals and birds are), so deprivation of food lasting from days to weeks may have little immediate influence on nutritional adequacy. One of the initial reactions of farmed fish to stress is an immediate drop in food consumption which, depending on the level of disturbance, may progress to cessation of feeding. Reduced feed intake and impaired gastrointestinal motility may affect fish welfare by reducing growth and influencing overall fitness. Post-release feeding of an angled fish may be delayed until the fish has recovered from exhaustion and injury. However, damage to the eyes and mouth structures may permanently influence the ability and competitiveness of released fish to feed.

4. Growth
Growth rates of wild fish are naturally variable, but a continued suppression of growth may be a sign of impaired welfare. Most catch-and-release studies show little or no long-term effect of recreational capture on weight gain, but a significant decrease in growth has been reported between fish caught on J-style hooks and those caught on offset circle hooks. The difference in growth rate between hook types was attributed to an increased rate of injury from deep hooking in fish captured with J-style hooks. Thus, catch-and-release must be considered a potential growth constraint.

5. Reproduction
Catch-and-release has been shown to influence reproductive function and success in smallmouth bass and largemouth bass (Micropterus salmoides) by provoking abandonment of nests, increasing predation of the brood when guard males are removed, and physically impairing the ability to provide parental care. Additionally, physiological and biochemical disturbances resulting from catch-and-release and handling may influence spawning success by altering concentrations of hormones. Few studies suggest that catch-and-release has irreversible effects on reproduction; rather, it is more likely to influence the quality, quantity, or timing of reproductive output. Handling, exposure to air, and physical injury appear to be the most important factors influencing spawning success and behaviour in captured and released fish.

6. Disease
Diseases in fish that are captured and released are often caused by tissue damage or skin abrasion, which promote secondary fungal and bacterial infections. The skin of fish is composed of six layers, of which the outermost layer of mucus is the primary barrier against infection from the external environment. Anglers should be careful not to damage the skin, as can easily happen when using landing nets or when dragging fish onto the shore or the deck of a boat. Both bacteria and fungi are commonly found on healthy fish, and infection is usually caused by loss of mucus or skin abrasion.

7. Mortality
Rates of mortality vary widely (0-95%) within and between species of fish. Factors previously cited as contributing to mortality include (not in order of importance): duration of capture, size of the fish, handling time and exposure to air, location of the hook wound, presence and severity of injury, type of gear, the angler's experience, and abundance of predators. In a review of 32 taxa of freshwater and marine fish, it was found that estimates of mortality following hooking were variable but rarely exceeded 30%. Most post-release mortalities are related to metabolic exhaustion or lethal injury (e.g. bleeding gills), and usually occur within 24-48 hours after capture.


Delay of slaughter by keeping captured fish in a tank or keepnet, or tethering them on a line, represents a welfare compromise. Fish should be euthanased if they are bleeding, injured, deeply hooked, hooked in a vital organ, severely exhausted, or if the hook is irretrievable without causing significant tissue damage. Euthanasia methods for fish should therefore minimise stress or suffering immediately prior to unconsciousness, while remaining practicable and safe for operators. Percussive stunning followed by pithing, exsanguination or decapitation is recommended as the most practical and humane method for euthanasia of recreationally captured fish. Table 2 summarises methods used for euthanasia of fish.

Conclusion The paucity of scientific information about the influence of recreational fishing on fish welfare hinders the development of rational welfare protection measures. However, research on fish physiology, behaviour, anatomy, and cognitive abilities show that teleost fish possess similar qualities and exhibit responses that are seemingly analogous to other vertebrate species, notably chickens, which are afforded considerable welfare protection. Note: This article is an edited version of the review article ‘Physiology, behaviour and welfare of fish during recreational fishing and after release’ by the same authors, and was published in the New Zealand Veterinary Journal, 54(4), 161-172, 2006.


fish tagging
Tagging technology has enabled a better understanding of the movement and behaviours of key fish species, as John Montgomery, Tim Sippel, Agnes Le Port (all from the University of Auckland) and Clinton Duffy, Department of Conservation, explain:

On 29th May 2007, Minister of Fisheries, Rt Hon Jim Anderton wrote, "Pacific nations need to work together to keep their tuna fish stocks from being catastrophically depleted. This resource is the economic engine driving many Pacific Island economies, and we are connected by our responsibility to play our part in our corner of the globe."

In the above statement, Jim Anderton rightly pointed out the importance of high seas fisheries to the economies of Pacific Island nations. Managing and conserving fish species that are wide-ranging and belong to no one nation is a challenging task, made even more challenging by the fact that we know so little about the movements and behaviour of these amazing animals. This article will provide an overview of how exciting new technologies are being used to study the movements of wide-ranging fish species, and it will briefly profile some of the current work being done by the University of Auckland in collaboration with other partners on marlin, bluefin tuna, and stingrays.



Tagging technology
There are basically two types of tags used for large fish: satellite telemetry tags and archival tags.

1. Satellite telemetry tags
For large animals which surface regularly, it is possible to track their movements directly through satellite telemetry using Argos satellites. This technique has proven to be valuable on many sharks, and more recently on New Zealand's striped marlin. However, most fish don't surface regularly enough to use this method of direct tracking, requiring more innovative approaches for studying their movements.

2. Archival tags


To study the movements of large fish species such as great white sharks or tuna, a different technology based on 'archival' tags has proven to be very useful. The heart of this tagging technology is a small computer chip with a highly accurate clock, combined with external sensors, which can store (or 'archive') millions of data points. Typically, the sensors record temperature, depth, and light level, and these data are stored within the computer chip every sixty seconds for up to several years. Some versions of these archival tags are implanted inside a fish (IAT or Implantable Archival Tags), and the data are recovered when the fish is recaptured. Another version is attached externally to the fish, and the tags are programmed to automatically release from the fish and transmit data summaries to Argos satellites (PAT or Pop-off Archival Tags). PAT tags are basically the same as archival tags, but have the added sophistication of being able to be programmed to pop off after a set period and float to the surface. Once on the surface they make radio contact with a satellite and download their data via the satellite to our desktops. This means that these tags can be attached to fish such as great white sharks, or stingrays, that are unlikely to be caught again.

Retrieving information from tags
Information about position and movement can be retrieved from the data on the tag. To do this, the first step is to check that the clock on the tag has kept exact time. From there, longitude is estimated by calculating the time of mid-day or mid-night from the changing light levels; statistical routines on the tag allow this to be done with impressive accuracy. Next, variations in day length can be used to approximate latitude. However, accurate latitude estimation is challenging: latitude is most difficult to estimate around the equinoxes (autumn and spring) and for true positions close to the equator. This technique of fixing position using sunlight data is known as light level based geolocation (Hill 1994; Hill and Braun 2001). (Refer Figure 1)

Figure 1: Principle of light level based geolocation.

For example, where would you be if you were a marlin and your tag was telling you that today was the 30th of July, and the time of sunrise was 19:29 GMT and sunset 5:33 GMT? The time of midday, at 00:31 GMT, puts you on a longitude of 174.45°E; the day length of 10 hours and 4 minutes puts you at 36.51°S. That is to say, you are under the Auckland Harbour Bridge, and should probably be about to turn around and head back out to sea. (Note: data for sunrise and sunset on July 30th at Auckland was found at: www.rasnz.org.nz/SRSStimes.htm#July)

Other data on the tag can also help with position fixing. Cross-referencing sea surface temperatures measured by the tag against surface temperatures measured by satellites enables latitude estimates to be refined significantly. This is because temperature generally stratifies along a latitudinal gradient, enabling these data to further inform estimates of latitude (Beck et al., 2002; Teo et al., 2004). When all is said and done, these methods can estimate longitude to within ±0.5° and latitude to within ±1.0-2.0° (Teo et al. 2004; Nielsen et al. 2006; Wilson et al. 2007). (Refer Figure 2)

The movement information, combined with the other data on the tag, also provides important information on the fish's behaviour. Is it migrating and travelling large distances each day, or is it milling about and feeding? How often and how deep is it diving? Is it feeding at the surface or at depth? Is it migrating at the surface or at depth? What is its migration route? Is there a return seasonal migration? Answers to all these questions are important in managing and conserving these fish.
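The worked example above can be reproduced, approximately, in a few lines of code. The sketch below is a simplified version of light-level geolocation: it ignores the equation of time, atmospheric refraction and the size of the solar disc, so its answers differ from the figures in the example by a degree or two, and the solar declination used for 30 July is an assumed approximate value.

```python
import math

def longitude_from_midday(midday_utc_hours):
    """Longitude (degrees east) at which local solar noon falls at the given UTC time.
    Ignores the equation of time, so expect errors of a degree or two."""
    return (12.0 - midday_utc_hours) * 15.0

def latitude_from_day_length(day_length_hours, solar_declination_deg):
    """Latitude (degrees, north positive) whose day length matches, given the Sun's declination.
    Uses the sunrise equation cos(H0) = -tan(lat) * tan(decl); ignores refraction."""
    h0 = math.radians(day_length_hours * 15.0 / 2.0)   # half the day length as an hour angle
    decl = math.radians(solar_declination_deg)
    return math.degrees(math.atan(-math.cos(h0) / math.tan(decl)))

# Worked example: 30 July, midday at 00:31 GMT, day length 10 h 04 min.
midday_gmt  = 31 / 60      # 00:31 GMT expressed in hours
day_length  = 10 + 4 / 60  # 10 hours 4 minutes
declination = 18.2         # approximate solar declination (degrees) for 30 July -- assumed

print(f"Longitude ~ {longitude_from_midday(midday_gmt):.1f} degrees east")            # about 172
print(f"Latitude  ~ {latitude_from_day_length(day_length, declination):.1f} degrees")  # about -37 (i.e. 37 S)
```

The real tags use more careful statistical routines, and refine these rough positions with sea surface temperature, which is why they can do rather better than this classroom version.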



Figure 2: Sea surface temperature map of the Pacific Ocean from NOAA (USA).

Figure 3: Movement track of a striped marlin carrying a satellite telemetry tag from Eastern Bay of Plenty to French Polynesia (in 2007). From the data obtained to date, it is clear that the marlin fan out across the tropics for the winter and then migrate back to New Zealand waters to feed over our summer.






Current tagging projects
The above tagging technology is being used to help us better understand movements and behaviours of important fish such as marlin, bluefin tuna, stingrays, and great white sharks. Below, we have outlined some of our current projects with these fish species.

Figure 4: Tracks of Pacific bluefin tuna from PAT tagging. These tracks show the fish to have travelled widely across the Tasman and around New Zealand.

1. Marlin
Tim Sippel is a PhD student who is working on striped marlin and bluefin tuna. He's been involved with striped marlin satellite tagging since arriving here from the USA in 2002. The initial success of satellite tagging striped marlin (Sippel et al. 2007) provided a building block for expansion of this research in New Zealand. The striped marlin and bluefin tuna research is being conducted in partnership with Blue Water Marine Research in Northland, NZ, and with an international programme called Tagging of Pacific Pelagics (TOPP). Pelagic means living in the open ocean, and pelagics include key Pacific predators such as tunas, billfishes, sharks, and marine mammals. TOPP is one of the cornerstone projects of the Census of Marine Life, a global effort to map and catalogue past, present, and future oceanic systems. One of the key aims of TOPP is to provide a framework for sustainable resource management and marine conservation. It is jointly run by Stanford's Hopkins Marine Station, UC Santa Cruz's Long Marine Laboratory, and NOAA's Pacific Fisheries Ecosystems Laboratory. (Refer: http://topp.org/)

Through links to TOPP, regular updates of marlin tagged by Tim can be viewed on the non-TOPP website hosted by the Tagging of Pacific Pelagics programme at: http://las.pfeg.noaa.gov/nonTOPPtags/ (Refer Figure 3).

Figure 5: Water column profile transmitted by a Pacific bluefin tuna’s PAT tag. The fish was tagged in August and the tag was programmed to release and transmit its data six months later. Over this period the fish stayed in temperate or cool temperate water and it is clear that the deeper diving corresponded to periods spent in cool water.




2. Bluefin Tuna
In August 2006, Blue Water Marine Research (NZ) partnered with TOPP to satellite tag the first Pacific bluefin tuna (Thunnus orientalis) from a rapidly growing recreational fishery off the west coast of the South Island. There are three species of bluefin tuna: two are Indo-Pacific and one is from the Atlantic Ocean, with those from the Atlantic Ocean being heavily over-fished and previously considered for listing as threatened or endangered species. Pacific stocks are also heavily exploited, but their status is not well understood. Genetic sampling has revealed New Zealand to be the first place in the world where all three species of bluefin tuna have been found simultaneously, which could have significant implications for conservation and management in both the Atlantic and Indo-Pacific.

To date we only have data for six bluefin tuna, with most of the records covering only about six months. (Refer to Figures 4 and 5) These records show how mobile these fish are, travelling extensively around New Zealand and across the Tasman. However, at this point the movement records don't give us a full twelve-month picture of their migrations and movements, where they might be spawning, or how connected the New Zealand populations are with those in the tropics or in other oceans. In 2008, our plan is to tag an additional twenty Pacific bluefin with PATs, which will provide important information on movement and migration. These data will allow better assessment of the connectivity of tuna stocks in the South Pacific and the extent to which fisheries in one area impact on other areas.

3. Stingrays
Another PhD student at the Leigh Marine Laboratory, Agnes le Port, has been using PAT tags to study the movements of stingrays at the Poor Knights Islands. Her work is aimed at understanding the behaviour behind the aggregations of rays that turn up at the Poor Knights each summer. The hypothesis is that these are breeding aggregations, and that these animals disperse widely over the rest of the year. If this is the case, it has important implications for population structure, and for their survival over the period when they are susceptible to trawl fishing mortality. This work has featured in a recent NZ Geographic article (NZ Geographic 90, March – April 2008, pp. 102-109).

4. Great White Sharks


Another tagging project is not being conducted by the University of Auckland, but we can't pass up the opportunity to mention the ongoing work by Clinton Duffy at the Department of Conservation, Dr Malcolm Francis and Michael Manning of NIWA, and Dr Ramon Bonfil of Shark-Tracker. Clinton and his colleagues have been attaching PAT tags to great white sharks around the Chatham and Stewart Islands since April 2005. Their results show that white sharks tagged in these areas are generally resident around the islands for three to five months before undertaking relatively rapid long-distance migrations. Three sharks tagged at the Chatham Islands all moved north towards the tropics in winter: one moved over 1000 km northeast towards the Louisville Ridge before its tag released, one travelled to New Caledonia, and the other to southern Vanuatu. One large female tagged at Stewart Island appears to have made an excursion south to the Auckland Islands before travelling over 3000 km to Swain Reefs off Rockhampton in Queensland.

In conclusion, new technologies for tracking fish are providing us with important new information on the movements and behaviour of these important large predators. Not only are species such as tuna hugely important as a fishery, but as top predators all of these species provide us with key insights into the health of the oceans. New Zealand has the world’s fifth largest Exclusive Economic Zone, and an important responsibility to the South Pacific. Discovering the movements and behaviour of the top predators in this area is a contribution to “playing our part in our corner of the globe.”

Acknowledgement
We would like to thank TOPP, the Ministry of Fisheries, Blue Water Marine Research, and the National Research Institute of Far Seas Fisheries (Japan) for contributions to the data represented in Figures 4 and 5. For further information visit: http://www.marine.auckland.ac.nz/

References
Beck, C.A., McMillan, J.I., & Bowen, W.D. (2002). An algorithm to improve geolocation positions using sea surface temperature and diving depth. Marine Mammal Science, 18, 940-951.
Hill, R. (1994). Theory of geolocation by light levels. In B.J. Le Boeuf & R.M. Laws (eds), Elephant seals: population ecology, behavior, and physiology, pp 227-236. University of California Press, Berkeley.
Hill, R.D., & Braun, M.J. (2001). Geolocation by light-level, the next step: latitude. In J. Sibert & J.L. Nielsen (eds), Electronic tagging and tracking in marine fisheries, pp 443-456. Kluwer Academic Publishers, Dordrecht.
Nielsen, A., Bigelow, K.A., Musyl, M.K., & Sibert, J.R. (2006). Improving light-based geolocation by including sea surface temperature. Fisheries Oceanography, 15, 314-325.
Sippel, T.J., Davie, P.S., Holdsworth, J.C., & Block, B.A. (2007). Striped marlin (Tetrapturus audax) movements and habitat utilization during a summer and autumn in the Southwest Pacific Ocean. Fisheries Oceanography, 16, 459-472.
Teo, S., Boustany, A., Blackwell, S., Walli, A., Weng, K., & Block, B. (2004). Validation of geolocation estimates based on light level and sea surface temperature from electronic tags. Marine Ecology Progress Series, 283, 81-98.
Wilson, S.G., Stewart, B.S., Polovina, J.J., Meekan, M.G., Stevens, J.D., & Galuardi, B. (2007). Accuracy and precision of archival tag data: a multiple-tagging study conducted on a whale shark (Rhincodon typus) in the Indian Ocean. Fisheries Oceanography, 16, 547-554.

census of marine life
The Census of Marine Life is a ten-year initiative to assess and explain the diversity, distribution, and abundance of marine life in the oceans – past, present, and future. Ending in 2010, the Census involves a global network of more than 2000 researchers in more than 80 nations, working on 17 projects around the globe.

Census of Marine Life Projects include



ArcOD (Arctic Ocean Diversity): An international collaborative effort to inventory biodiversity in the Arctic sea ice, water column and sea floor, from the shallow shelves to the deep basins, using a three-step approach: compilation of existing data; taxonomic identification of existing samples; and new collections focusing on taxonomic and regional gaps.
CAML (Census of Antarctic Marine Life): CAML will survey the cold Southern Ocean surrounding Antarctica in an attempt to understand the biological diversity of this unique and poorly understood environment.
CeDAMar (Census of Diversity of Abyssal Marine Life): A deep-sea project documenting species diversity of abyssal plains to increase understanding of the historical causes and ecological factors regulating biodiversity and global change.

CenSeam (Global Census of Marine Life on Seamounts): A global study of seamount ecosystems, to determine their role in the biogeography, biodiversity, productivity, and evolution of marine organisms, and to evaluate the effects of human exploitation.
ChEss (Biogeography of Deep-Water Chemosynthetic Ecosystems): A global study of the biogeography of deep-water chemosynthetic ecosystems and the processes that drive them.
CMarZ (Census of Marine Zooplankton): A global, taxonomically comprehensive biodiversity assessment of animal plankton, including ~6,800 described species in fifteen phyla.
COMARGE (Continental Margins): An integrated effort to document and explain biodiversity patterns on gradient-dominated continental margins, including the potential interactions among their variety of habitats and ecosystems.
GoMA (Gulf of Maine Program): A project documenting patterns of biodiversity and related processes in the Gulf of Maine, which will be used to establish ecosystem-based management of the area.



There have been significant and rapid advances in technology to assist with measuring water quality and quantity, as Jeremy Bulleid and Graham Elley from NIWA’s Instrument Systems group explain: Our water resources and ecosystems need to be managed, and to be managed effectively they need to be measured. Only by collecting data over relatively long periods can we answer the questions about change – what is changing, how much, how do we know for sure – and differentiate natural cyclic behaviour from emerging trends. And the more accurate and reliable the data are, the more confident scientists and managers can be in their interpretation of it. This article describes some of the most important water-related parameters that are routinely measured; how water monitoring has progressed over time; the technology in use today; and some possible future developments. (Refer Box 1)

Measuring water quantity and quality
1. Water quantity
In the past, the role of hydrologists was mainly related to catchments, rivers, and efforts to mitigate the effects of floods. Now, water is becoming increasingly precious, especially around urban areas and areas with high irrigation takes. Knowledge of 'how much' water is a crucial management tool, and often a mandatory requirement of the Resource Management Act (RMA).

2. Water quality
Increasing awareness of the importance of water quality for human health and biodiversity, in both freshwater and seawater, means water quality monitoring is featuring ever more prominently, utilising more instruments with rapidly developing capability. Table 1 below indicates some of the most common parameters measured, and the typical accuracy of the measurements.

Advances in water monitoring
The science and methods of water monitoring have advanced significantly over the past one hundred years, with sampling frequency, availability, and accuracy of data all increasing by several orders of magnitude. Here are some examples:

Sampling frequency has progressed from early manual observations, usually made only when there was a significant event such as a flood, through to readings recorded on electronic data-logging instruments up to several times a second.

Data availability has also improved dramatically, along with improvements in data storage technology, both driving the trend towards recording and archiving an ever-increasing amount of data. Data availability has improved from being 'virtually unavailable to all but a very few' to 'mostly available, in near real-time, to many.' Increased capacity for recording observations has enabled the high-frequency collection and storage of raw measurements that can be analysed and reanalysed by progressively more sophisticated techniques.

Data accuracy and reliability have historically been poor, variable, usually unverified and, retrospectively, unverifiable. The accuracy of instruments and measurement processes has improved enormously; measurements of primary parameters, such as water level, now routinely have an accuracy of much better than one percent.

Parameter diversity has been increasing steadily, particularly in the water quality area. For example, one of many new parameters, chlorophyll fluorescence, allows us to measure the amount of photosynthesising organisms in the water.

Table 1
Monitored Parameter | Unit | Land | River | Lake | Estuary | Sea | Accuracy (typical)

Water Quantity
Level | mm | - | x | x | x | x | <0.5%
Flow Rate | litres per second, cubic metres per second | - | x | - | x | - | <8%
Rainfall | mm of rainfall | x | - | - | - | - | <2%
Soil Moisture | percent | x | - | - | - | - | <5%?
Currents | metres per second | - | - | x | x | x | <3%
Waves | height mm, length m | - | - | x | x | x | <3%

Water Quality
Turbidity | Nephelometric Turbidity Units (NTU) | - | x | x | x | x | <3%
Electrical Conductivity | microsiemens per centimetre (µS/cm) | - | x | x | x | x | <3%
Temperature | degrees Celsius (°C) | - | x | x | x | x | <0.3%
Salinity | grams per cubic centimetre (g/cm3) | - | x | x | | | <10%?
Dissolved Oxygen | parts per million (ppm) or % saturation | - | x | x | - | x | <5%
Acidity (pH) | pH (log hydrogen ion concentration) | - | x | x | - | x | 0.02 pH
Nitrate | grams per cubic centimetre (g/cm3) | - | x | x | - | x |
Colour dissolved organic matter | parts per billion (ppb) | - | x | x | - | x |
Chlorophyll fluorescence | micrograms per litre (µg/l) | - | x | x | - | x |
Underwater light level | micromoles per second per square metre | - | x | x | - | x |

(An 'x' indicates the environments in which the parameter is commonly measured.)


Box 1 - Water monitoring at Lake Forsyth - a case study
The water quality in Lake Forsyth, on the Banks Peninsula near Christchurch, has been a long-term concern of local people, who are routinely advised not to drink the lake water or swim in the lake because of high levels of toxic blue-green algae. For many years, Environment Canterbury monitored lake levels. Ten years ago, water quality monitoring instrumentation was added to the lake level monitoring station, and five key water quality parameters have been monitored ever since then:
• Water temperature
• Electrical conductivity (indicates concentration of some nutrients)
• Turbidity (indicates suspended sediment)
• Wind speed (affects turbidity, an indicator of suspended sediment)
• pH
Parameters are monitored every 15 minutes, providing comprehensive long-run information about water quality in the lake. (Box 1 continues below.)

Today’s technology Today, digital instruments are central to water monitoring. With cellular (GPRS and CDMA) and satellite communication technology and the Internet, logged data can be transferred to a central server. Scientists can view the data via the Internet, in both raw and processed form, very soon after the measurement has taken place, in ‘near real-time.’ The new ‘measurement-to-web’ approach is a significant change to the way we now approach environmental monitoring. No longer do we have to wait for weeks to see recorded changes in rivers, the sea, and climate. Now, near real-time data can be used as inputs into scientific computer-based models that can generate flood or low flow warnings, tsunami warnings and other potential threats to life and property, all available to people without the need for complex and costly infrastructure. (Refer Figure 1)

Figure 1: An example of a land-based telemetry network, monitoring and delivering data to, and receiving instructions from, a technician who could be located many miles from the network. Photos: Alan Blacklock & Marty Flanagan, NIWA.
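The ‘measurement-to-web’ idea is essentially: read a sensor, timestamp the value, package it, and hand it to a telemetry link. The sketch below shows only that packaging step. The station identifier, sensor reading and JSON layout are all invented, and a real logger would push the record over a GPRS, CDMA or satellite link to its server rather than printing it.

import json
from datetime import datetime, timezone

def build_record(site, parameter, value, unit):
    """Package one logged measurement as the kind of JSON a telemetry link might carry."""
    return {
        "site": site,
        "parameter": parameter,
        "value": value,
        "unit": unit,
        "time_utc": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }

# A hypothetical water-level reading from an imaginary station:
record = build_record(site="EXAMPLE-01", parameter="water_level", value=1.437, unit="m")
payload = json.dumps(record)
print("Would transmit:", payload)
# In a real measurement-to-web system this payload would be sent to a central
# server, stored, and made viewable over the Internet in near real-time.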

Box 1 (continued)
More recently, the instrumentation at Lake Forsyth has been upgraded. Monitoring now includes dissolved oxygen concentration (which affects fish and other life in the lake). Data that used to be collected manually are now transmitted in near real-time back to a central server. From here ECan and Christchurch City Council can access recent and historical data via the Internet, enabling the lake’s managers to predict and monitor toxic algal blooms and other water quality changes. The long-run water quality data provide a baseline against which current and future comparisons of water quality can be made. ECan already provides river and beach water quality data to the public via its website, and will consider adding near real-time information from Lake Forsyth in the future.
Photo caption: The water monitoring station at Lake Forsyth. The station is solar powered, and relays data continuously back to a central server in Christchurch. Photo: Marty Flanagan, NIWA.

Technology and systems in use

1. Monitoring water level in groundwater bores
A ‘bubbler’ or ‘gas purge’ water level instrument is well-suited for monitoring groundwater because sensitive equipment doesn’t need to be located deep underground and underwater, where it is vulnerable. Instead, we can lower a plastic bubble tube, through which compressed air is bubbled. A pressure sensor, located above ground, measures the pressure imposed on the air in the bubble tube by the weight of water above the lower end of the tube (P = ρgh). This pressure is converted into a water level value. A dairy company, for example, might use this information to manage its water usage while ensuring that it is complying with its resource consent conditions.

2. Monitoring and controlling irrigation water
Delivering irrigation water to the right place at the right time requires accurate, targeted control of water flow. A large number of flow rate monitoring and gate control systems are now operating on irrigation schemes throughout Canterbury. Typically, water level is measured downstream from the control gates with a rotary encoder system, and converted into a flow rate by applying a mathematical ‘level-to-flow’ relationship in the data logger program. Automation technology now means that race-men no longer have to go to a site to open a gate to let through more or less water. Flow targets can be changed remotely, via a laptop with a cellular connection, or via a cellphone with text messaging. And if something goes wrong at an irrigation site, technology allows an alarm to be sent automatically to a mobile phone as a text message. This alarm is triggered when data move outside a programmed normal range. (Refer Figure 2)

Figure 2: This text alert relates to a remote irrigation control station. The target flow has been set to 235 litres per second and is currently 238 litres per second. It is within the allowable settable tolerance of 10 litres per second so no alarm has been generated. Photo: Dave Gibb, NIWA.
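The level and flow arithmetic above is simple enough to sketch. The Python listing below converts a bubbler back-pressure to a water level with P = ρgh, applies a hypothetical level-to-flow rating (the coefficient and exponent are invented; real ratings are calibrated by gauging each site), and applies the same target-and-tolerance alarm test described for the irrigation station in Figure 2.

# Illustrative sketch only: the rating-curve constants and the pressure reading are
# invented, not NIWA values; the alarm thresholds echo the example in Figure 2.
RHO = 1000.0   # density of fresh water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pressure_to_level(pressure_pa):
    """Water depth (m) above the bubble-tube outlet, from back-pressure: h = P / (rho * g)."""
    return pressure_pa / (RHO * G)

def level_to_flow_lps(level_m, a=4.5, b=1.6):
    """Hypothetical 'level-to-flow' rating Q = a * h^b, returned in litres per second."""
    return a * (level_m ** b) * 1000.0

def alarm_needed(flow_lps, target_lps=235.0, tolerance_lps=10.0):
    """True when the measured flow has moved outside the settable tolerance around the target."""
    return abs(flow_lps - target_lps) > tolerance_lps

back_pressure = 1560.0                    # Pa, as the logger might report it
level = pressure_to_level(back_pressure)  # about 0.16 m of water above the tube outlet
flow = level_to_flow_lps(level)
print(f"Level {level:.3f} m, flow {flow:.0f} L/s, send alarm: {alarm_needed(flow)}")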

3. Acoustic Doppler Current Profiler (ADCP)
The ADCP is beginning to supersede the older mechanical current meters used for river gauging. ADCPs are sophisticated echo sounders which can measure water depth using a hydrostatic pressure sensor, and velocity from suspended particles or air bubbles in the water. (Refer Figure 3)
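River flow gauging with a moving ADCP ultimately comes down to integrating velocity over the channel cross-section. The sketch below is a much-simplified mid-section calculation, not the instrument's own processing: it assumes we already have a depth and a depth-averaged velocity at a handful of verticals across an imaginary river, with made-up numbers.

# Simplified velocity-area gauging: Q = sum(width * depth * mean velocity) over verticals.
# The station spacings, depths, and velocities below are invented for illustration.
stations = [
    # (distance from left bank in m, depth in m, depth-averaged velocity in m/s)
    (0.0, 0.0, 0.0),
    (2.0, 0.6, 0.25),
    (4.0, 1.1, 0.45),
    (6.0, 1.3, 0.50),
    (8.0, 0.9, 0.35),
    (10.0, 0.0, 0.0),
]

def discharge(stations):
    """Total discharge in cubic metres per second using the mid-section method."""
    total = 0.0
    for i, (x, depth, velocity) in enumerate(stations):
        # Each vertical 'owns' half the distance to its neighbours on either side.
        left = stations[i - 1][0] if i > 0 else x
        right = stations[i + 1][0] if i < len(stations) - 1 else x
        width = (right - left) / 2.0
        total += width * depth * velocity
    return total

print(f"Estimated discharge: {discharge(stations):.2f} m^3/s")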

4. SMARTi
Data transmission from multiple remote locations is made possible by the SMARTi – a universal serial data interface which enables multiple connections to instruments. A wide range of sensors – for example, from rainfall monitors to sophisticated underwater chlorophyll-detecting instruments – can be linked into data transmission networks via the SMARTi.

5. Sea water monitoring
NIWA’s marine buoy in Golden Bay has a suite of above- and below-water instrumentation, recording weather, water quality, wave and current data, and transmitting it via a GPRS cellular link to the web. The data are available via the Tasman District Council website: http://www.tasman.govt.nz/index.php?GoldenbayMetbuoyGraphs Marine aquaculture operations also use special buoys to monitor salinity and other parameters. GPS drifter buoys and underwater ADCPs are regularly used for mapping sea currents around bays to help assess aquaculture sustainability. (Refer Figure 4 & 5)
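The article does not describe the SMARTi's actual data format, so the record layout below (a comma-separated line of sensor codes and values) is purely an invented stand-in. The point is only to show the kind of job such an interface and its logger do: turning several instruments' raw output into one tidy, timestamped record ready for telemetry.

from datetime import datetime, timezone

def parse_record(raw_line):
    """Split an invented 'CODE=value' record into a dictionary of floats."""
    readings = {}
    for field in raw_line.strip().split(","):
        code, value = field.split("=")
        readings[code] = float(value)
    return readings

# One hypothetical line, as a multi-sensor interface might assemble it:
raw = "RAIN=0.5,TURB=3.2,COND=182.0,CHLF=1.8"
record = {
    "time_utc": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    "readings": parse_record(raw),
}
print(record)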


Figure 3: The ADCP Traveller. This remote-controlled float-mounted Acoustic Doppler Current Profiler, invented by NIWA technician Andrew Willsman, moves backwards and forwards across rivers at a slow, steady speed, enabling high quality water flow measurements. Photo: Andrew Willsman, NIWA.

Figure 4 (left): NIWA’s marine buoy, deployed in Golden Bay to monitor both the below- and above-water environment. Figure 5 (right): A salinity/turbidity buoy, typically used to monitor seawater for the aquaculture industry. Photos: Ralph Dickson, NIWA.

Future developments – Water Quantity
1. Monitoring developments
Non-contact sensors will mean that instruments will not need to be put into the debris-filled, sediment-laden, fast-flowing water of a river in flood. Instruments will increasingly ‘talk the same language’ as technology moves towards the ideal of universal compatibility. The majority of sensors will adopt a universal connectivity standard, facilitating integration into any system. Low-power designs will bring down initial monitoring station power supply costs as solar panels, mounting hardware and battery size diminish in scale.
2. Modelling to predict hazards
Modelling of processes such as flood warning is gaining significant traction and utilises data collected by monitoring networks. For example, the ocean data collected by the Golden Bay buoy are being used by NIWA to test its EcoConnect hazard forecasting system. The data collected will help improve forecasts of extreme weather events, and their impact on people and property. Modelling of water resources is also becoming an indispensable management tool, as power generators, conservationists, and irrigators compete for finite supplies. For example, water-metered points will be telemetered and the data pushed to a server.

Future developments – Water Quality
1. Recording new parameters
New parameters will be recorded with increasing frequency, and sensors that can sense and then bypass, mitigate or extract some types of pollutants from waterways will emerge. For example, at power substations, sensors for detecting toxic transformer oil intrusion into urban stormwater systems are already in their infancy. These will work by detecting and then automatically separating oily water and pumping it into a holding tank for safe storage.
2. Light absorption spectrometers
Analysis at a molecular level will become widespread. Already selective electrodes for the detection of ammonium and nitrate ions are in use, and sensors capable of differentiating between other types of ion in solution will be driven by the need to improve and meet new international quality standards for urban drinking water and bottled drinks.
3. Monitoring and detection of water-borne coliform bacteria
Some of the present measurements used for the management of wastewater, such as biological and chemical oxygen demand (BOD and COD), used as ‘indirect’ quality indicators, will in future be served by better instruments that can produce faster results.
4. Imaging technology
It will become possible to routinely identify many different types of freshwater-borne organisms such as giardia, cryptosporidium and didymo, using continuous flow-through imaging instruments. Similarly, continuous imaging technology will be increasingly used for identifying and managing threats to coastal and marine biodiversity and biosecurity, such as marine organisms that may be carried in ships’ ballast water.

Conclusion
In New Zealand we are fortunate that we have been monitoring water quantity and quality for many decades and have significant expertise and information in this area. The technology available has shown some tremendous advances in the recent past, and will continue to develop apace. Water resource management will become increasingly sophisticated and automated. It is essential that we monitor water accurately, and also that we can interpret the data we gather and use the information wisely for current and future management decisions. The key is having well-trained scientists and technicians who can design and utilise the instrumentation to its full capacity, analyse and interpret the data, provide sound advice to managers based on accurate, long-run information, and have the vision to maintain the rapid progress that we have seen to date.
For further information contact: j.bulleid@niwa.co.nz


stormwater management on the move in New Zealand
Have you ever wondered what happens to the rain falling on your roof or streaming down the gutters? And have you ever considered it as a source of metal pollution? Annette Semadeni-Davies, a stormwater engineer for NIWA (Auckland) explains:

Current issues in stormwater management
Urban areas are arguably the most modified of human environments, and urbanisation affects all parts of the hydrological cycle to the detriment of local water resources (Figure 1). Population growth and construction go hand in hand to alter the flow pathways taken by water in a catchment. Removal of vegetation, increased imperviousness, and drainage via buried pipes mean that urban hydrographs (flow time series) are characterised by high flow peaks and fast response to even minor rainfalls.
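To make the link between imperviousness and bigger flow peaks concrete, the sketch below uses the rational method (Q = C·i·A), a standard textbook peak-flow formula that the article itself does not mention. The runoff coefficients, rainfall intensity and catchment area are illustrative values only.

def peak_flow_m3s(runoff_coefficient, rainfall_mm_per_hr, area_ha):
    """Rational method: Q = C * i * A, converted to cubic metres per second."""
    intensity_m_per_s = rainfall_mm_per_hr / 1000.0 / 3600.0
    area_m2 = area_ha * 10_000.0
    return runoff_coefficient * intensity_m_per_s * area_m2

rainfall = 20.0   # mm/h design storm (illustrative)
area = 50.0       # hectares

# Typical textbook runoff coefficients: vegetated land sheds a small fraction of
# rainfall, while roofs and pavement shed most of it.
pre_urban = peak_flow_m3s(0.25, rainfall, area)
urbanised = peak_flow_m3s(0.85, rainfall, area)

print(f"Pre-urban peak flow: {pre_urban:.2f} m^3/s")
print(f"Urbanised peak flow: {urbanised:.2f} m^3/s "
      f"({urbanised / pre_urban:.1f} times larger)")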

Figure 1: Effects of urbanisation on local water resources. Population increase and construction lead to increased water demand, land use change and increased impervious area, and increased waste water and stormwater; the management consequences include sewer and treatment plant overloads, increased erosion and contamination, increased speed of runoff generation, increased runoff volume and more and larger floods; the resource consequences are fewer water resources, reduced environmental quality and a reduced natural landscape.

Urbanisation and urban activities can also lead to reduced water quality. As many cities in New Zealand are located on natural harbours or estuaries, the health of these receiving environments is closely linked to the quality of contaminants transported in urban stormwater. Suspended sediments in stormwater are a particular concern. Not only can high sediment loads potentially damage aquatic bottom-dwelling communities by smothering or changing substrate grain size, but contaminants, especially metals, are often in particulate form and are deposited with the sediments. Williamson and Morrisey (2000), for instance, found an increase in the metal content in the bed sediment of Auckland estuaries that was associated with urbanisation. The presence of particulates in stormwater has profound implications for both the transport and removal of contaminants within the stormwater drainage network.

That is, by removing sediments, particulate contaminants are also removed. When treating stormwater, it is therefore important to know both the particle size distribution (stormwater sediments have a range in grain radius that covers five orders of magnitude; Table 1) and contaminant fractionation (i.e., content by particle size) so that the sediments most likely to cause damage at the receiving environment can be targeted. Suspended sediments cover clays to very fine sand (1 - 125 μm), and settling and filtration are the main means of their removal – the smaller the particle, the greater the mobility and the harder it is to remove. Generally, as the smallest particles provide the greatest geometric surface area to diameter ratio for bonding, they are associated with the greatest solid metal concentrations. However, metal partitioning and fractionation vary with stormwater properties and the physical characteristics of the sediments. For instance, Dempsey et al. (1993) and Sansalone and Buchberger (1997) found that low pH can lead to desorption of metals originally bound to particles. Dissolved contaminants are problematic in that they cannot be removed with the sediments, and other treatments, such as absorption in filter beds, are often required.

There is a relationship between land use, contaminant type and concentration. A recent study that compared sediments transported by rural (Kaipara and Waikato) and urban (Auckland) streams showed that particulate metal content varies with land use (Bibby and Webster-Brown, 2005; 2006). A major source of metals and hydrocarbons is traffic, not only due to fuel exhaust, but to wear and tear on brakes and tyres and the road itself. Hence contamination hot spots include highways, access ramps, and intersections where cars are constantly braking and accelerating. Timperley et al. (2005) investigated sources of metals in stormwater, and found that the main sources were roofing (i.e. dissolved zinc); traffic (e.g. copper from brakes, zinc from tyre wear); and industry. Unpainted galvanised steel roofs are a real culprit when it comes to dissolved Zn, but painting or replacing these roofs with coated products effectively reduces or eliminates this source depending on the product. The growing fashion for copper roofs and guttering is a potential new source of dissolved copper. Since the removal of lead as an additive to petrol, this metal has low concentrations in stormwater, with the primary sources being historical, such as previously contaminated soils or wash-off from lead-based paint. Other contaminants include polycyclic aromatic hydrocarbons (PAHs), herbicides, pesticides, fungicides, plasticisers and hydrocarbons in oil and grease.
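The split between dissolved and particulate metal can be expressed with a partition coefficient (Kd, the ratio of particle-bound to dissolved concentration). The sketch below uses that standard relationship; the Kd value and suspended-solids concentrations are invented for illustration and are not taken from the studies cited above.

def particulate_fraction(kd_l_per_kg, tss_mg_per_l):
    """Fraction of the total metal load carried on particles.

    f_p = Kd * SS / (1 + Kd * SS), with suspended solids (SS) expressed in kg/L.
    """
    ss_kg_per_l = tss_mg_per_l * 1e-6
    return kd_l_per_kg * ss_kg_per_l / (1.0 + kd_l_per_kg * ss_kg_per_l)

kd = 50_000.0  # L/kg, an illustrative partition coefficient for zinc on stormwater solids
for tss in (20.0, 200.0):  # mg/L of suspended solids: low flow vs a sediment-laden storm
    fp = particulate_fraction(kd, tss)
    print(f"TSS {tss:5.0f} mg/L -> {fp:.0%} particulate, {1 - fp:.0%} dissolved")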

Table 1: Udden-Wentworth scale (Wentworth, 1922) for particle sizes in microns (μm) typically found in stormwater (after Makepeace et al., 1995)
Very coarse sand: 1000–2000
Coarse sand: 500–1000
Medium sand: 250–500
Fine sand: 125–250
Very fine sand: 62.5–125
Silt: 3.9–62.5
Clay: 1–3.9
Colloid: <1
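The claim that the smaller the particle, the greater its mobility can be illustrated with Stokes' law for the settling velocity of small particles in still water. This is a standard physics result rather than anything from the article; the grain density and sizes below are typical textbook values, and the law becomes unreliable for the coarser sand sizes.

G = 9.81               # gravitational acceleration, m/s^2
RHO_SEDIMENT = 2650.0  # kg/m^3, typical quartz-dominated sediment
RHO_WATER = 1000.0     # kg/m^3
MU = 1.0e-3            # Pa.s, dynamic viscosity of water at about 20 degrees C

def stokes_settling_velocity(diameter_um):
    """Settling velocity (m/s) of a small sphere: v = g * d^2 * (rho_s - rho_w) / (18 * mu)."""
    d = diameter_um * 1e-6
    return G * d ** 2 * (RHO_SEDIMENT - RHO_WATER) / (18.0 * MU)

for name, d_um in [("clay", 2), ("silt", 30), ("very fine sand", 100)]:
    v = stokes_settling_velocity(d_um)
    hours_to_fall_1m = 1.0 / v / 3600.0
    print(f"{name:>15s} ({d_um:3d} um): {v:.2e} m/s, ~{hours_to_fall_1m:.1f} h to settle 1 m")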


The drained city - combined and separate sewers
So how did the situation discussed above come about? Most of us are familiar with drains, or catch pits, in gutters (Figure 2), but where stormwater goes after that becomes a mystery. The conveyance of stormwater via pipes is an historical legacy from the time when the main goal of urban water management was to get water off the streets as quickly as possible to avoid health hazards and flood risk. Water treatment to safeguard receiving waters was not considered in the design.

Ancient Greek and Roman sewers are still in use in some European cities; however, widespread sewer building in the mid-nineteenth century in the Western world was initially a response to the industrial revolution that saw a rapid increase in urban populations. Technological advances also enabled the indoor water closet to replace chamber pots, at least in well-to-do households that came to demand good sanitation. At that time, there was no distinction between storm and wastewater. Open cesspools and waste from horse-drawn transport ensured that urban surface water was a breeding ground for diseases such as cholera and typhoid. Typically, urban streams were channelised and later buried as part of the sanitary sewer network. These systems are known as combined sewers as they convey both storm and wastewater, originally directly to receiving environments, but nowadays to wastewater treatment plants. Combined sewers are designed so that stormwater helps convey wastewater and solid waste therein without blockage. The main problem is that they have outflows for heavy rainfalls which release untreated sewage into the receiving environments. This is why the public is often warned not to swim following rain in some locations. The old sewers can also be damaged over time so that groundwater is able to infiltrate, which further increases the hydraulic load.

Auckland City is a good example of this early phase of urban water management in New Zealand (see Bush, 1998; Auckland City Council website: http://www.aucklandcity.govt.nz/). After initial settlement in the 1840s, untreated stormwater and wastewater flowed via streams and open channels which constituted a hazard to human health. The channelisation of the Waihorotiu Stream, which became known as the Ligar Canal, typifies this phase. The stream flowed from a gully (now Myers Park) down the length of Queen Street. By the mid-nineteenth century, the stream/canal was bricked in


Figure 2: Catch-pits are the first point of entry for stormwater to the reticulated network. They are designed with a sump so that coarse sediments can have a chance to settle before the water level reaches the height of the outflow drain. In some parts of the country, they may be fitted with bags which filter debris, litter, leaves and larger particles.

and buried as a sewer. Water percolating through the soil under Myers Park still runs into the old sewer to the Waitemata Harbour discharging under the Ferry Building. Night soil collection was still common at the beginning of the twentieth century, but at the same time, the city had entered a phase of sewer building and many of the city suburbs were drained. Around 15% of the sanitary sewers in Auckland City today are combined sewers, and there are over one thousand stormwater outfalls to coastal waters. Metrowater, which services the city, has an ongoing programme to separate these sewers; they report (Metrowater 2004) that some $30 million was spent between 2000–2004 on separation. Today, most urban sewers in New Zealand including post-war suburban developments are separated, that is, stormwater is conveyed in a different network from wastewater. Separation began in the middle of last century, and has become standard practice for new or renovated systems. Online devices for water treatment and storage are often installed in separated networks. These include sand and media filters of various configurations, hydraulic ‘vortex’ separators and buried tanks for storage and settling. Water is still largely conveyed below ground and is not part of the cityscape.

Entering the Water Cycle City What are the alternatives to pipe conveyance? Over the past ten to fifteen years there has been a renaissance in stormwater management around the world that is slowly being taken up in New Zealand. Under this model, stormwater is viewed as a liquid asset, with uses ranging from habitat creation to rainwater harvesting to reduce demand for water supply (Figure 3). Instead of being buried out of sight, stormwater is increasingly conveyed and treated above ground in sustainable urban drainage systems (SUDS). SUDS can be incorporated into parks, along roadsides and in car parks as ‘water feature’ landscape elements. The number and choice depends on the type and age of development (i.e. new vs retrofitting); the available space; land use; land value; public perceptions; funding; accessibility; planning regulation; and intended function. SUDS can be designed for both water quantity and quality control. They range from small source or site control devices (e.g. bio-retention in grassed ditches [swales] and raingardens, permeable paving, green-roofs and filters) to larger catchment control or end-of-pipe devices (e.g. constructed wet detention ponds and wetlands). Table 2 gives four examples of SUDS currently used in New Zealand and Figure 4 shows a raingarden from Waitakere City. The primary purpose is to limit the impacts of urbanization on the wider environment, including receiving waters by reducing flow volumes, attenuating flow and removing contaminants. Their ability to function depends on the design (e.g. size with respect to drainage area, vegetation present, substrate or filter media type); the catchment conditions (climate and hydrology); the contaminant source; construction workmanship and regular maintenance. Hence the great range of efficiencies reported in Table 2. In addition to SUDS, careful urban planning can be used to minimise imperviousness and therefore reduce runoff. Water quality can be improved by changes in other urban activities such as increased use of public transport and compact housing. Use of SUDS along with sustainable urban design is variously known as low impact design (LID, United States, e.g. US EPA, 2000); water sensitive urban design (WSUD, Australia, e.g. Lloyd et al., 2002); and low impact urban design and development (LIUDD, New Zealand, van Roon, 2007, van Roon and van Roon, 2005). Within a water management perspective, LID seeks to restore or maintain the natural


hydrological process of the pre-urban stream network at the same time as providing a pleasant and healthy living environment.

Figure 3: Sustainable urban drainage triangle showing the three functions of SUDS for stormwater management (after Campbell et al., 2004). The three groups of functions shown are: increase permeability, reduce flood risk, reduce erosion, maintain low flows; maintain stream temperature, pollution prevention, water treatment; and biodiversity (habitat creation), aesthetic value, improve urban climate, provide recreation, water supply (harvesting).

For instance, following LID principles, SUDS are often planted with native vegetation to bring natural habitats back into cities. The combination of urban planning and SUDS can result in infrastructure that is multifunctional or co-beneficial in that it can reduce water demand and flood risk, improve storm and wastewater quality, and provide public amenities and urban blue-green spaces. To give a simple small-scale example of LID, installation of planted traffic islands can double as raingardens that reduce surface flows and improve water quality at the same time as calming traffic, which reduces the risk of road accidents. In the wider sense, LID implies integrated management of the urban ‘three waters’ – water supply, wastewater and stormwater – along with other public amenities and services including living and working spaces, transport, education, solid waste management, energy production and so on. It has even been touted as an integral part of ‘Crime Prevention Through Environmental Design’ programmes (e.g. urban renewal

in Tamaki by Housing New Zealand; Bracey et al., 2008). Examples of LID from around New Zealand can be found at the LIUDD Case Study Portal: http://cs.synergine.com/

Figure 4: Is it a garden with native vegetation or is it a stormwater treatment device? Raingardens like this one in Waitakere City are becoming a popular form of SUDS in some New Zealand cities for treatment of runoff from car parks and roofs.

While there has been encouragement from various authorities at the territorial, district and regional government levels, including research and education programmes, funding opportunities and design competitions, the pick-up rate of SUDS and LID in New Zealand has been slow. The experiences in New Zealand are similar to those in Australia. A study of the transition of stormwater management in Melbourne, which is arguably the most advanced example of sustainable urban water management in Australasia, has identified a number of steps which covered a forty-year period (Brown and Clarke, 2007):
1. Seeds for Change: Recognition of a problem is met by rapidly growing public activism and media campaigns. Local government and scientists take an interest in the problem, which leads to strategic responses by policy makers.
2. Building Knowledge Relationships: The socio-political shift leads to communication and new institutional working space between stakeholders and the formation of relationships which foster

Table 2: Examples of four SUDS used in New Zealand (from Semadeni-Davies, 2008). The range of removal efficiencies reflects the variety of designs, sizes and catchment conditions.

Detention ponds – Permanent pools of water for temporary water storage. Water quantity function: flood risk management (storage and attenuation). Water quality function: site and catchment control for water treatment (settling). Treatment efficiency (percentage contaminant removal): sediments and particulates 50–90%; dissolved metals 20–80%.

Wetlands – Similar to ponds, with vegetated banks, baffles and/or islands; often preferred over ponds for amenity value. Water quantity function: flood risk management (storage and attenuation). Water quality function: site and catchment control for settling, entrapment and bio-uptake of contaminants. Treatment efficiency: sediments and particulates 50–90%; dissolved metals 20–80%.

Raingardens – Engineered gardens underlain by porous media. Water quantity function: flow reduction due to infiltration and, sometimes, deep percolation. Water quality function: source control for filtration and bio-retention of runoff from small areas (e.g. car parks). Treatment efficiency: sediments and particulates 50–90%; dissolved metals 20–95%.

Swales – Engineered vegetated ditches often found alongside major roads and highways. Water quantity function: water conveyance and infiltration. Water quality function: source control for settling, entrapment and bio-retention. Treatment efficiency: sediments and particulates 50–90%; dissolved metals 5–90%.
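The removal-efficiency ranges in Table 2 translate directly into contaminant loads kept out of the receiving environment. The sketch below applies a low and a high removal efficiency to an assumed annual sediment load entering a single device; the load figure and device choice are invented, and real performance depends on design, catchment and maintenance, as the table notes.

def load_removed(inflow_load_kg, efficiency):
    """Mass retained by a treatment device for a given removal efficiency (0-1)."""
    return inflow_load_kg * efficiency

annual_sediment_load = 1_200.0  # kg/year reaching a hypothetical raingarden
for efficiency in (0.50, 0.90):  # the range reported for sediments and particulates
    removed = load_removed(annual_sediment_load, efficiency)
    discharged = annual_sediment_load - removed
    print(f"At {efficiency:.0%} removal: {removed:.0f} kg retained, "
          f"{discharged:.0f} kg still discharged downstream")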


Into the future Stormwater management in New Zealand is undergoing a change for the better. The main driver of this change is water quality control to safeguard receiving environments, particularly coastal waters. Adoption of SUDS and LID can integrate stormwater into the cityscape at the same time as providing facilities for flow control and water treatment. Growing public awareness and legislation such as the Resource Management Act (1991) have gone some way into bringing about change; however, it is slow in coming and there remain significant hurdles which need to be overcome before SUDS and LID become the standard for urban design. It will be interesting to see whether other urban changes such as population growth and intensification of land use, along

with the impacts of global changes such as economics and climate change, will become tomorrow’s drivers towards a more sustainable future. For further information contact a.davies@niwa.co.nz About the author: Annette Semadeni-Davies is a stormwater engineer at NIWA in Auckland. She holds a doctorate from Lund University, Sweden, where she researched cold regions urban drainage and climate change impact assessment for urban drainage systems. She returned to NZ in 2006 after twelve years in Scandinavia. Her current work includes modelling and analysis of urban water quality. She has published a number of papers in international peer review journals.


innovation and development of new activities and technologies. 3. Niche Formation: Strong and active connection between key stakeholders, technological research and developers. Rapidly emerging science leads to increased innovation. Practical demonstration of new activities and technologies. Targets and guidelines set by local government incorporate activities and technologies into policy. Take-up encouraged by funding opportunities. 4. Niche Stabilisation: The niche attracts mainstream institutional legitimacy. Incentives are provided including funding and offset schemes. Forums for dissemination (e.g. conferences, training) are launched. Assessment tools are provided for designers, planners and regulators. Innovation enforced by introduction of new regulatory requirements. The transition in Melbourne was prompted by a number of events including poor water quality in city streams, and a proposal in 1967 to discharge effluent from a wastewater treatment plant into the sensitive receiving environment of Port Phillip Bay. A growing environmental awareness and a successful media campaign by The Age newspaper enabled a public push for change which was taken up by policy makers. Over recent years, drought in Australia has also become a major driver, particularly for water harvesting. Brown and Clark (2007) further state that transition required a network of ‘champions who are able to work within the scientific, social, industrial and institutional spheres as agents of change. These champions had the vision, personality and determination to drive through the inertia to change. Moreover, the social context must be ripe for change with – amongst other factors – opportunities to demonstrate benefits, obtain funding, bridge organisations and set targets. Barriers to change they identified included a lack of social mechanisms for change, political and institutional focus of shortterm economic goals, scepticism amongst practitioners and the logistics involved in bringing together different sectors for co-operative planning (e.g. traffic, town planning, water supply, waste and stormwater management). These barriers are similar to those found in New Zealand (e.g., Heijs and Kettle, 2008).

Acknowledgements Thank you to my colleagues Guy Coulter and Caroline Leersnyder for proofreading and editing.

References Bibby, R.L., & Webster-Brown, J.G. (2005). Characterisation of urban catchment suspended particulate matter (Auckland region, New Zealand). A comparison with non-urban SPM. Science of the Total Environment, 343, 177-197. Bibby, R.L., & Webster-Brown, J.G. (2006). Trace metal absorption onto urban stream suspended particulate matter (Auckland region, New Zealand). Applied Geochemistry, 21, 1135-1151. Bracey, S., Scott, K., & Simcock, R. (2008) Important lessons applying low impact urban design: Talbot Park. NZWWA Stormwater 08 Conference. Brown, R., & Clarke, J. (2007). The transition towards Water Sensitive Urban Design: The Story of Melbourne, Australia, Report of the Facility for Advancing Water Biofiltration, Monash University, Melbourne: http://www. arts.monash.edu.au/ges/research/nuwgp/pdf/final-transition-docrbrown-29may07.pdf Bush, G. (1998) Online ‘History of Auckland City’ Available at: http://www. aucklandcity.govt.nz/auckland/introduction/bush/Default.asp Campbell, N., D’Arcy, B., Frost, A., Novotny, V., & Sansom, A. (2004) Diffuse Pollution: An introduction to the problems and solutions, IWA Publishing, UK. Dempsey, B.A., Tai, Y.L., & Harrison, S.G. (1993). Mobilization and removal of contaminants associated with urban dust and dirt. Water Science and Technology, 28(3-5), 225–230. Heijs, J., & Kettle, D. (2008) Low impact design in the Long Bay structure plan; what happened? NZWWA Stormwater 08 Conference Lloyd, S.D., Wong, T.H.F., & Chesterfield, C.J. (2002) Water Sensitive Urban Design - a stormwater management perspective. Cooperative Research Centre for Catchment Hydrology, Industry Report 02/10, September 2002. Makepeace, D., Smith, D., & Stanley, S. (1995). Urban stormwater quality: Summary of contaminant data. Critical Reviews in Environmental Science and Technology, 25(2), 93-139. Sansalone, J.J., & Buchberger, S.G. (1997) Partitioning and first flush of metals in urban roadway storm water. ASCE J. of Environmental Engineering, 123(2), 134-143. Semadeni-Davies, A. (2008) C-CALM review of removal efficiencies for stormwater treatment options in New Zealand. Unpublished manuscript prepared for Landcare Research Ltd. Timperley, M., Williamson, B., Mills, G., Horne, B., & Hasan, M.Q. (2005) Sources and loads of metals in urban stormwater. Auckland Regional Council. Technical Publication No. ARC04104. June 2005 AKL2004-07 US EPA (2000) Low Impact Development (LID) A Literature Review. US Environmental Protection Agency: EPA-841-B-00-005 Van Roon, M. (2007) Water localisation and reclamation: Steps towards low impact urban design and development. Journal of Environmental Management, 83, 437-447. van Roon, M. R., & van Roon, H. T. (2005). Low Impact Urban Design and Development Principles for Assessment of Planning, Policy and Development Outcomes. A Working Paper of the Centre for Urban Ecosystem Sustainability, a partnership between the University of Auckland and Landcare Research Ltd. April, 2005 Wentworth, C.K. (1922) A scale of grade and class terms of clastic sediments. J. Geol., 30, 377-392. Williamson, R.B., & Morrisey, D.J. (2000) Stormwater contamination of urban estuaries: 1. Predicting the build-up of heavy metals in sediments. Estuaries, 23, 56-66.

more census of marine life projects FMAP (Future of Marine Animal Populations): FMAP attempts to describe and synthesize globally changing patterns of species abundance, distribution, and diversity; and to model the effects of fishing, climate change and other key variables on those patterns. This work is done across ocean realms and with an emphasis on understanding past changes and predicting future scenarios.

CReefs (Census of Coral Reefs): An international cooperative effort to increase tropical taxonomic expertise, conduct a taxonomically diversified global census of coral reef ecosystems, and improve access to and unify coral reef ecosystem information scattered throughout the globe.


using socio-scientific issues in the classroom: opportunities and challenges
Pupils value discussions about topical scientific issues which bring ethics and values into our science classes, as Mary Ratcliffe, Professor of Science Education, School of Education, University of Southampton explains:

Introduction
Curriculum reform in England, like that in many countries, has focused on development of ‘scientific literacy’ for all pupils (e.g. Millar & Osborne, 1998; Millar, 2006). For many teachers ‘scientific literacy’ represents a shift from concentration mostly on pupils’ conceptual understanding of science towards understanding of the processes and practices of science, and the social and ethical implications. Socio-scientific issues represent a useful context for developing this focus. This article looks at the opportunities and challenges in using socio-scientific issues in the classroom. Socio-scientific issues are multi-faceted. They…
• have a basis in science, frequently at the frontiers of scientific knowledge
• involve forming opinions, making choices at personal or societal level
• are frequently media-reported, with attendant issues of presentation based on the purposes of the communicator
• deal with incomplete information because of conflicting/incomplete scientific evidence and inevitably incomplete reporting
• involve values and ethical reasoning
• may require some understanding of probability and risk. (Ratcliffe & Grace, 2003, p. 2)
Rather than consider socio-scientific issues in general,

I will exemplify some of the uses in the classroom by considering the outcomes of two projects – the first focusing on a cross-curricular day on genetic engineering; the second on teachers’ actions as they focus on teaching specific processes and practices of science using socio-scientific issues.

Cross-curricular event on genetic engineering The sequence of pictures in Figure 1 show PowerPoint slides that a small group of 14-year-old pupils produced as a result of an intensive day’s work on genetic engineering. They were asked to produce a short presentation on the pros and cons of human genetic engineering as an outcome of the day. You can make your own judgement of the summary as a piece of work from 14-year-olds who are average achievers in science. There are clearly flaws in the science and limitations in the ethical arguments – the presentation raises questions about what we, as teachers, want to achieve by using socio-scientific issues, and also what pupils gain from their discussion. In this particular case, the intention of the day was to enable young people to not only learn some concepts of genetics, but have opportunities to consider ethical issues and to present and evaluate personal perspectives. It is these latter two that are unfamiliar to many science teachers. Thus the project, from which the pupils’ work is taken, focused on the usefulness of a cross-curricular day in which science and humanities teachers worked together. The intention was that teachers would share complementary expertise – humanities teachers learning more about the science concepts and their teaching; science teachers learning more about the processes of supporting open discussions in which there is a variety of value positions.

Figure 1: Images produced by 14 year old pupils as a result of an intensive day’s work on genetic engineering.


Courtesy of Mary Ratcliffe.


Table 1: Structure of a cross-curricular day on genetic engineering
Activities (in sequence): Introduction – team building; Stimulus (keynote speaker or video); Science – What is possible? (science activity); Viewpoints – Genetic testing (group discussion); How should we decide? (ethical analysis); Can we? Should we? Our views (debate, posters, role play, presentations etc.).
Purposes (in sequence): Team working: identification of pupils’ initial views of genetic issues; pupils’ reactions to a human dilemma involving genetic disorders; identify scientific aspects, the individual view and the wider societal impact; understanding of genes, genetic crosses, genetic engineering; identification of individual views; to recognise the difficulties in where we draw the line – the slippery slope argument; understanding of principles of ethical decision making – goals, rights and responsibilities ethical analysis; to synthesise and present arguments related to what is possible and how we decide.

In practice, pupils were considered to benefit considerably from a cross-curricular day on a socio-scientific issue (Ratcliffe, Harris & McWhirter, 2004). The structure of the day, which allowed pupils to consider the science, ethics and values of genetic engineering, is shown in Table 1. Teachers were very positive about the motivational effects. They saw the key elements of a day focused on one socio-scientific issue as:
• studying one issue in-depth that was of intrinsic interest
• using a novel, thought-provoking stimulus to start the event
• providing opportunities for pupils to voice and share their opinions
• creation by pupils of a tangible product or outcome
• providing opportunities for pupils to work in teams
• using ethical analysis tools to allow pupils to discuss and debate moral issues.
Similarly, pupils reported that they gained most from studying a socio-scientific issue in depth when: they had the opportunity to discuss social issues relating to genetic engineering; there was an opportunity to work in teams to solve problems or create a tangible product; and there was suitable use made of external

input. For the pupils who produced the PowerPoint slides (shown above), it was a positive experience in examining evidence, exploring different viewpoints and engaging in teamwork. In general, though, pupils would have liked a far greater emphasis on participatory interactive learning styles (Harris & Ratcliffe, 2005). Giving pupils the opportunity to look at a socio-scientific issue in depth, including examination of scientific evidence of what can be done and ethical consideration of what should be done, allows them to develop skills inherent in scientific literacy. However, the evidence from observations and interviews in this particular project showed that teachers – both science and humanities – had not always been able to demonstrate real expertise in supporting values-based discussions and pupils’ consideration of scientific evidence. Teachers tended to focus on the outcomes of learning – the genetic ‘facts’ or the result of a debate or discussion – rather than supporting pupils in understanding the processes of informed debate, consideration of different value positions and evaluation of evidence.


Development of teaching strategies To realise the aims of scientific literacy requires skilled science teachers who are confident with both leading pupils to an understanding of consensual scientific

Box 1 – GRR GOALS are something we aim for; they are the consequences we want. In one way of thinking, a ‘good’ outcome may be judged morally correct regardless of how the goal was achieved. RIGHTS are things that are due to us. A legal right is to be able to vote when we are 18. As a human right, a child can expect to be cared for by his family. We are said to have a right if we are entitled to a certain kind of treatment, no matter what the consequences. RESPONSIBILITIES are the things we owe others – to tell the truth, to keep a promise and to help a friend, for example. Usually, we justify responsibilities by suggesting that sticking to them will achieve a worthy goal or that they are required because of someone’s rights.

Box 2 Case study – ethical analysis It is estimated that one person in 25 in the UK is an unaffected carrier for cystic fibrosis. Anne Thackray is a carrier for cystic fibrosis. David, her husband, is also a carrier, resulting in a 1 in 4 chance that each child they have has cystic fibrosis and a 1 in 2 chance that each child they have will be a carrier. Anne and David have three daughters. Emily has cystic fibrosis. Abigail is a carrier for cystic fibrosis. Lucy neither has, nor is a carrier for, cystic fibrosis. Emily knows that gene therapy may be a possible treatment for cystic fibrosis in the near future. She is very keen to have that treatment. David, her father, is not happy with gene therapy. He thinks that tampering with genes is against nature - he and Anne have cared for Emily and will continue to do so. Lucy, Emily’s sister, would like Emily to be cured but is worried that gene therapy is experimental and may have side effects. Should Emily have gene therapy if (or when) it becomes available? Each small group of pupils identifies the goals, rights and responsibilities of one of the members of the family in this case. Other people, such as a genetic counsellor, hospital finance manager etc. can also be considered. The teacher then summarises, through class discussion, in a table showing the perspectives for each person considered. Each group of pupils can then consider their response to the dilemma, identifying the values they consider important and the scientific evidence they have used to reach their position.
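Box 2 rests on simple Mendelian arithmetic: two unaffected carriers (genotype Cc) give each child a 1 in 4 chance of cystic fibrosis and a 1 in 2 chance of being a carrier. The Python sketch below simply enumerates the four equally likely allele combinations to confirm those figures; it is a classroom illustration, not part of the original activity.

from itertools import product
from collections import Counter

# Each carrier parent passes on either the normal allele 'C' or the CF allele 'c'
# with equal probability.
parent1 = ["C", "c"]
parent2 = ["C", "c"]

outcomes = Counter()
for allele1, allele2 in product(parent1, parent2):
    genotype = "".join(sorted(allele1 + allele2))  # 'CC', 'Cc' or 'cc'
    outcomes[genotype] += 1

total = sum(outcomes.values())
labels = {"CC": "unaffected, not a carrier", "Cc": "unaffected carrier", "cc": "has cystic fibrosis"}
for genotype, count in sorted(outcomes.items()):
    print(f"{genotype} ({labels[genotype]}): {count}/{total} = {count / total:.0%}")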



Figure 2: Five Dimensions of Practice in teaching the nature and conduct of science (from Bartholomew, Osborne & Ratcliffe, 2004). Each dimension runs from a less effective to a more effective pole:
• Knowledge and understanding of the nature of science: from ‘teacher is anxious about their understanding’ to ‘confident that they have a sufficient understanding of NOS’
• Conception of their own role: from ‘dispenser of knowledge’ to ‘facilitator of learning’
• Use of discourse: from ‘closed and authoritative’ to ‘open and dialogic’
• Conception of learning goals: from ‘limited to knowledge gains’ to ‘includes the development of reasoning skills’
• The nature of classroom activities: from ‘student activities are contrived and inauthentic’ to ‘activities are owned by students and authentic’

knowledge (the recognised body of science facts and concepts), and considering the evidence and impact of science-in-the-making. Historically, as science teachers we have been used to enabling pupils to understand accepted scientific knowledge – the correct science. (How many times have you sought to get the ‘correct answer’ when there is variation in results from the class experiment seeking to demonstrate reactivity series/composition of air/g ....[fill in your pet problematic experiment]?). Adding to that a need to deal with contested science and the different value positions around its societal impact can result in conflict over the purpose of science lessons for teachers and pupils. Characterising the intentions of lessons for scientific literacy as ‘how we know’ rather than ‘what we know’ might be a helpful focus for both teachers and pupils. Using appropriate and structured strategies with socio-scientific issues can support this shift in focus.

As an illustration, consider a teacher using, for example, the context of cystic fibrosis to teach about genetic inheritance. During the lesson she is aware that pupils want to discuss the human impact of the disease – indeed pupils may know of sufferers of cystic fibrosis, the restrictions on their actions and expected lifespan. However, she is unclear where any pupil-initiated discussion might lead, and is reluctant to open up the issue to look at the social and personal impact. Her view that science is value-free and that values should not be part of a science lesson is held by many science teachers (Levinson & Turner, 2001) – but is itself a value position! A clear structure of goals, rights and responsibilities (GRR – refer Box 1) is one strategy that can be adopted to support consideration of both ethics and evidence (refer Box 2). In this particular case study, if pupils are just asked ‘should Emily have gene therapy?’ they are likely to answer simply from their own ‘prejudices’ and without consideration of ethics and scientific evidence. By providing a structure of examining the goals, rights and responsibilities of a particular person and then showing the views across the class, the conflicts between different positions become apparent ‘objectively.’ This GRR structure does not answer the question, but allows consideration of values, ethics and the impact of scientific evidence in a systematic (and fairly quick) way. Pupils are then able to consider their position from the range of views and evidence presented. Use of the GRR structure has encouraged science teachers to engage in ethical discussion and support the consideration of scientific evidence. One project has examined the challenges for teachers in shifting from a focus on ‘what we know’ to ‘how we know’ – the teaching of specific ideas-about-science explicitly, including making decisions about scientific issues (Bartholomew, Osborne & Ratcliffe, 2004).

We found that some teachers were more successful than others in engaging pupils effectively and that this depended on a number of characteristics of their practice – Figure 2 shows these five dimensions of practice. Teachers’ understanding of the science of the issue and how evidence is generated and evaluated (‘how science works’) is just one part of effective teaching. Of greater significance is how they approach their role, particularly the types of pupils’ discussion they support and the skills they aim to develop. Teachers who tended to show characteristics towards the more effective pole of these inter-related dimensions (Figure 2) were those who were more effective in engaging pupils with the issue under consideration. Enabling science teachers to reflect on these five dimensions of practice to adapt their teaching style requires supportive professional development.

Conclusion Changing teaching style and strategies is not easy, but is probably necessary to enable pupils to become scientifically literate from their experiences of science lessons. The overwhelming evidence from pupils is that they value discussions of topical scientific issues and the topicality increases the relevance of their science education. In England, the science curriculum for 11-16 year-olds now has a far stronger focus on the processes and practices of science and consideration of socioscientific issues (‘how science works’). The change in the curriculum by itself does not change practice. In order to support effective implementation of new courses, attention has been given to appropriate professional development. Building on the issues raised in this article and wider concerns about effective support for teachers (e.g. Adey, 2004), a national network of science learning centres: www.sciencelearningcentres.org.uk is providing professional support for teachers as they consider the opportunities and challenges of the changing science curriculum. For further information contact: m.ratcliffe@soton.ac.uk

References Adey, P., with Landau, N., Hewitt, G., & Hewitt, J. (2004). The Professional Development of Teachers: Practice and Theory. Dordrecht, Kluwer. Bartholomew, H., Osborne, J., & Ratcliffe, M. (2004). Teaching students ‘ideasabout-science’: Five dimensions of effective practice. Science Education, 88, 655-682. Harris, R., & Ratcliffe, M. (2005). Socio-scientific issues and the quality of talk – what can be learned from schools involved in a ‘collapsed day’ project. The Curriculum Journal, 16, 439-453. Levinson, R., & Turner, S. (2001). Valuable Lessons: Engaging with the social context of science in schools. London: The Wellcome Trust. Millar, R. (2006). Twenty First Century Science: Insights from the design and implementation of a scientific literacy approach in school science. International Journal of Science Education, 28 (13), 1499-1521. Millar, R., & Osborne, J. (eds) (1998). Beyond 2000: Science Education for the future. London: King’s College. Ratcliffe, M., & Grace, M. (2003) Science Education for Citizenship Buckingham. Open University Press. Ratcliffe, M., Harris, R., & McWhirter, J. (2004). Teaching ethical aspects of science - is cross-curricular collaboration the answer? School Science Review, 86, (315), 39-44.



What do our New Zealand students experience in school science as they learn about scientific inquiry? This article is based on two classroombased case studies in 2005, as Anne Hume, Waikato University explains: Context for the Case Studies In the context of a national science curriculum (Ministry of Education, 1993) that sought to promote students’ engagement in authentic inquiry, my case studies involved Year 11 science classes where students (15–16 year olds) were learning how to perform investigations for Science Achievement Standard 1.1 Carrying out a practical investigation with direction (SAS 1.1) towards their National Certificate of Educational Achievement (NCEA). The Year 11 context was chosen because for many of our students this is their last opportunity for formal schooling in science, and likely to be a time when they form lasting impressions of the nature of scientific inquiry. These ideas and beliefs could have implications for their scientific literacy as future citizens, in terms of the extent to which they understand and appreciate the ways scientists work to produce scientific evidence, solve problems and build knowledge. My case studies were set in two large New Zealand secondary schools, River Valley Boys’ High School and Mountain View High School (pseudonyms). Both school populations were similar in that they were predominantly of New Zealand European ethnicity (77% and 68% respectively) and each had 12% Maori. Mountain View also had a significant proportion of Asian students (15%). Each case study involved a female teacher, and four to five Year 11 students (15-16 year olds) who were studying SAS 1.1 towards their NCEA qualification. The students at each school were in classes representing a very broad band of mid-range of abilities − approximately 80% of the whole Year 11 cohort. The remaining 20% were streamed into two classes of high and low ability respectively. At River Valley, Jenny (pseudonym) the teacher held a Master’s Degree in genetics and was in her eighth year of teaching. At Mountain View, the teacher Kathy (pseudonym) had begun her teaching career three years earlier after completing a conjoint Bachelor’s Degree in science and teaching. My three research questions were: • What science are New Zealand science students learning in NCEA classroom programmes for SAS 1.1? • Why and how are New Zealand science students learning the science they learn in NCEA classroom programmes for SAS 1.1? • What match is there between the intended curricula (i.e. those of the SiNZC and the teacher) and the operational science curricula (i.e. those experienced by New Zealand science students)?

Classroom sessions: an overview At both schools many decisions to do with classroom practice were not made by the individual teachers, but were made collectively at departmental level in the form of departmental guidelines. These guidelines were based on recommendations, including exemplary materials, from the New Zealand Qualifications Authority (NZQA) which departments and classroom teachers were obligated to follow under school accreditation

requirements. Thus, at both schools the content of departmental guidelines was very similar, and both case study teachers adhered closely to departmental guidelines in their teaching and learning programmes. At River Valley, the teaching and learning took place during twelve one-hour lessons over a three-week period, late in term one of a four-term year. In contrast, students at Mountain View experienced a staggered teaching and learning programme, eleven hours in total. Their teaching started with five one-hour lessons late in the first term, followed by a three-week break before another four consecutive lessons early in the second term. Two weeks later, Mountain View students attended a single timetabled session (two hours) within the school’s mid-year internal exam programme where they underwent the formal assessment for SAS 1.1. Despite the variation in the overall timing and duration of the teaching and learning sessions at the two schools, the sequence of lessons in both schools showed strong parallels. Each sequence could be divided into three distinct phases: the preparatory phase (instructional sessions); the practice phase (called the ‘Formative Assessment’ by the teachers in these studies) and the formal assessment phase (the summative assessment).


nature of scientific inquiry in Year 11 science

Preparatory phase
In this first phase, students in both classrooms were introduced to the requirements of SAS 1.1 and key concepts and skills associated with investigating relationships between two variables. Lesson content in these largely instructional sessions focused on: terms, definitions and procedures to do with fair testing; specific skills such as making observations and measuring, tabulation and averaging of data, and plotting graphs; the planning and reporting of fair tests using templates; and how to meet the assessment requirements of SAS 1.1 as depicted in assessment schedule exemplars. Less time was devoted to the first phase at River Valley (three lessons, compared to five at Mountain View), and Jenny also revised specific science concepts that featured in the investigation her students were to perform in phase two (rates of chemical reaction and preparation of solutions of given concentration by dilution).
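Two of the routine calculations behind these lessons, averaging repeat measurements and preparing a diluted acid solution, are easy to sketch. The listing below uses the standard dilution relation C1V1 = C2V2 and invented class data (three repeat reaction times at each acid concentration); it is an illustration of the arithmetic, not material from either school's programme.

from statistics import mean

# Invented repeat timings (seconds) for magnesium reacting with hydrochloric acid
# at three acid concentrations (mol/L).
repeat_times = {
    0.5: [95.0, 102.0, 98.0],
    1.0: [48.0, 51.0, 50.0],
    2.0: [24.0, 26.0, 25.0],
}

print("Averaged reaction times:")
for concentration, times in repeat_times.items():
    print(f"  {concentration:.1f} mol/L acid: mean {mean(times):.1f} s over {len(times)} repeats")

def stock_volume_needed(stock_conc, target_conc, target_volume):
    """Volume of stock solution required, from C1*V1 = C2*V2 (same units throughout)."""
    return target_conc * target_volume / stock_conc

# e.g. making 250 mL of 0.5 mol/L acid from a 2.0 mol/L stock:
v1 = stock_volume_needed(stock_conc=2.0, target_conc=0.5, target_volume=250.0)
print(f"\nDilution: measure {v1:.1f} mL of 2.0 mol/L stock and make up to 250 mL with water.")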

Practice phase
In the second phase, students at both schools participated in a mock assessment known as the 'formative assessment', designed to give students practice at performing a whole investigation under test-like conditions. Again, there were many commonalities between the two case studies:
• the mock assessment took place over four lessons, with each lesson covering in turn the planning, data collecting, reporting and feedback stages of the investigation
• the science context for the investigations was the same (both teachers used the same exemplar materials for investigating the effect of factors such as temperature or concentration on the rate of reaction between magnesium metal and hydrochloric acid)
• students worked in teams of four for planning and data gathering, but as individuals for the reporting
• the format, timing and reporting requirements of the mock assessment activity closely matched those of the summative assessment in phase three
• teacher direction was highly evident, including extensive and targeted feedback for students related to the assessment schedules for the task.
In addition, at River Valley students initially peer assessed each other's reports using a common assessment schedule and provided verbal feedback to one another before the teacher provided global feedback to the class.

Table 1. The teachers' intended learning at River Valley and Mountain View

Concepts
Fair tests
Purpose of an investigation as an aim, testable question, hypothesis or prediction
Variables – key, dependent and independent
Primary and secondary data, qualitative and quantitative data, reliability of data
Tables as a systematic format for recording data
Graph types (bar and line); graph components such as title, x (independent variable) and y (dependent variable) axes, units and values for axes, plotted points, and lines of best fit
Sources of error and systematic errors
Equipment names, types and purpose
Background/contextual science concepts to the investigation, e.g. factors affecting rate of reaction and behaviour of pendulums
*At River Valley Jenny added 'Good science' (the science that real scientists do) and 'school science' (the portrayal or simulation of science experienced by students in school); systematic errors; and the concept of controls. At Mountain View Kathy provided an experimental plan which included an aim, a list of equipment and an experimental method; a format for scientific reports; and coverage of the relationship between two quantities when change in one causes change in the other.

Skills
Designing, evaluating, modifying and carrying out a systematic plan for a fair test
Determining the purpose of a fair test investigation
Identifying, controlling, changing, observing and measuring variables
Choosing and using equipment appropriately
Determining an appropriate range of values for variables
Repeating experiments
Recording and processing data – tabulating, averaging, graphing
Interpreting data, and recognising trends and patterns
Discussing findings, linking findings to existing science ideas and drawing conclusions in a written report
Evaluating the investigation in the written report (sources of error, improvements)
*At River Valley Jenny also included some trialing of plans.

Procedural Knowledge
Knowing how to plan a workable, fair test
Knowing that planning requires trialing, evaluating and modifying
Knowing why reliable data is needed and how to obtain consistent data
Knowing that the findings should be linked to science ideas
Knowing how to work as a team
Knowing how to interpret the template and assessment schedule requirements of tasks for the internal Science A.S. 1.1 at achievement, merit and excellence levels
*At River Valley Jenny also dealt with how to recognise and account for errors in measurement, and with recognising that the planning and carrying out of investigations required for Science A.S. 1.1 more closely resembles 'good science' than most 'school science'. At Mountain View Kathy added knowing that the findings should be linked to the science behind the investigation, and knowing when assumptions can be made and the limitations of those assumptions.


Formal assessment phase


In the third phase, for their formal assessment, known as the 'summative assessment', students again performed fair test investigations in groups along similar lines to the practice investigation in the second phase. They initially planned as individuals, then collaborated as a group to produce a single plan and obtain data, and finally wrote up the reports individually. The planning and reporting templates were virtually identical in the two schools; however, the science contexts for the investigations were different. Students at Mountain View performed their investigation in the context of reaction rates again, this time the relationship between surface area and the rate of reaction, while students at River Valley performed their investigation in the context of pendulums, which they had not previously encountered in the course. Students at Mountain View planned and executed their investigation with relative ease, whereas my study group at River Valley experienced difficulties carrying out their plan: investigating the relationship between the length of a pendulum and its period, that is, the time taken to complete a full swing. They were unable to operate the pendulum successfully, and consequently could not record sufficient data. However, they were very savvy about assessment techniques, and showed adeptness at 'playing the system', as the following excerpt shows:
Within the closing stage of the practical session the group scrambled to complete and record sufficient runs for their data processing and interpreting phase. The four group members frequently interchanged roles as they each took it in turn to record their own copy of the results (which they needed for the write-up in the following session). All other groups had finished their data collection and were listening as Jenny covered points for the write-up. Martyn, Peter, Mitchell and Eddie continued operating their pendulum and consequently missed hearing what Jenny was saying during her briefing. In their rush to finish, confusion set in. "Is this the third or fourth one?" asked Mathew, who was recording and calculating. When the pendulum continued to collide with the support arm, Peter commented, "You'll have to estimate," while Eddie was convinced they should "make up the rest." Mitchell agreed, "Let's make up the rest, and take sixteen seconds as the average," and Martyn confirmed, "It will still give us our results." Each group member had a complete set of written data by the end of the practical.
Jenny allowed the class to view the background science notes (a set of notes explaining the science concepts and terms related to the pendulum) prior to the end of the period, before collecting in all the papers to retain overnight. At the last minute, the students resorted to recording their remaining results from non-existent data, and then used these fabricated results to complete the reporting section of the assessment.
Another significant difference between the case studies is that, unlike the students at River Valley, the five students in the Mountain View study did not work in the same groups for the summative investigation. Kathy purposefully decided groupings for the summative assessment at Mountain View on the basis of results from the formative assessment, so that each group intentionally had at least one student who had demonstrated advanced investigative capabilities.

Table 2. Key influences on Why and How students learned

The Pedagogical Approaches, Strategies and Capabilities of their Teachers
Departmental guidelines produced many commonalities in the pedagogical strategies teachers employed – they effectively decided: the manner in which the teaching and learning programmes were to be delivered and assessed; the timing of the programme delivery; and the adoption of the planning template and exemplar assessment tasks and schedules. As a result, teachers' pedagogical approaches were predominantly didactic in nature. Students identified particular common teaching strategies that helped their learning, including: provision of the opportunity to do practice investigations and write-ups for assessments in groups; direct instruction from knowledgeable teachers; provision of a planning template and assessment schedules; and feedback they received from teachers and fellow students after assessments. Convergent formative assessment practice underpinned why and how students were succeeding in many aspects of their learning. Explicit sharing of learning goals, success criteria and learning progress with students was achieved via the use of exemplars. The timing of the teaching and assessment early in the school year appeared to limit students' opportunities to consolidate and improve their learning in a wide range of contexts, and to develop the tacit, intuitive knowledge required for effective investigating in science. The teaching decision to set both the formative and summative investigations in the same familiar science context possibly gave students at Mountain View the opportunity to make meaningful links with their new experiences more readily than students at River Valley, where the background science in the summative assessment was unfamiliar to students and they had had little exposure to the phenomenon being investigated.

The Learning Strategies that Students Employed
Students often played a mediating role in their learning, at times consciously choosing when and how to engage from a range of personally preferred learning strategies. Learning choices were often related to perceptions students had about what was valuable or important to learn and who was best suited to assist their learning at given times, and to feelings of self-esteem and self-confidence:
- NCEA was an important personal goal for most students, and they were prepared to learn what was required of them in order to demonstrate achievement of the standard at particular levels of attainment.
- high value was placed on being able to work and collaborate with peers – students appreciated the convenience and ease of sharing knowledge and expertise to problem solve, and to clarify misconceptions and/or confirm understanding in the relatively safe forum of pairs/small groups of students. They realised some interactions between peers could also be detrimental to learning, and lack of effective teamwork was seen to compromise intended learning on at least one occasion.
- students were ambivalent about the value of peer assessment in promoting and facilitating their learning, generally because they questioned the credibility and capability of their peers to assess as accurately as their teachers.
While it was difficult to judge individual students' capabilities on the basis of negotiated group plans, the collaborative planning process tended to give more group members the potential to secure relevant and reliable data, and in turn the chance to process and interpret data, draw conclusions and evaluate their findings.

The Content of the Teachers' Intended Curricula
Teachers delivered content in the teaching and learning programmes specifically targeted at fair testing and the assessment requirements of SAS 1.1. Teachers' decisions about lesson content were governed by their respective school departmental guidelines for delivering SAS 1.1 – all teachers in the departments were obliged to follow these guidelines. Departmental guidelines were similar in each school since each school looked to materials provided by government agencies to support learning programmes for SAS 1.1, i.e. planning templates, and exemplar assessment tasks and schedules.
*The exposure of students at River Valley to the notions of 'good science' as opposed to 'school science' in their learning probably stemmed from their teacher's own knowledge base and beliefs about the nature of scientific investigation, and her personal experience of scientific research.


What were students learning about scientific inquiry?
Findings from both case studies indicated that the learning students were achieving closely matched that which their teachers intended them to learn. The content of the teachers' intended curricula is summarised in Table 1, and represents a synthesis drawn from data collected during teacher interviews, observation of classroom lessons, departmental guidelines and notes, and student workbook and text (refer Cooper, Hume & Abbott, 2002; Hannay et al., 2002).

Why and how were students learning?
Interviewing the students and their teachers, observing them interact in class, and examining support materials and student records revealed that why and how students learned about fair testing and the assessment requirements of SAS 1.1 were direct consequences of three influences: the content of their teachers' intended curricula; the pedagogical approaches and techniques that their teachers used; and the learning strategies that students employed. The key findings are summarised in Table 2.

Conclusions and implications
This study sought to gain some insights about the possible nature of the student-experienced curriculum as our Year 11 students learn about scientific inquiry, from the perspectives of some actual teachers and students in the classroom. By examining what these students were learning about science investigations, my research found that in both case studies their learning appeared to be focused on a narrow view of scientific inquiry, that is, fair testing, and on mastering assessment techniques. Why and how this learning occurred stemmed largely from the strong influence that the national qualification, NCEA, and its interpretation of the science curriculum were having on decisions affecting the two classroom programmes. This study supports the observations of Black (2001, 2003) that qualifications are considered high stakes by schools and teachers, and that assessment for qualifications is driving the senior school and classroom programmes in New Zealand.
Decisions were made in this study at school and departmental levels which reflected the importance the two school communities and professional staff placed on their students achieving success in this qualification, and these decisions directly impacted on the content of classroom curricula and the methods teachers used to deliver that content. The NCEA interpretation of the science curriculum (in the form of SAS 1.1 and supporting materials) and departmental decisions determining the time allocation and timing of the science investigation programme in classes influenced the instructional approaches teachers chose to use and the strategies used by students to learn. The structure of the qualification, especially the standards-based mode of assessment, promoted some aspects of formative assessment practice, with teachers employing strategies such as explicit learning goals, exemplars and feedback. However, relatively short teaching and learning programmes before summative decisions were made restricted students' ability to act on formative assessment information to improve their learning. Consequently, student learning tended to focus on procedures, and there was little evidence of the higher order thinking skills linked to creativity, evaluating and self-monitoring of learning.
However, in the intervening period since the collection of data for this study, NZQA has made some modifications to SAS 1.1 Carrying out a practical investigation with direction and introduced more flexibility into the standard and support materials. In October 2005, the standard was re-registered with a number of changes which seem to introduce more recognition of the complexity of scientific investigation into the standard, and to give more latitude for teachers to offer students some variety in their approaches to scientific investigation. The revised standard also provides more specific detail about what constitutes 'quality' in a scientific investigation. The achievement criteria are more generic than those in the previous form of the standard, and some former aspects of the accompanying explanatory notes have been given increased emphasis, while some have been dropped and new features introduced. For example:
• greater specificity is provided about what constitutes a directed investigation.

• the terms practical investigation and quality practical investigation are introduced and defined in detail, reflecting the content of the modified achievement criteria. The terms workable and feasible to describe plans are dropped.
• the terms sample and collection of data are introduced, alongside the terms independent and dependent variable respectively, in the definition of a practical investigation, and sampling and bias as possible factors to consider in data gathering in the description of a quality practical investigation. The inclusion of these terms potentially enables students to use approaches to investigation other than fair testing, but because sampling and bias can have close connotations with fair testing it is possible that fair testing may still prevail in classroom practice unless appropriate exemplary support materials and text are accessible to professional development providers, teachers and students.
• validity of method, reliability of data and science ideas are specified as requirements to consider, where relevant, when evaluating the investigation.
These changes signal more acknowledgement of the nature of scientific inquiry in NCEA assessment procedures for SAS 1.1, and possibly greater opportunity for students to experience authentic scientific investigations (that is, the 'doing of science' in a manner that mirrors the actual practice of scientific communities; Atkin & Black, 2003) and to develop higher order thinking skills. This move should give teachers greater autonomy in designing teaching and learning programmes to meet students' learning needs and interests.
An overview of exemplary material now present on the Ministry of Education (MoE) website for Achievement Standard 1.1 reveals one assessment task linked to the new version of the standard. This assessment resource is based on a pattern-seeking investigation. The resource includes a planning and reporting template and assessment schedule similar in format to the fair testing versions, but with terms relevant to pattern-seeking and the new requirements of the standard.
Awareness that school-based decisions that focus too much on meeting administrative, logistical and moderation requirements of high stakes qualifications can have detrimental effects on pedagogy and student learning may hopefully prompt schools to re-evaluate the wisdom of these decisions. Finally, the views and insights that students have given in this study, about the teaching and learning they experienced and the role they play in these processes, should provide useful information for teachers to reflect on as they evaluate the effectiveness of their teaching and assessment strategies in helping students to achieve quality learning in scientific inquiry.
For further information contact annehume@waikato.ac.nz
Author's note: For a more detailed account of the study refer to 'Student Experiences of Carrying out a Practical Science Investigation Under Direction' by A. Hume and R. Coll in the International Journal of Science Education, DOI: 10.1080/09500690701445052.

References
Black, P. (2001). Dreams, strategies and systems: Portraits of assessment past, present and future. Assessment in Education, 8(1), 65-85.
Black, P. (2003). Report to the Qualifications Development Group, Ministry of Education, New Zealand, on the proposals for development of the National Certificate of Educational Achievement. Retrieved March 14, 2003 from http://www.minedu.govt.nz/index.cfm?layout=document&documentid=5591&data=l
Cooper, B., Hume, A., & Abbott, G. (2002). Year 11 science: NCEA level 1 workbook. Hamilton, New Zealand: ABA Books.
Hannay, B., Howison, P., & Sayes, M. (2002). Year 11 science: Study guide, NCEA Level 1 edition. Auckland: ESA Publications.
Ministry of Education. (1993). Science in the New Zealand curriculum. Wellington, New Zealand: Learning Media.


historyphilosophyscience

Hume on induction: nonsense on stilts

Eighteenth-century Scottish philosopher David Hume's celebrated problem of induction creates not a glimmer of light for understanding science, as Dr Philip Catton, who teaches History and Philosophy of Science at the University of Canterbury, explains:
In my last article, I summarised Hume's conundrum concerning induction, arguing that it is produced by Hume's analytical dispositions. I remarked how automatic but misleading it is that analytic philosophy considers science only ever one inference at a time, one scientist at a time. Science in fact involves collective inference-making, and marshals warrant for its conceptions at the level of the collective. Science certainly is not simply whatever individual scientists do, times the number of scientists that there are. In other words, science has synthetic qualities, based on trust within communities, and based across those communities on shared and distributed epistemic responsibilities. These are the dimensions that an analytically oriented philosopher such as Hume will inevitably miss. Because science is synthetic, it develops many-sided connections between theory and evidence. Therefore, we do not understand well the link between evidence and theory by reducing it to single inferences. I illustrated this first by examining how we know that we can't fly a balloon to the moon, and then by examining how we know that all emeralds are green.
Hume's contentions crystallise what is wrong with a purely analytic approach in philosophy to the question of how we learn from experience. Here, I shall enumerate seven further criticisms. In several previous articles I have discussed the significance for science of measurement. From the seven further criticisms of Hume in this article, you will see my message concerning measurement reinforced. Hume had not the least familiarity with the performance of scientific measurement, and if we think about it we can see many ways in which this ignorance of his egregiously weakens his perspective.
1. Hume's problem concerns simple enumerative induction – a will-o'-the-wisp.
The form of inference that, according to Hume's argument, can never be reasonable, is that of simple enumerative induction. Hume's problem concerning induction is about licensing in general the following two inference forms: from 'All observed Fs are G' (alone, as premise) to the categorical conclusion 'All Fs are G', and from 'X per cent of observed Fs are G' (alone, as premise) to the categorical conclusion 'X per cent of Fs are G.' This problem is, however, wholly inconsequential, for simple enumerative induction is never used, either in everyday life or in science. We never infer from the observations alone; our epistemic situation is always rich with relevant collateral information and other already present theoretical beliefs.
Hume is the sort of analytically oriented philosopher who would invite us to consider an induction of the form: 'swan one is white, swan two is white, (all the way up to), swan fifty-seven is white, therefore all swans are white.' This sort of example is commonly used in philosophy
of science classes. It is supposed to be a virtue of the example that it is in its every salient characteristic completely set before us. We can therefore go to work on it analytically, and assess whether the inference is rational. If it is in the least way defensible then we will be able to identify, indeed give ourselves, under the analysis, just what the defensibility is. Otherwise we will conclude that it is indefensible. In fact these expectations are naïve, as anyone with the least scientific discernment will readily see. For someone using science would immediately add collateral considerations and discern potential richness to the inference. What we might infer about the colour of all swans from an experienced sample in which all were white, we would infer on the basis of antecedent understandings. Thus we know, for example, about heredity, and consequently, about what might cause the characteristic in question (whiteness) if ever it were endemic to swans to remain so. We are touched for this theoretical reason by the thought that it is at least somewhat plausible that all swans are white. Given the way inheritance works, and the common heritability of surface colour, and the known uniformity of experienced swans so far, it is, we might judge, possible, but hardly certain, that all swans are white. Knowing what we know and seeing the uniformity in the sample, so far we feel a palpable urge to generalise. We are however, easily able to discern why the sample could be as it is without the generalisation being true. So if we generalise we will do so tentatively, with little confidence. By contrast, if we observed that swan one had a heart, we would not need to look any further than that to infer that all swans have hearts. Indeed, if swan one was observed to bleed, we could with almost equal safety infer that all swans have hearts. We discern an impossibility here, from knowing what we do, that any swan could be blooded without them all being blooded, and that any blooded creature could lack a heart. Moreover, in quite the other direction from the ‘all swans are white’ inference, we could consider the case where swan one has a wart on its left eye, swan two has a wart on its left eye … (all the way up to) swan fifty-seven has a wart on its left eye, and thus all observed swans have a wart on their left eye. We know enough about the aetiology of warts to know how foolish it would be to infer from this that all swans have a wart on their left eye. The three swan examples are (as given) formally the same, but there is a world of difference between them. So, so much the worse for hoping to bring all the salient considerations into view by explicit description of an example. The swan examples discussed above illustrate my point that in actual inference-making, especially within the cognitively rich environment of an extant science, mere enumerative induction is never used. Actual inference-making in science proceeds within a social, intellectual and practical context that is rich beyond description. When analytic philosophers set out to consider a single example of inference-making in science that they think that they can completely

describe, they cut past this important point. This is almost always to omit dimensions and qualities that that inference in question in fact would have. The philosophers thereby hurt their understanding of science far more than they help it. 2. Hume contends that his problem about enumerative induction impugns almost the entire sweep of empirical knowledge. Hume argues that unless simple enumerative inductive inference can be licensed, we are without good reason to augment our ways of thinking in any way beyond, on the one hand, the trifling truths of logic, and on the other hand truths about specific empirical matters so far observed. And as is well known, any numbers of very fine analytic philosophers have felt the force of Hume’s concerns about this. Bertrand Russell, for example, admitted that without a solution, which he could see no way to provide, to Hume’s problem of induction, he also could see no way to reason a man who thought himself a poached egg out of that persuasion. (Russell, A History of Western Philosophy, 1972, p. 673.) That is to say, Russell believed that Hume’s problem impugns virtually the entire sweep of our presumed knowledge. Yet over against not only Hume, but also the many analytic philosophers who have followed him, the claim that Hume makes here about simple enumerative induction, is actually nonsense. For, among other things, it is completely straightforward to generate instances of simple enumerative inductions that no sane person would make. We do not as a first step need to license this inference form, and it would imply insanity were we to do so. In order to have the right to claim to possess inductive knowledge, we in fact would not need to license the form of non-deductive argument to which Hume draws our attention. It is true that in order to have the right to claim to possess inductive knowledge various inductive inferences that we make each needs to be, in some way, warranted. But none of these inferences is a simple enumerative induction, and the ways that any two of them are warranted need not be one and the same. Moreover, there is no reason to expect that they should each boil down to a single variety of argument. Rather, they each might synthesise a whole nexus of particular arguments. (In a previous article I have remarked this feature within the development of what I called a “mostmeasured understanding”). We in fact are entitled to re-appropriate the word ‘induction,’ which was misappropriated by adherents to Hume, and apply it to a synthetic style of inference (generally conducted by a community of inquirers) that richly compounds the responsibility of theory to measurement. And unlike enumerative induction, this description of inference is apt to science. Hume’s problem in this way proves entirely irrelevant to science. 3. Hume specifically considers only inferences to generalisations that are of the logically simplest form. The upshot, according to Hume, of his problem concerning simple enumerative induction, is more specifically that contingent generalisations cannot be known. The contingent generalisations that Hume has in mind are logically utterly simple in form: All Fs are G, or, X per cent of Fs are G. 
Philosophers who attempt to solve or dissolve the ostensible general problem of induction typically accept that theoretical inference is primarily simple enumerative induction to conclusions that are simply structured generalisations such as 'All Fs are G' or 'X per cent of Fs are G.' Much of the contemporary literature on laws of nature repeats this mistake, treating what laws are, or what laws are extensionally, as simple generalisations, when in fact the most illuminating laws are (or are extensionally) significantly richer than this from the standpoint of logic. Typical conclusions, law-like or otherwise, that people infer to inductively in everyday life or in science are logically much richer than 'All Fs are G' or 'X per cent of Fs are G.' I intend to illustrate shortly why I say this. First, I shall discuss why it is important.
4. Logically richly structured generalisations are, however, key to acts of measurement.
If theoretical contentions were all generalisations of the extremely simple forms 'All Fs are G' or 'X per cent of Fs are G,' then it would be impossible even with theoretical contentions in tow to make non-trivial use of empirical facts in the deduction of other theoretical contentions. In short, measurements would be impossible. For in any measurement, our purpose is to deduce a theoretical conclusion from a phenomenon, in a way that employs a background of already theoretical further assumptions. I have discussed this point fully in earlier articles, thoroughly illustrating this conception of measurement by examples from science. For present purposes it is important to note how far Hume was from being the sort of thinker ever to have himself performed a scientific measurement. Hume's unfamiliarity with experimental methodology shows itself to us in how he characterises as bare enumerative induction the supposed inference form for scientific theorising. Were our theoretical thinking never of the requisite logical richness for acts of measurement to be possible, we indeed would have to call induction merely a leap from the particular to the general, so that all inductions indeed would be simple enumerative inductions, and our situation would be every bit as hopeless as Hume contends that it is. But if our background assumptions include logically richer generalisations, then an actually demonstrative inference that uses empirical considerations to reach a theoretical conclusion does become possible. Thus, should I already believe that if any F is G then they all are – a generalisation that logically involves not one quantifier but two – and should I observe even one single F that is G, then it would be only logical for me to conclude that all Fs are G. An inference (either explicit or tacit) of such a form I call an 'act of measurement.' Taken on its own, such an inference is, of course, deductive rather than inductive. But inductions in everyday life and in science are typically reliant on a nexus of more or less careful measurements. Whenever a view in everyday life or in science seems to us careful and well considered, we call it 'measured.' And we call it this precisely because it is measured – not only in one, but in both senses of the word. None of the measurement inferences that support it is completely cogent on its own of course, because each of these inferences employs background assumptions that are fallible. Again, all this I have amply illustrated in earlier articles; here, it is enough to observe how completely Hume leaves it out of account. For, among other things, Hume blocks our considering generalisations that are of the logical richness that is requisite for any act of measurement.
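The two-quantifier form just described can be written out explicitly. The following line of first-order notation is offered only as an illustration of the point, with F, G and the observed individual a as placeholders:

$$\underbrace{\exists x\,(Fx \wedge Gx)\;\rightarrow\;\forall x\,(Fx \rightarrow Gx)}_{\text{background theory: if any }F\text{ is }G\text{, they all are}}\;,\qquad \underbrace{Fa \wedge Ga}_{\text{one observation}}\;\;\vdash\;\;\forall x\,(Fx \rightarrow Gx)$$

Taken alone the step is deductive, exactly as noted above; the inductive risk lives entirely in the fallible two-quantifier background premise.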
5. In light of measurement, Hume's fork is a false dichotomy.
The inferences that people actually make show that Hume's fork is a false dichotomy. And to say this is not simply tendentiously to invoke Immanuel Kant's attack on the fork by his introduction of the synthetic a priori. Or if it is, it is to render the idea of the synthetic a priori down in part to an unexpectedly mundane consideration. Hume's dichotomy is a false one because
any induction, apart from a facile simple enumerative one, will involve broader reaches of our presumed knowledge. It will combine a variety of implicit or explicit acts of measurement, each of them fallible and uncertain because of the fallible and uncertain theoretical assumption or assumptions that it employs. But the induction that combines these various implicit or explicit acts of measurement will be thus synthetic, and will prioritise various theoretical judgements that are already in place to the evaluation of the empirical evidence at hand. Hume is totally unprepared to acknowledge this kind of synthetic inference not only because of his fork, but also because acts of measurement would be impossible if every theoretical assumption had to have the simple logical structure 'All Fs are G' or 'X per cent of Fs are G.'
6. Hume fails to understand the logical form of measurement inferences.
Hume's discussion sets a trap for the unwary, so that whoever falls into it becomes either a sceptic like Hume, or at least a hypothetico-deductivist. But the hypothetico-deductivists likewise fail to understand the logical form of measurement inferences. Their position starts as a concession to Hume and fates them to follow Hume into his conundrums about induction. Hypothetico-deductivists have the grossly inadequate conception of the role of measurements in science, according to which measurements produce a merely elementary instance of a theory whose only logical function relative to theory is to test it. Some, the 'critical' hypothetico-deductivists, such as Popper, believe that passed tests in no way elevate the probability that a theory is true. Other, 'inductivist' hypothetico-deductivists, believe that the successful passing of tests can confirm a theory, elevating the probability that it is true. Either way, hypothetico-deductivists insist that the logical connection between theory and evidence is from the theory (i.e. from the level of the general) to (particular) empirical predictions. They contend that there is no logical path from evidence (which is particular) to theory (which is general). They are thus quite evidently blind to the actual logical form of acts of measurement. For every measurement inference explores a logical path that is precisely from something empirical and particular to a general, theoretical, conclusion. Such an act of measurement of course depends as well on a host of theoretical background assumptions, for it is these that direct us to the salience for further theory of the empirical phenomenon in question. In order to appreciate the actual logical moment of individual acts of measurement one must not fall under the sway of hypothetico-deductivism. Two factors in philosophy of science pedagogy predispose philosophers to embrace hypothetico-deductivism, however. First of all, philosophers of science tend almost without exception to illustrate theories and theoretical laws using logically simple generalisations such as 'All Fs are G' or 'X per cent of Fs are G.' Evidently philosophers seldom attempt to estimate how many logical quantifiers it takes to render logically explicit the content even of a simple law of nature such as Newton's second law, F = ma. (To do so illustrates that theoretical thinking in science is typically exquisitely rich from the vantage point of logic.) The second is that both Hume's fork and Hume's problem are easy to teach, and they seem to most philosophy teachers especially fetching examples of a philosophical insight.
The easiest way to invite students to move on from these supposed insights of Hume is to suggest that there is after all a logical function for evidence in relation to theory – but to suggest (falsely as it happens) that this function is purely critical, to test the theory. This suggestion is not only false, but in relation to the supposed problem of induction is also to no avail, as is sharply illustrated for us by the way that Popper's anti-inductivist philosophy notoriously cheats on itself, and thereby fails (a problem for Popper which I addressed in an earlier article). The falsificationist model of science, as conjectures and refutations, or guesses and tests, is untenable, and represents no solution at all to the question how science is possible. Yet philosophers of science standardly convince themselves to be hypothetico-deductivists, and thus to follow Popper at least to the extent of thinking that the only logical path is from theory to the evidence. This is, however, in effect a way of following Hume into his errors.
7. Hume sets us to considering an illusory issue: how there could be any warrant for the first-ever theoretical inference.
Hume might reply as follows to my insistence that a considered view will always be a measured one, and thus that measurement inferences are often used. Since such inferences themselves rely on theoretical assumptions, it remains to explain how there can be a warrant for those. One is chased by this consideration ultimately to the question how the first-ever theoretical inference can have been warranted; and it is of course impossible that the first-ever theoretical inference could have been made in a measurement, or measured, way. Thus the suggestion would be that until we explain how we might have warranted the first-ever theoretical contention, we have no way of explaining how we might have warranted any theoretical contention whatsoever. The fact is, however, that human cognition is always already rich with theoretical contentions. The exercise of warranting any new belief depends on this being so. Thus, the supposed cognitive task of explaining how we might have warranted the first-ever theoretical contention is ill conceived and actually irrelevant. For example, it can be argued, following Kant, that a basic condition of the possibility of our cognising at all is that, as active subjects of cognition, we possess an intuitive notion of continuity. That is, a basic condition of the possibility of my cognising at all is that I know intuitively (from my synthetic apprehension of my own agency, which requires that I endure through time) what it is for an extension, viz., my own temporal extension, to be gapless. Arguably this rudimentary intuition of continuity is all I need if also I am to know intuitively what it would be for a point to move continuously in a space, describing a gapless line. Yet such an apprehension is from a logical standpoint vastly rich. Indeed it took people over two thousand years of very concerted theoretical endeavour to produce a full articulation of the meaning of continuity. But the success, eventually, by Augustin-Louis Cauchy and others, in producing such an articulation, was simply to bring fully into view in all its logical richness a theoretical idea that is innate, in the sense that my possessing it stands as a necessary condition of the possibility of my cognising at all.
This illustrates that for as long as I have been a thinker at all my thinking has (in the slogan of some non-analytic philosophers) 'always already' been theoretical in some seriously rich respects. So much the worse then, for Hume's challenge to us to consider how the first-ever theory was warranted.

resources


Coastal Explorer

NZCoast is a website, incorporating Coastal Explorer, that has been established by NIWA as the portal for information relevant to the New Zealand coastal environment and its associated hazards. Coastal Explorer is a great learning tool, as Terry Hume, Doug Ramsay, Ude Shankar and Darcel Rickard explain:

Introduction
NZCoast is a website that has been established by NIWA as the portal for information relevant to the New Zealand coastal environment and its associated hazards. The primary aim of NZCoast (http://www.naturalhazards.net.nz/tools/nzcoast/home) is to give resource management agencies – such as regional and district councils – robust, high-level knowledge and tools to inform decision-making, manage coastal hazards, safeguard lives and property, and provide the public with educational information and resources. A first-time user of the site will see that NZCoast provides a range of services via links to these topics:
• Coastal Explorer: a tool that allows the user to display information about the physical environment on maps and photographs and have access to information about those environments.
• Tools and Visualisations: a function that links the user to the NIWA tide forecaster and wave hindcast maps.
• Learn: a function that provides fact sheets, links, terms and definitions, and relevant references.
At a basic level, the NZCoast website provides answers to FAQs like "where does sand come from?" and "which beaches are dangerous to swimmers?" At a more detailed level, the origin of different beach types and how they function is explained.

Know your coast with Coastal Explorer
Coastal Explorer is underpinned by a coastal classification and GIS (Geographic Information System) database. It classifies shores, maps where different environments occur and identifies hazards (e.g. coastal erosion and rip currents).

The information is provided in maps, data, images, references and models, which combine to show the diverse coastal environments which occur, provide descriptions of how different coastal environments function, and the hazards associated with different types of coastline. The geographic coverage includes the New Zealand mainland and offshore islands at a basic scale of 1:50,000. We have started by classifying and mapping the open coast sandy and gravelly beaches; the ultimate aim is a seamless mapping of the entire New Zealand coastline, incorporating also estuary/harbour shores and rocky shores. Coastal Explorer will, for the first time, ultimately provide an electronic map of the entire coastline in a nationally consistent scheme. It will enable rapid interrogation and extraction of information so that sections of the coast can be compared and analysed on the same basis. The first step in building Coastal Explorer was to create the coastal classification scheme and mapping procedures, during which we used expert panels including regional council staff; knowledgeable locals; university staff; and consultants. Information was mined from various sources including 1:50,000 topographic maps; aerial photographs; New Zealand Land Resources Inventory (NZLRI); the National Land Cover Data Base (LCDB); the New Zealand tidal model; wave hindcast models; RNZN Hydrographic charts; and numerous publications and reports. Building the database from this mixture of land and marine maps proved to be a very large task, as the maps came in paper and electronic formats and at differing scales. Importantly the process is revealing ‘blanks on the map’ and where more information is needed.

Entering Coastal Explorer
Entry into Coastal Explorer launches a map of New Zealand and tools that enable the user to navigate around New Zealand, zoom into parts of the coast for more detail, and select and display various switchable layers of information (Figure 1). A split screen provides legend information, and clicking on various attributes in the legend brings up relevant information and definitions.

continued on page 41


Figure 1: Screen capture from Coastal Explorer showing locations of beaches in central New Zealand which are classified by type and which have beach type report cards.

Figure 2: Screen capture from Coastal Explorer showing report card for Marfell Beach near Lake Grassmere. This is a steep reflective mixed sand/gravel beach that is safe for bathing when the waves are small, but when the wave height exceeds 1 metre strong swash and backwash and dumping waves make it hazardous for swimming.


resources

all about water

by Melva Jones

Throughout the year, National Library’s School Services librarians receive requests from schools on the topic of Water. And what a huge topic this is! So, staff often has difficulty deciding what to send out if teachers give no further details. Here are some great books about water:

The water cycle
While we have many books about the water cycle, there is a great series from PowerKids Press entitled 'The water cycle.' This series consists of four titles, all by Isaac Nadeau and suitable for all levels from primary up: Water in the atmosphere; Water in glaciers; Water on the move; and Water on the ground. Another great series, 'Earth Cycles' from Macmillan Library, is written by Cheryl Jakab; one of the books is entitled The Water Cycle. This series has a more environmental focus, with ideas on how to help protect Earth's cycles, and is suitable for senior primary to junior secondary.

Water conservation
'What if we do nothing?' is another series with a conservation theme, suitable for a senior primary and junior secondary audience. Titles in this series look at the causes and effects of global problems and suggest solutions. Earth's water crisis (by Rob Bowden) covers the world distribution of water, unsafe water supplies, factors that can contribute to a water shortage, and possible safeguards and actions that may be taken by individuals. Macmillan Library has also published a series called 'A water report', which includes six titles by Michael and Jane Pelusey. These cover the availability of fresh water, how we use and manage water, recycling water and water conservation. This is a well presented series which could be used from primary level and above.


Animals and water
Bobbie Kalman is an author who is producing a huge range of science titles for junior primary level students. These are clearly and simply written, and well illustrated with sharp, interesting photographs. And there is one about water too: Living things need water.

Practical science activities with water
There are a lot of great books coming onto the market, all of which provide ideas for practical science activities. Children's Press has a new series, 'Experiments with science', suitable for primary and intermediate students. The books in this series have experiments that are simple and safe, and most use easy-to-access materials. For example, in Just add water: science projects you can sink, squirt, splash, sail, the reader is invited to have fun with science. Each experiment is accompanied by a scientific explanation of what is happening. A slightly more senior series, 'Science alive' by Crabtree, covers most aspects of science, including one title entitled Water. In this series, double-page openings alternate between experiments and information about scientists and their discoveries. The science of water, by Steve Parker, from Heinemann's 'Tabletop Scientist' series, gives clear, simple instructions to enable students to explore some of the properties of water.

The Schools Collection
The Schools Collection also has material on the different states of water: rain; floods (and droughts); rivers and oceans; and floating and sinking. All of these are popular topics and used by schools throughout the year. Teachers may borrow up to thirty items, including video/DVD if available, and all items are issued for five weeks.

Coastal Explorer, continued from page 40
The information can be overlaid on Google Maps, satellite images, or terrain, which helps the user orientate themselves with respect to features they know such as towns, roads, or harbours and river mouths. The layers of information are drawn as lines of information about the shore or identified as points on the coast. Exposed and sheltered parts of the coast are mapped. Foreshore sediment types are identified as various combinations of mud, sand and gravel. The land backing the beach (hinterland) is classified as low-lying, wetlands, rising ground or cliffed coast. Coastal landforms are classified as various types of beach ridge and barrier dune systems. All these features need to be considered when making assessments of local susceptibility to hazards.

Be safe at the beach
Coastal Explorer also provides a classification of beach types and beach hazards. This classification was developed in collaboration with the University of Sydney and Surf Life Saving New Zealand (SLSNZ). It groups New Zealand beaches into fourteen types on the basis of their wave, tide, beach morphology and sediment characteristics.

There is a beach hazard rating associated with each beach type for modal (most commonly occurring) wave conditions. The beach hazard rating takes into account hazards such as rips, surf zone currents, deep water nearshore and wave conditions. About two hundred and seventy beaches have been visited and classified during the process of developing and testing the classification. Coastal Explorer displays this information as beach report cards (Figure 2) which show a conceptual model of the beach, images of the beach, beach activities and facilities, and a scale showing how the hazard rating changes with changing wave conditions. These beach report cards provide a generalised risk assessment tool that is part of SLSNZ’s National Life Saving Plan. With the methodology now developed, NIWA and SLSNZ plan to classify further beaches this summer and further develop the database. For further information contact: t.hume@niwa.co.nz NIWA would like feedback about how this web site is being used so that they can adapt it to better inform teaching and learning programmes and other needs. Contact t.hume@niwa.co.nz
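For classes that want to work with this kind of information computationally, a beach report card can be modelled as a simple record. The structure below is a hypothetical simplification for teaching purposes, not NIWA's actual data schema, and the example values are invented:

```python
# A hypothetical, simplified stand-in for a Coastal Explorer beach report card.
# Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class BeachReportCard:
    name: str                  # e.g. "Marfell Beach"
    beach_type: str            # one of the fourteen classification types
    sediment: str              # e.g. "mixed sand/gravel"
    hazard_by_wave_band: dict  # hazard rating for bands of wave height

    def rating(self, wave_band: str) -> int:
        """Look up the hazard rating for a given wave-height band."""
        return self.hazard_by_wave_band[wave_band]

marfell = BeachReportCard(
    name="Marfell Beach",
    beach_type="reflective",
    sediment="mixed sand/gravel",
    hazard_by_wave_band={"waves < 1 m": 3, "waves > 1 m": 7},  # invented ratings
)
print(marfell.rating("waves > 1 m"))
```

Students could extend such a record with the conceptual model, facilities and activities fields shown on the real report cards, or use it to compare hazard ratings across their local beaches.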




justforstarters...


engineers and food research

by Ken Morison, Department of Chemical and Process Engineering, University of Canterbury
The general public does not often associate chemical engineering with food processing, although process engineering has more obvious connections. Graduates in chemical and process engineering are often employed by the food industry to bring engineering expertise to the design, operation and optimisation of food processes. Below are some details of what chemical and process engineering researchers – both at the University of Canterbury and elsewhere – have been up to.

Drying fruit juice and stickiness
The NZ dairy industry is good at spray-drying milk to produce powder, but drying apple or orange juice is a sticky problem. Because drying is so fast the crystals cannot grow, making the spray-dried juice like fine moist toffee; in fact there is no product, as it all sticks to the dryer walls and pipes. Graduate student Kloyjai Cheuyglintase, from Thailand, thought that mixing the juice with carrot fibre (a by-product of carrot juicing) would help, and indeed it did when used to dry apple juice. The carrot fibre seems to impose a structure on the sugars in the juice so that they are less sticky. The product was a 'friable' powder without an obvious carrot flavour, and it was readily reconstituted.
A key tool in this work was the measurement of the glass transition temperature of the products. Toffee, for example, is a glass; it is so viscous that it cannot move. As it is heated up it starts to soften and become sticky. The temperature at which this happens is known as the glass transition temperature. If the toffee contains more water, it gets sticky at a lower temperature. That's one reason we cook up toffee so much – to drive off as much moisture as possible to form a glass at room temperature. The same concepts are used within the dairy industry to ensure that the milk powder does not get too sticky. Some powders contain a lot of lactose, and at certain moisture levels and temperatures they get sticky too.
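The glass transition idea lends itself to a small calculation. One widely used empirical relation for moisture-plasticised sugars is the Gordon–Taylor equation; the sketch below applies it with rough, literature-style figures for a sucrose–water mixture, so the parameter values are assumptions for illustration rather than measured values from the Canterbury work:

```python
# Gordon-Taylor estimate of how moisture depresses the glass transition
# temperature (Tg) of an amorphous sugar. Parameter values are rough,
# literature-style figures for sucrose and water (assumptions).
def gordon_taylor(w_water, tg_solid=335.0, tg_water=138.0, k=5.0):
    """Return Tg (K) of the mixture for a given water mass fraction."""
    w_solid = 1.0 - w_water
    return (w_solid * tg_solid + k * w_water * tg_water) / (w_solid + k * w_water)

for w in (0.00, 0.02, 0.05, 0.10):
    tg_c = gordon_taylor(w) - 273.15
    print(f"{w*100:4.1f}% water -> Tg approximately {tg_c:5.1f} deg C")
```

The trend rather than the exact numbers is the point: a few per cent of residual moisture is enough to pull the glass transition down towards room or storage temperature, which is exactly when powders start to stick.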

Keeping milk evaporator wet


On a windy wet day rainwater might stream down your windows. With a little rain, little rivers (or rivulets) might form, but with a heavy downpour there might be enough water to maintain a complete film of water over the entire window. The same phenomenon occurs inside a milk evaporator as milk flows down the inside of steam-heated tubes. The conditions required to achieve a complete film are the subject of research at the University of Canterbury. If a complete film is not formed, the milk can dry off and build up. The minimum flow rate required for a complete film depends on the viscosity, density, surface tension and contact angle of the liquid. The contact angle is the angle of the intersection between the surface of a liquid droplet and a solid surface; it depends on the nature of the surface as well as the liquid. The minimum flow rate is also very dependent on how the liquid gets spread out at the top of the surface. This project is an example of how the fundamental properties of a liquid and a surface impact on the successful design of industrial equipment. The results showed that the most difficult part is to get the milk to spread out at the top of evaporator tubes.
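The dependence on viscosity, density, surface tension and contact angle can be made concrete with one commonly quoted film-breakdown criterion, often attributed to Hartley and Murgatroyd. Treat the sketch below as indicative only: the milk property values and the tube diameter are assumed figures, not results from the Canterbury project.

```python
# Hartley-Murgatroyd style estimate of the minimum wetting rate (mass flow per
# unit tube perimeter) needed to keep a falling film intact. Property values
# below are assumed, ballpark figures for milk, purely for illustration.
import math

def minimum_wetting_rate(mu, rho, sigma, theta_deg, g=9.81):
    """Return Gamma_min in kg per metre of perimeter per second."""
    return 1.69 * (mu * rho / g) ** 0.2 * (
        sigma * (1.0 - math.cos(math.radians(theta_deg)))
    ) ** 0.6

gamma_min = minimum_wetting_rate(mu=1.5e-3, rho=1030.0, sigma=0.045, theta_deg=60.0)
print(f"Minimum wetting rate: {gamma_min:.2f} kg/(m s)")

# For an assumed 48 mm inside-diameter tube, the corresponding minimum feed:
perimeter = math.pi * 0.048
print(f"Per tube: about {gamma_min * perimeter * 3600:.0f} kg/h")
```

In practice the design of the liquid distributor at the top of the tube matters at least as much as any such estimate, which is exactly the difficulty the Canterbury work identifies.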

Fouling and cleaning
Just boil some milk in a pot and then wash it clean, and you will get an idea of the problems faced by the dairy industry every day. Milk proteins will happily stick to any surface with a temperature of more than 70°C. When milk is heated, high surface temperatures cannot always be avoided, so during pasteurisation and evaporation some of the milk always sticks to the stainless steel surfaces. After ten to twenty hours the plant must be shut down and cleaned with sodium hydroxide solution and acid to remove the milk deposit. Both fouling reduction and faster cleaning are the subjects of research in many universities. At Canterbury, we have been trying to apply a single layer of polyethylene glycol molecules to prevent the proteins sticking. Sometimes it works, but not for all conditions. To enhance cleaning, we have looked at the effect of temperature, concentration and type of cleaning solution on the cleaning rates. As others have found in the past, higher concentrations are not always better. Cleaning solutions with more than about 2% sodium hydroxide just turn milk protein into a sticky gel that is slow to remove. Lower concentrations are more effective, less costly and easier to treat for disposal.
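As a classroom-scale aside (not part of the Canterbury research described above), the make-up arithmetic behind those concentrations is simple; the 50% w/w caustic stock and the 1000 kg wash tank below are assumed figures chosen only to show the calculation.

```python
# How much concentrated caustic stock is needed to make up a cleaning solution
# of a target strength? The 50% w/w stock and 1000 kg batch are assumptions.
def stock_required(batch_mass_kg, target_fraction, stock_fraction=0.50):
    """Mass (kg) of stock solution in a final batch of batch_mass_kg at the target w/w NaOH."""
    return batch_mass_kg * target_fraction / stock_fraction

for target in (0.005, 0.01, 0.02):
    print(f"{target*100:.1f}% NaOH in a 1000 kg batch needs "
          f"{stock_required(1000, target):.0f} kg of 50% stock")
```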

Potatoes
Most of us just eat potatoes, but engineers like to write equations to model them. Consider the process of deep-frying potatoes. While you are waiting for the hot chips, inside the potatoes there are transfer of heat, flow of water, vaporisation of water, flow of water vapour, flow of oil, and reactions that are cooking the potato. All these processes are influenced by each other. Some chemical engineers from the USA have put together a mathematical model to describe it all (Food and Bioproducts Processing, v85, p209). Why did they bother? Potato chips are well up on the list of foods that are high in fat, and people have also been concerned about reactions leading to the production of acrylamide, which might be carcinogenic. Here's one of the many equations they used; it shows that there are mathematics challenges in engineering (but equations can be very effective sleeping pills too!):

$$\frac{\partial}{\partial t}\left(\phi\, S_g\, \rho_g\right) + \nabla\cdot\left(-\rho_g\,\frac{k_{r,g}\,k_{in,g}}{\mu_g}\,\nabla P\right) = \dot{I}$$
The process of mathematical modelling forces engineers to question their understanding of the processes involved; some would say that if we can't model it, we don't understand it. One of the conclusions from this type of work is that potato chips do not get fatty until they are removed from the cooking oil. When the water vapour inside the chips cools down, it condenses and sucks in fat. Draining chips while they are hot is therefore essential; draining them in a vacuum would be even better.

For information about careers in food technology visit: www.nzifst.org.nz
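The 'condensation sucks in fat' conclusion can be illustrated with a rough calculation of my own (not the authors' model): if the pore space of a chip leaving the fryer is filled with steam near atmospheric pressure, then as the chip cools the steam condenses until its pressure falls to the saturation pressure at the new temperature, and the pressure difference drives surface oil into the pores. A minimal Python sketch, assuming pure steam in the pores and a standard Antoine-type fit for water's saturation pressure:

# Rough illustration of oil uptake in cooling chips (my sketch, not the
# authors' model): pores full of steam at ~100 C start near atmospheric
# pressure; as the chip cools, the steam pressure collapses towards the
# saturation pressure, leaving a suction that can draw in surface oil.

def p_sat_water(t_celsius):
    """Approximate saturation pressure of water in Pa (Antoine fit, 1-100 C)."""
    a, b, c = 8.07131, 1730.63, 233.426      # constants give pressure in mmHg
    return (10 ** (a - b / (c + t_celsius))) * 133.322

ATMOSPHERIC = 101_325.0                       # Pa

for t in (90, 70, 50, 30):
    suction = ATMOSPHERIC - p_sat_water(t)
    print(f"at {t} C the pore pressure is about {p_sat_water(t)/1000:.0f} kPa, "
          f"a suction of about {suction/1000:.0f} kPa")

Under these assumptions the suction is already most of an atmosphere by about 50°C, which is why draining the chips while they are still hot matters so much.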



biology

NZ biology students on world stage
by Jacqui Bay

Attendance at international conferences is an essential part of a scientific career, and one that New Zealanders have been getting a taste for recently. From Queensland to Mumbai, opportunities have abounded for young biologists who have worked hard to earn the right to represent New Zealand, won international respect for their contributions and gained insight into the world of professional scientists.

NZ Biology Olympiad team
The New Zealand International Biology Olympiad team successfully competed against the top students from fifty-five nations at the International Biology Olympiad held in India recently. Ben Paterson (Kings’ College) won a silver medal, and Amanda Deacon (Burnside High School), Chloe English (Christchurch Girls’ High School) and Jessica Shailer (Palmerston North Girls’ High School) each brought home a bronze medal. The challenging competition tested both their theoretical and practical biological knowledge in a series of laboratory tasks and exams that covered topics as diverse as the behaviour of Siamese fighting fish and advanced genetics. The students thrived on the academic challenge and were inspired by meeting other talented young biologists from around the world. They were accompanied by their team leaders, Drs Angela Sharples (Rotorua Girls’ High School) and Steve Chambers (Unitec Institute of Technology), and were assisted with funding to travel to Mumbai through the Talented School Students’ Travel Award, administered by the Royal Society of New Zealand.



Biofutures 2008.

Australasian Brain Bee Challenge
Stephen Mackereth (Kings’ College, Auckland) and Kieran Bunn (Logan Park High School, Dunedin) represented New Zealand at the Australasian Brain Bee Challenge finals held at the Queensland Brain Institute, having both won through local events in Auckland and Otago in which fourteen hundred students competed for the right to represent New Zealand on the world stage. Stephen won his way through two rounds of neuroscience questions, an anatomy exam and a doctor-patient diagnosis test, impressing the judges with his outstanding knowledge of brain function and disease and winning the NZ title and the right to compete at the International Brain Bee Challenge in Baltimore in 2009.

The triumphant New Zealand team. L to R: Ben Paterson (Kings’ College), Jessica Shailer (Palmerston North Girls’ High School), Chloe English (Christchurch Girls’ High School) and Amanda Deacon (Burnside High School).

Biofutures
A contingent of twelve New Zealand students joined with students from throughout Australia at Biofutures 2008. During the conference, students visited three different universities in Brisbane and the Gold Coast, and had the opportunity to hear from leading researchers. Jessica Bird, a member of the NZ group, was inspired by the experience. “We heard from scientists who are searching for treatments for cancer, and others who inspired us about biotechnology and biomedical science and its role in the future.” The students also participated in experimental work and enjoyed tours of many leading research laboratories. The week concluded with a mock summit for the United Nations where students represented different countries and spent the day debating the ethics of stem cell research. The students all agreed that the week was an exciting and inspirational glimpse into the professional world of scientific research.

Australasian Brain Bee Finalists. Top left: Casey Linton (QLD); Yasmin Soliman (WA); Hayden Lee (ACT); Katie Dyke (TAS). Bottom left: Jayson Jeganathan (NSW); Kieran Bunn (South Island, NZ); Jack Lowe (SA); Stephen Mackereth (North Island, NZ); and Stephanie Mercuri (VIC).




chemistry


websites, and other matters
by Suzanne Boniface

Useful Websites
Teachers from NZ attending international science education conferences in the last couple of months have suggested the following useful websites:

1. www.chemheritage.org It is well worth spending time exploring this website of the Chemical Heritage Foundation. Available online is a range of classroom activities, articles from the Chemistry Heritage News magazine, artworks and photos of exhibitions. Of particular interest to teachers and students will be the stories about the history of the development of many of the chemical ideas and useful chemicals found in today’s society. Molecular Milestones documents stories from pre-chemistry, early chemistry and modern chemistry, with titles such as ‘Aspirin and heroin: one man invents two pain relievers in two weeks’, ‘Brown teeth have fewer cavities’, ‘Anesthesia: making surgery more bearable’ and ‘Tetra-ethyl lead: the end of an era for a well-known molecule’. There are timelines and accompanying information related to ‘matter and molecules’, giving a broad overview of achievement in the chemical and molecular sciences from the chemical revolution of the eighteenth century to today; ‘molecules of life’, which introduces organic chemistry, biochemistry, respiration, photosynthesis, pharmaceuticals, biotechnology and genetics, diseases and viruses and more by meeting some of the scientists who have made the path-breaking discoveries that have enabled us to live longer and healthier lives; and the development of ‘polymers and nanotechnology’. Online tools for teachers provide more stories about the people behind the discoveries, along with useful teaching modules and webquests.

2. www.parsel.eu The PARSEL project (Popularity and Relevance of Science Education for Science Literacy) aims to translate, test and disseminate best-practice modules from across a number of European countries to improve scientific literacy. Modules are available for science, chemistry, physics and biology lessons. They are usually designed for four lessons and are written for Years 9 to 13. Each module begins with a scenario intended to be relevant to the lives of the students, which leads into inquiry-based problem solving and socio-scientific decision making. Some of the translations are a bit strange, but there are useful ideas that address the nature of science achievement objectives in the new curriculum.


3. Specifically for chemistry teachers:
http://www.rsc.org – the Royal Society of Chemistry website, which has excellent links and activities.
http://www.rsc-oilstrike.org/ – Oil Strike is a good game.
http://www.chemistryandsport.org – Chemistry and Sport has links to sport as a context for chemistry.
http://www.chemistryteachers.org – the Chemistry Teachers website has a great selection of teacher resources.
http://www.practicalchemistry.org – the Practical Chemistry website has many practicals, well sorted, with laboratory write-ups.
http://www.chemit.com/ – the ChemIT website.
http://www.chemsoc.org/networks/learnnet/ptdata – the Periodic Table of Data website is excellent for periodic table data, graphing trends etc.
http://www.presentingscience.com/thermo – this thermodynamics development site has interesting data and interactives for a range of equilibria, where variables can be manipulated to introduce Le Chatelier’s principle and consider changes in equilibrium constant and temperature.
http://acdlabs.com/ – the Advanced Chemistry Development site has a link to a great free downloadable chemical drawing and molecular modelling programme: http://acdlabs.com/download/chemsk.html

Food for Thought
Some interesting facts from an international chemistry education conference:
• American youth spend more time watching television than in school
• by 2010, 90% of the world’s scientists will live in Asia
• 15% of all US graduates major in science, engineering or mathematics, compared with 38% in South Korea, 47% in France, 50% in China and 67% in Singapore
• surveys of US students suggest that they avoid science because it is too difficult (44%), not interesting (17%), not taught in an engaging way (16%), or could hurt their grade point average (10%).

Chemistry Olympiad Team Wins Bronze



The 2008 New Zealand Chemistry Olympiad Team (L to R): Tim Vogel, Emily Adlam, Wenyi Yi, and Sava Mihic.



physics

IYPT is not just for geeks
by Kerry Parker

Keen to become involved in the International Young Physicists’ Tournament (IYPT), Kent Hogan of Onslow College, Wellington, invited some sixth-form physics students to ‘have a go’ at the problems for the 2008 competition. That was late October 2007, and one problem, ‘spinning ice’ (see below), quickly captured the students’ fascination, leading them on a journey of discovery and exploration of otherwise unknown aspects of physics. A member of Kent’s team was selected for the NZ IYPT squad that won silver at this year’s competition in Croatia.

Spinning ice: Pour very hot water into a cup and stir it so the water rotates slowly. Place a small ice cube at the centre of the rotating water. The ice cube will spin faster than the water around it. Investigate the parameters that influence the ice rotation.





The IYPT 2009 Problems – two examples
1. Skateboarder: A skateboarder on a horizontal surface can accelerate from rest just by moving the body, without touching external support. Investigate the parameters that affect the motion of a skateboard propelled by this method.
2. Electromagnetic motor: Attach a strong light magnet to the head of a steel screw. The screw can now hang from the terminal of a battery. Completing the circuit by a sliding contact on the magnet causes the screw to rotate. Investigate the parameters that determine the angular velocity of the screw.

So what is the IYPT?
The IYPT originated in the former USSR and was aimed at fostering scientific research and international communication in physics. Now dubbed the ‘Physics World Cup’, it is an annual event that attracts teams of five secondary school students from twenty-eight countries. In preparation for the event, theoretical and experimental research problems are released the year prior to the competition, and each team brings to the competition its results for nineteen of these problems. (The 2009 IYPT problems have now been released.) At the competition, teams must present and defend the validity of their solutions in what are called ‘Physics Fights.’ Juries consisting of physicists and physics teachers rate both the teams’ reports and the discussion they generate. New Zealand was first represented in 2003 by a team from King’s College at the competition in Uppsala, Sweden. Since then, the competition has gained huge support in NZ, with inter-school and, more recently, regional tournaments. Last year NZ won a silver medal in Korea, its best performance to date. This year the competition was held in Croatia, and after reaching the finals for a second year in a row (the only country to achieve this last year), we again won a silver medal. The IYPT now has the support of the New Zealand Institute of Physics.

Te Pātuki o ngā Hinengaro Ahupūngao
Selection for the 2009 NZ squad to attend the IYPT in Beijing will take place at Te Pātuki o ngā Hinengaro Ahupūngao (the battle of the intellect in physics). Following a similar format to the IYPT, teams must bring and defend their solutions to the selected nine problems at one of the regional tournaments to be held in Auckland, Wellington and Christchurch on Saturday 7 March 2009. Each tournament consists of a series of ‘Physics Fights’ in which teams of three students challenge each other to present their solutions. Each ‘Physics Fight’ follows a strict protocol and is judged by a panel of teachers and physicists: after a twelve-minute presentation, the challenging team reviews their opponents’ report and debates the validity of the solution.

Back to the spinning ice
After using coloured dye and analysing the video footage, the Onslow College students reasoned that the downwards convection of melting water from the ice caused an acceleration of the ‘tornado’ in the spinning water. Kent was staggered at the way the competition changed his students, and amazed at how much of their learning fitted the Nature of Science strand in the new NZ curriculum: “They may not be the cleverest students, but they became the best physicists I have ever taught.” Although Onslow College didn’t win the NZ tournament, one of their team members, Graeme Finney, was selected to represent New Zealand in Croatia, and his communication skills, especially debating, helped the NZ team secure a silver medal at the 2008 IYPT.

How to get involved in IYPT
Begin by showing some of next year’s problems to keen Year 11 and 12 students, but remember there is no ‘right’ answer, and not everything can be found on the Internet. Also invite other staff or scientists to help you. The best students for this tournament are those who have not lost their natural curiosity, are good lateral thinkers, and are able to take the initiative to explore these ideas. It’s not about being clever, just curious and interested. The role of the teacher is simply to be a helper, and to ensure that from time to time the students can access a lab or are given an overview of concepts not covered in their NCEA course, such as surface tension. One of the key benefits of running this competition in your school is that it will take both you and your students out of your comfort zones, because there is no teacher guide or answer book! And should you want some collegial support and guidance, both Paul Haines (Kings College) and I are very willing to help!
For more details, FAQs and videos showing ‘Physics Fights’ in action visit: http://www.iypt.org.nz/
For further information contact: Kerry.parker@correspondence.school.nz




primary science



concepts of science through picture books
by Mary Loveless

Many primary teachers approach their science teaching programmes by identifying a topic title such as forces, electricity or friction. This approach identifies a broad label but does not give any indication of the intended science learning, and often develops into a teaching unit that is a collection of activities loosely linked to the topic. Such a label also does nothing to enthuse and motivate students about the exciting world of physics, chemistry, Earth science or biology. An alternative approach to planning is to identify the big science ideas or concepts and devise a teaching unit to explore and investigate them. The teaching and learning focus is then centred on a specific concept, and the selection of suitable activities and the assessment of student learning are simplified. The process of exploring the science ideas in depth also becomes easier, without the temptation of going off on a tangent.

Exploring and sharing the wonderful world of picture books with students can provide an exciting introduction to many science concepts. Picture books also provide a framework that enthuses and motivates students to ask questions, investigate their ideas and suggest solutions. That is, it uses literacy to communicate and develop science understanding. One book that can be used to support this approach is The Lighthouse Keeper’s Lunch, a delightful story about Mr and Mrs Grinling and the problem of delivering Mr Grinling’s lunch while he is tending the light. Mrs Grinling dispatches his delicious lunch across a wire between the cottage and the lighthouse, but the clever seagulls have discovered this wonderful source of food that appears regularly every day and raid the basket, much to the chagrin of Mr and Mrs Grinling. The big question is: how to outwit the seagulls? The story provides a wonderful opportunity to explore moving things – forces; pushes and pulls; friction; and circuits – and in the process to ensure that the lighthouse keeper gets his delicious lunch. This teaching strategy uses the medium of picture books to motivate students to ask questions, pose possible solutions and spark their curiosity about the world of physics.

The Lighthouse Keeper’s Lunch lends itself to the concept that energy is all around us, and to the science ideas:
• energy has many forms
• energy can be changed from one form to another
• when forces do work, energy changes from one form to another
• relationships exist between the energy source and its effect.
Reading the book raises some interesting science ideas and questions about the science involved with lighthouses and rope systems.

Some questions that students might ask could be:
• How does the light in the lighthouse work?
• How far does the light shine?
• How tall can you build a lighthouse?
• Would the light in the lighthouse go out if the electrical circuit was covered in salt from the sea spray?
• What is a pulley? Could Mr Grinling use a pulley to travel across to the lighthouse to fetch his lunch?
• How could Mr Grinling get messages back to the cottage?
• How could you get the basket back to the cottage along the wire?
• Could we use a pulley system to send messages to each other?
• And … there are many more.
It is also important to link the science to some real-life examples where wheels and pulleys are used, such as:
• gate-shutting systems, such as pool gates
• block and tackle to lift car engines
• sailing tackle to hoist sails on yachts
• closing and opening Roman and Holland blinds
• turntables – rotary cowsheds, record players, microwaves.
Some possible activities to help find out about the science involved in lighthouses – while still keeping the focus on the key science concept that energy is all around us – include a teaching sequence exploring: friction between ropes and pulleys; shapes that give structures strength; the components of a circuit; insulators and conductors; and how electrical energy is transmitted around a circuit. After exploring a variety of activities and investigating their questions, students could record their findings and make suggestions for the question: what is the best solution for outwitting the persistent seagulls? How to share the findings? Because it is all about communicating in science, try digital stories, drawings and diagrams, big books, and transactional texts. Just maybe, Mr Grinling will have to nip back across the wire to collect his lunch himself. So, if we watch closely, we might just see a beaming lighthouse keeper riding the wire high above the sea as he nips across to the cottage to collect his lunch basket full of sumptuous goodies!

Reference
Armitage, R., & Armitage, D. The Lighthouse Keeper’s Lunch. Scholastic Children’s Books. ISBN 0-590-55175-2.



science/PEB

some thoughts …
by Keith Hartle and Jenny Pollock

Space travel seems to be back in vogue, with the prospect of experiencing weightlessness at the upper edge of the atmosphere becoming ever closer, and with talk of manned travel to the moon, and even to Mars. So we are again turning our eyes to space. But have we ever considered that we are in fact all space travellers, all six billion of us? We are all simply travellers on Spaceship Earth, but with one difference – we have no safety ship or escape hatch when things go wrong. In the Science curriculum, the Planet Earth and Beyond (PEB) strand is where students begin to learn about how our planet works. As we implement the new curriculum, this strand must be given some focus in our schools. We need to make sure that it is not relegated to second-tier status and placed in the too-hard basket, with the separate science disciplines of physics, chemistry and biology being seen as more important.

Earth systems science (ESS)
The central big idea in the PEB strand is Earth Systems Science (ESS): the four spheres of our planet − the hydrosphere, biosphere, geosphere and atmosphere − and how they work together to keep our planet in finely-tuned balance. This enables a more integrated approach to planetary concepts and how our planet works. Tying the four spheres together are the major cycles: the water cycle and the carbon cycle. Here, students can begin to understand important planetary issues such as climate change, and in learning about deep time they see how the past can inform the present and the future.

Four key concepts
There are four big processes or concepts that every world citizen must understand if they are really going to be true participants in this planet’s future: the water (hydrological) cycle; the carbon cycle; climate change; and deep time. Although it is ideal to teach these as part of a PEB course, the key ideas can also be taught in Living World, Material World and Physical World topics/units. For example, the carbon cycle can easily be taught in chemistry or biology, the water cycle in biology and physics, and deep time in biology. Concepts such as density and convection can also easily be given a PEB context.

Water cycle
However, all these have to be taught with care. The water (or hydrological) cycle is a case in point, as this cycle affects − and is affected by − human activities. Yet it is important that students develop an appreciation of how this cycle works so that they become aware both of the possible consequences of human activities and of important planetary processes. A good starting point is the NZCER resource on this topic: http://arb.nzcer.org.nz/supportmaterials/science/water_cycle.php

The NZCER website highlights how easy it is for children to develop misconceptions about this all-important cycle, and for teachers not to realize their students might have them. And this may be true for many concepts that we think students have ‘got’ – but haven’t. For example, how many students realize that the amount of water in the world is finite? That energy from the sun drives our weather? That water’s heat capacity lends itself to doing some extraordinary things in terms of heat transfer from the oceans to the atmosphere? That the water cycle is involved in the distribution of heat around the planet through atmospheric and oceanic processes? Or how water and other factors contribute to erosion?
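To make the heat-capacity point concrete for a class, here is a rough order-of-magnitude comparison of my own (the figures are round textbook values, not from the article), written as a short Python calculation:

# How thick a layer of ocean holds as much heat as the whole atmosphere?
# Round illustrative values; this is a back-of-the-envelope sketch.
atmosphere_mass = 5.1e18      # kg, approximate total mass of the atmosphere
cp_air = 1004.0               # J/(kg K), specific heat of air
ocean_area = 3.6e14           # m^2, approximate area of the world's oceans
rho_seawater = 1025.0         # kg/m^3
cp_seawater = 3990.0          # J/(kg K)

atmos_heat_capacity = atmosphere_mass * cp_air                        # J/K
ocean_heat_capacity_per_m = ocean_area * rho_seawater * cp_seawater   # J/K per metre of depth

depth = atmos_heat_capacity / ocean_heat_capacity_per_m
print(f"A surface layer only {depth:.1f} m deep matches the whole atmosphere,")
print("so cooling that layer by 1 degree C releases enough heat, in principle,")
print("to warm the entire atmosphere by about 1 degree C.")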



Carbon cycle
Like the water cycle, the carbon cycle links all four of the Earth’s spheres, and also the science disciplines. Yet it is all too often taught within the separate disciplines, even though it is far more complex than the water cycle and is an ideal topic to teach within PEB. The carbon cycle makes a good interdisciplinary science topic, including some important ‘big ideas’ such as the different reservoirs of carbon, and the fact that the ocean and the ocean-floor sediments act as a giant buffer. This cycle also has many links with our current way of life, namely our reliance on fossil fuels and the link with climate change.
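For teachers who want the chemistry behind the ‘giant buffer’ idea, the key equilibria are those of the standard marine carbonate system (not spelled out in the article, but well established):

\[
\mathrm{CO_2(g)} \rightleftharpoons \mathrm{CO_2(aq)}, \qquad
\mathrm{CO_2(aq)} + \mathrm{H_2O} \rightleftharpoons \mathrm{H_2CO_3}
\]
\[
\mathrm{H_2CO_3} \rightleftharpoons \mathrm{H^+} + \mathrm{HCO_3^-}, \qquad
\mathrm{HCO_3^-} \rightleftharpoons \mathrm{H^+} + \mathrm{CO_3^{2-}}
\]
\[
\mathrm{CaCO_3(s)} \rightleftharpoons \mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}}
\]

Because each of these steps is reversible, added atmospheric CO2 is partly taken up by seawater and, over much longer timescales, by the dissolution of carbonate sediments, which is what gives the ocean its buffering role.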

Climate change and deep time
Climate change is of course a very important present-day issue that we must inform students about. Their understanding can be built gradually as they progress through the school, so that by the senior years students understand that past climate change can inform the present; in doing so they also begin to grasp the concept of ‘deep time.’ Earth’s geological record is a treasure trove of information about past climates, and enables us to build up a picture of life on Earth at those times. This information then gives us a benchmark against which to measure present climate change and its determinants.

Let’s get sharing Do you have any successful ideas and activities that you would be happy to share with us? Please send an electronic version and we will print some of them on this page in future issues. So let’s begin to share our ideas about essential learning and understanding for students. For further information and/or to submit your ideas and activities contact: jenny.pollock@xtra.co.nz




technicians



SciTech 2007

by Raewyn Keene, Co-convener

SciTech is the biennial school science technicians’ conference, and was held at St Peter’s School in Cambridge from 3-5 October 2007. It was organized by a group of Hamilton technicians in association with the Science Technicians Association of New Zealand (STANZ, a standing committee of the NZASE), with administrative support provided by the Bay of Plenty technicians’ group. The purpose of the Conference was to give school science technicians an opportunity to keep up to date with changes in education and to further their professional development, enhancing their ability to support effective teaching and learning in the classroom. Areas identified by the committee as needing to be addressed included:
• HSNO legislation – the focus was on the practical aspects of implementing this legislation, particularly in relation to chemical storage
• curriculum changes – particularly in relation to environmental education
• NCEA – practical assessments and areas such as the microbiology content of Year 11 science.
This was achieved not only through keynote speakers and workshop sessions, but also by allowing time for technicians to network and learn from each other. During the Conference, STANZ convened its first AGM.

SciTech 07 was well attended, with one hundred and twenty delegates, including five from Australia. Although most delegates were from secondary schools, a number also attended from tertiary institutions and middle schools. The Powhiri to welcome delegates, performed by Waikato Tainui representatives, was an exceptional occasion and very much appreciated by NZ and Australian delegates alike. The Mayor of Hamilton, Mr Bob Simcock, formally opened the Conference, and Detective Sergeant Nicolas McLeay was the opening keynote speaker. He gave a riveting address on clandestine drug manufacture in NZ, which was of particular interest to delegates because secondary schools are often targeted for the chemicals and equipment used in the manufacturing process. There were two days of practical workshops covering the physical, material and chemical worlds. Katherine Hicks, of the Royal Society of New Zealand, gave a wonderful presentation on environmental education and EMAP. This session was followed up and enhanced by field trips covering the formation of the Waikato River, and its physical and cultural importance to the Tainui people – all the while cruising on the Waipa Delta paddle steamer. Other workshops designed to support Katherine’s presentation included practical data-logging sessions and plant and rock identification workshops run by Waikato University. Field trips to the Ruakura Research Station and the Hamilton Zoo also enabled delegates to experience the Waikato region.

Delegates were provided with a comprehensive manual of all workshop information, and it was particularly useful to be able to give everyone a copy of the Code of Practice for Schools as Exempt Laboratories, which Nigel McCarter referred to both in his keynote address and in the workshop that followed. Trade displays are an important part of any conference, as they provide an opportunity to learn about new and exciting products available to schools. Science technicians are often expected to do purchasing for the science department, and the individual presentations that trade display holders were asked to give on their products were extremely well received by delegates; we hope this will be a regular feature of future conferences. The theme for the Conference dinner was ‘Represent your region.’ The Waikato mascot Mooloo greeted delegates, the guest speaker for the evening was local TVNZ celebrity Kaye Gregory, and delegates embraced the theme wholeheartedly.

During the inaugural STANZ AGM the following new officers were elected: President: Margaret Garnett (Christ’s College, Christchurch); Past President: Raewyn Keene (St John’s College, Hamilton); Secretary: Annette Hobby (Shirley Boys’ High School, Christchurch); Treasurer: Netta Brown (Taradale High School, Hawke’s Bay); Communication/Database: Robyn Eden (St Margaret’s College, Wellington); Advocacy: Ian De Stigter (Mt Albert Grammar, Auckland); Conference Convener 2009: Beryl McKinnell (Papatoetoe High School, Auckland). The first recipient of the STANZ scholarship was Anita Baines; the scholarship enabled her to attend the Conference. Major sponsors of the Conference were the Ministry of Education, the Royal Society, SciTech NZ Ltd, WSTA, NZASE and Environment Waikato; their generosity contributed to the success of the Conference and must be acknowledged. We would also like to thank Bev Cooper (past President, NZASE) for her advice and support, especially her management of and help with the STANZ AGM and during the planning of the Conference.

In conclusion, SciTech 07 was extremely successful. It is essential to continue to improve the education of technicians in NZ, especially now that all science staff must become familiar with legislation on chemical handling, health and safety, and curriculum changes. NCEA, and the need for technicians to support students’ individual experiments across all the science disciplines at senior level, also tests technicians’ abilities. Conferences therefore become very important and valuable learning opportunities. The SciTech 07 organizing committee sincerely thanks all those who contributed to the success of the Conference, and we look forward to, and promote, the next science technicians’ conference, CONSTANZ, in Auckland in 2009. For further information contact mgarnett@christscollege.com


Biolive 2009: Transformation and Change
5 to 8 July 2009
University of Otago, St David Lecture Theatre Complex
This conference will be held in conjunction with the annual meeting of BEANZ (Biology Educators of New Zealand) and will be hosted by the University of Otago. Nationally acclaimed biological scientists will present keynote speeches on the theme ‘Transformation and Change.’ Conference delegates will be able to participate in a wide range of workshops and fieldtrips in anthropology, biochemistry, botany, marine science, microbiology, physiology and zoology. There will also be a focus on updating current thinking on teaching and learning processes for the 21st century learner.

PROFESSIONAL DEVELOPMENT IN PRIMARY SCIENCE
2009 dates: Dunedin – 14 & 15 April; Christchurch – 16 & 17 April; Wellington – 20 & 21 April; Auckland – 23 & 24 April
For teachers who are motivated and interested in:
• developing active learning strategies to enhance children’s learning
• the importance of providing contextual science experiences: science in a learner’s world
• reflecting on current trends in science teaching and relating it to their own practice
• taking part in practical workshops that explore the theme of the conference
• identifying explicit links between teaching and learning in science education and the key competencies and values

For further information contact the conference convenors: kate.rice@otago.ac.nz or karyn.fielding@otago.ac.nz

CONSTANZ ‘09
The Science Technicians’ Association of NZ Conference 2009, Auckland
‘Earth, Wind and Fire’, 7 to 9 October 2009
This Conference will appeal to all school science technicians, and also to some technicians from tertiary institutions (such as polytechnics).
For further information contact the Convenor, Beryl McKinnell: bemckinnell@papatoetoehigh.school.nz

NZASE Conferences 2009
Primary Science Conference: Dunedin (14 to 15 April); Christchurch (16 to 17 April); Wellington (20 to 21 April); Auckland (23 to 24 April).

Biolive 2009 Transformation and Change, University of Otago, Dunedin Date: 5 to 8 July 2009

ChemEd 09 University of Canterbury, Christchurch Date 5 to 8 July, 2009

NZIP incorporating Physikos 09 University of Canterbury, Christchurch Date: 6 to 8 July, 2009

CONSTANZ 09 Auckland. Date: 7 to 9 October 2009

Physikos ‘09

The 14th National NZ Institute of Physics Conference, incorporating Physikos, the NZ Physics Teachers’ Conference

6-8 July 2009 University of Canterbury, Christchurch Energise your physics teaching with three days of ideas, stimulation and interactions! For further details visit: www.nzip.org.nz

ChemEd 09
‘Chemistry on the Edge’, 5 to 8 July 2009
University of Canterbury, Christchurch
For further information contact: Richard Rendle, Tel: 03 3597275, Fax: 03 3597248, email: rendle@xtra.co.nz


FEED YOUR MIND
EXPLORING NEW ZEALAND

In the September – October issue we look at the big issue of our time, climate change! What will the weather be like in a few decades? How will it affect our flora and fauna? What are we doing about it? Plus we’ve included a double-sided map of the poles—free with every copy!

And while New Zealand could escape the worst effects of warming, other parts of the globe will be less fortunate. We take a look at four continents, Europe, North America, Antarctica and Africa, in a special joint feature with our companion publications.

DON’T MISS YOUR COPY!

SUBSCRIBE TODAY Freephone: 0800 782 436, subscribe online: www.nzgeographic.co.nz or email: subs@nzgeographic.co.nz

